
# Newbie question: Deriving formulae for Neural Nets

SlightReturn · Member, Learner III · Posts: 1

Forgive a question from an amateur enthusiast programmer that may be very stupid.


I have successfully trained a simple Neural Network model in RapidMiner for a Regression problem. I want to derive the model formulae in order to replicate its prediction calculation in another programming environment (Visual Basic).

My network has 10 input variables (+ 1 'threshold' node), 1 hidden layer with 7 nodes (+ 1 'threshold' node), and 1 output variable. All input variables were pre-normalised to the [-1, 1] range. The output variable was not normalised and has values outside this range. The "Results view" of the model therefore shows 7 hidden nodes with a sigmoid activation function and 1 output node with a linear activation function, and gives weights for all 8 nodes.

I am therefore assuming that the activation function for hidden node i is:

    1 / ( 1 + exp( - (
        (Input_variable_#1_value * Input_variable_#1_node_i_weight) +
        ...
        (Input_variable_#10_value * Input_variable_#10_node_i_weight) +
        Node_i_bias_value
    ) ) )

And the final output node value is:

    (Hidden_node_#1_value * Hidden_node_#1_output_weight) +
    ...
    (Hidden_node_#7_value * Hidden_node_#7_output_weight) +
    Output_threshold_value
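For illustration, the two formulae above translate directly into code. The sketch below is a generic forward pass for the described architecture (sigmoid hidden layer, linear output node), not RapidMiner's actual implementation; the weights would be copied by hand from the Results view. It is written in Python rather than Visual Basic, but the arithmetic carries over one-to-one.

```python
import math

def predict(inputs, hidden_weights, hidden_biases, output_weights, output_bias):
    """Forward pass: sigmoid hidden layer followed by a linear output node.

    inputs          -- list of 10 pre-normalised input values
    hidden_weights  -- one list of 10 input weights per hidden node
    hidden_biases   -- one threshold/bias value per hidden node
    output_weights  -- one weight per hidden node for the output node
    output_bias     -- the output node's threshold value
    """
    hidden_values = []
    for node_weights, node_bias in zip(hidden_weights, hidden_biases):
        # Weighted sum of all inputs plus this node's threshold/bias value
        s = sum(x * w for x, w in zip(inputs, node_weights)) + node_bias
        # Sigmoid activation: 1 / (1 + exp(-s))
        hidden_values.append(1.0 / (1.0 + math.exp(-s)))
    # Linear output node: weighted sum of hidden values plus output threshold
    return sum(h * w for h, w in zip(hidden_values, output_weights)) + output_bias
```

As a sanity check with placeholder values: a single input and a single hidden node with all-zero weights gives a hidden value of sigmoid(0) = 0.5, so `predict([0.0], [[0.0]], [0.0], [2.0], 1.0)` returns 2.0.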

Is this correct? If so, could anyone please suggest why my model outputs using these formulae do not match the predictions that RapidMiner produces when applying this model to the original test dataset? Are my formulae wrong, or does RapidMiner undertake further post-processing of the model output that I am missing?

Once again, I am very sorry to ask this question but I would be really grateful for help, as I am seriously stuck on this.


## Answers

Marius · Unicorn · Posts: 1,869

I don't know the implementation details of the Neural Net, so I can't give you a definite answer. You are, however, free to download the source code of RapidMiner from our website and have a look at the Neural Net implementation. The class is called NeuralNetLearner. To find the class name for any operator, first look it up in OperatorsCoreDocumentation.xml under the same name as displayed in RapidMiner, note the key, and then find the class name for that key in OperatorsCore.xml.

Best,

Marius