AFAIK, in neural networks the basic learning algorithm is backpropagation of the squared error, adjusting each weight according to dE/dw. This so-called "naive backpropagation" is quite inefficient, since in many cases it takes forever to converge to the optimal configuration.
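For concreteness, here is a minimal sketch of that naive update rule: plain gradient descent on the squared error for a single sigmoid neuron, applying w_i -= lr * dE/dw_i after each sample. All names and the toy AND dataset are my own illustration, not RapidMiner's implementation.

```python
# Naive backpropagation on a single sigmoid neuron.
# Error per sample: E = 0.5 * (y - t)^2; update: w_i -= lr * dE/dw_i.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, lr=0.5, epochs=5000):
    w = [0.0, 0.0, 0.0]  # two input weights plus a bias weight
    for _ in range(epochs):
        for x, t in samples:
            xs = list(x) + [1.0]  # append constant bias input
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, xs)))
            # Chain rule: dE/dw_i = (y - t) * y * (1 - y) * x_i
            delta = (y - t) * y * (1.0 - y)
            for i in range(len(w)):
                w[i] -= lr * delta * xs[i]
    return w

# Learn logical AND, which a single neuron can represent.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train(data)
```

Even on this trivially separable problem, the plain dE/dw update needs thousands of passes, which is exactly the slow convergence complained about above.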
Now my question is how exactly (backpropagation) learning is implemented in RapidMiner. I guess I'll have to take a look at the Joone documentation...
Answers
What do you mean by "learning enhancements"? Sorry, but I'm not too familiar with the jargon around neural networks.
Greetings,
Sebastian
AFAIK, in neural networks the basic learning algorithm is backpropagation of the squared error, adjusting each weight according to dE/dw. This so-called "naive backpropagation" is quite inefficient, since in many cases it takes forever to converge to the optimal configuration.
Now my question is how exactly (backpropagation) learning is implemented in RapidMiner. I guess I'll have to take a look at the Joone documentation...
Kind Regards
Theo
Either that, or I could ask Ingo, who implemented the operator. But he is currently at CeBIT, so that's not an option for now.
Greetings,
Sebastian