Bug: Neural Network learning rate during parameter optimization

AlphaPi Member Posts: 10 Contributor II
Hello everyone, 

I am trying to optimize the parameters of the Neural Network algorithm. Using the Optimize Parameters (Grid) operator, I select Neural Net.learning_rate, ranging from 0.01 to 1 in 10 steps. When I run the process, I get the following error message:

Process failed abnormally
Ooops. Seems like you have found a bug. Please report it in our community at https://community.rapidminer.com. Reason: Cannot reset network to a smaller learning rate. 
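
For reference, here is roughly what the grid setup looks like in my process XML (a sketch, not the full process; the value uses the [min;max;steps;scale] format that the Optimize Parameters (Grid) operator writes):

    <operator activated="true" class="optimize_parameters_grid" name="Optimize Parameters (Grid)">
      <!-- the grid: Neural Net.learning_rate swept from 0.01 to 1.0 in 10 linear steps -->
      <list key="parameters">
        <parameter key="Neural Net.learning_rate" value="[0.01;1.0;10;linear]"/>
      </list>
      <!-- the validation subprocess containing the Neural Net operator goes here -->
    </operator>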

Is there any solution for this?

Thanks in advance!

Comments

  • varunm1 Moderator, Member Posts: 1,207 Unicorn
    Hello @AlphaPi

    Did you try turning the "decay" parameter on? Generally, this resolves the error.
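
    In the process XML this is the Neural Net operator's decay flag, roughly (a one-line sketch; the flag is off by default):

        <!-- decrease the learning rate during training instead of keeping it fixed -->
        <parameter key="decay" value="true"/>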

    Do let us know if this doesn't work.
    Regards,
    Varun
    https://www.varunmandalapu.com/

    Be Safe. Follow precautions and Maintain Social Distancing

  • AlphaPi Member Posts: 10 Contributor II
    Hello @varunm1,

    Thank you for answering!

    I just tried it. I've got the same error message. Any other suggestions?

    BR
  • varunm1 Moderator, Member Posts: 1,207 Unicorn
    Hmm, I remember this resolving my issue earlier. If it doesn't work for you, I guess @pschlunder or @IngoRM might be able to help.
    Regards,
    Varun
    https://www.varunmandalapu.com/

    Be Safe. Follow precautions and Maintain Social Distancing

  • jacobcybulski Member, University Professor Posts: 391 Unicorn
    I have seen this error several times. It is a bug in the neural network math, I think to do with floating-point operations: somewhere within your network you are generating very small or very large weights. To fix this you need regularization, which the simple Neural Net does not support. The best way out is to swap the Neural Net for a Deep Learning model. Otherwise, try stopping the learning process earlier, e.g. by reducing the training cycles, increasing the error epsilon, or reducing the number of layers.
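
    As an illustration, a more conservative setup along these lines stops training before the weights can blow up (a sketch only; the parameter keys come from the Neural Net operator, and the values are illustrative, not tuned):

        <operator activated="true" class="neural_net" name="Neural Net">
          <!-- fewer passes over the data, so runaway weights have less time to grow -->
          <parameter key="training_cycles" value="100"/>
          <!-- a larger epsilon than the default, so training stops earlier -->
          <parameter key="error_epsilon" value="1.0E-3"/>
          <!-- shrinking or removing entries in the hidden_layers list also helps -->
        </operator>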
  • sgenzer Administrator, Moderator, Employee, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager
    hi @AlphaPi can you pls post your process XML and sample data set so we can replicate the issue?

    Scott