Regarding the Deep Learning Operator (H2O) and Extension (DL4J) in RapidMiner

varunm1 Moderator, Member Posts: 1,207 Unicorn
edited June 2019 in Help
Hi,

I am working with the Deep Learning operator based on H2O and the Deep Learning extension based on DL4J. I would like to know whether these operators can adapt the number of epochs automatically, i.e., determine how many epochs to train based on validation-set performance. For example, as long as accuracy on the validation set keeps improving, the model would train for more epochs, and it would stop once accuracy starts to drop. In deep learning, too few epochs leave the model undertrained, while too many lead to overfitting. Does the current operator or extension offer any option for selecting the optimal number of epochs?

One more question: in the Deep Learning operator (H2O), if I have two layers, why do I need to specify a dropout rate for both of them? Is it not possible to specify a dropout rate for a single layer only? Hinton's dropout paper suggests adding dropout to every fully connected layer, but is it mandatory?

Thanks,
Varun
https://www.varunmandalapu.com/

Answers

  • User36964 Member, University Professor Posts: 15 University Professor
    edited April 2019
    1- As far as I know, there is no dedicated operator that optimizes training parameters such as the number of epochs. However, you can wrap the learner in an Optimize Parameters operator together with a Log operator to search over epoch values and record the validation performance for each candidate.
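
    For completeness, H2O itself ships with early stopping that matches the adaptive-epochs behaviour described above. Below is a minimal sketch using H2O's Python API directly (not RapidMiner); whether the RapidMiner operator exposes these parameters depends on the operator version, and "train.csv" and the "label" column are placeholders for your own data:

    ```python
    import h2o
    from h2o.estimators.deeplearning import H2ODeepLearningEstimator

    h2o.init()

    frame = h2o.import_file("train.csv")        # placeholder dataset
    frame["label"] = frame["label"].asfactor()  # treat the target as categorical
    train, valid = frame.split_frame(ratios=[0.8], seed=42)

    model = H2ODeepLearningEstimator(
        hidden=[64, 64],                      # two hidden layers
        epochs=1000,                          # upper bound; early stopping usually ends training sooner
        stopping_rounds=5,                    # stop after 5 scoring rounds without improvement
        stopping_metric="misclassification",  # tracked on the validation frame
        stopping_tolerance=1e-3,              # minimum relative improvement that still counts
    )
    model.train(y="label", training_frame=train, validation_frame=valid)
    ```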

    2- If you have 2 hidden layers, you can set the dropout ratios as 0.35, 0.50. That means the first hidden layer gets a 0.35 dropout rate and the second layer 0.5. But you do have to specify a ratio for each layer; see the sketch below.
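
    To illustrate, here is a hedged sketch of that two-layer setting in H2O's Python API (the RapidMiner operator takes the same values as a comma-separated list). Hidden dropout in H2O requires one of the *_with_dropout activations, and hidden_dropout_ratios needs exactly one entry per hidden layer; setting a layer's ratio to 0.0 should effectively disable dropout for that layer:

    ```python
    from h2o.estimators.deeplearning import H2ODeepLearningEstimator

    model = H2ODeepLearningEstimator(
        activation="rectifier_with_dropout",  # hidden dropout needs a *_with_dropout activation
        hidden=[128, 64],                     # two hidden layers
        hidden_dropout_ratios=[0.35, 0.5],    # one ratio per hidden layer, as in the answer
        epochs=50,
    )
    ```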