
"Iterative learning (with Neural Net)"

AnimalFriend Member Posts: 1 Contributor I
edited June 2019 in Help
Hello everybody,

I want to build a neural net classifier that can "learn from its errors", i.e. when I find that a classification it made was wrong, I want to train the net again with the input vector it misclassified.

I tried to use the UpdateModel operator from RM 5.0 for that, but it gives me a "Model not update-able" error. Is my approach wrong?

Ideally, I would like to create something like an "Active Learner" that chooses a verification set by itself and has a person verify predictions it made. Is something like this achievable with RM?
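
To make the idea concrete, here is roughly the loop I have in mind, sketched in Python with scikit-learn purely for illustration (I know a RapidMiner process is built from operators rather than code; the uncertainty criterion, the toy data and the learner are just placeholders):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

# toy data standing in for my real examples
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, y_train = X[:20], y[:20]        # small labelled seed set
X_pool,  y_pool  = X[20:], y[20:]        # "unlabelled" pool; y_pool plays the human verifier

model = GaussianNB().fit(X_train, y_train)

for _ in range(10):                       # a few active-learning rounds
    proba = model.predict_proba(X_pool)
    idx = np.argmin(proba.max(axis=1))    # the prediction the model is least sure about
    # a person would verify this prediction; here y_pool supplies the true label
    X_train = np.vstack([X_train, X_pool[idx]])
    y_train = np.append(y_train, y_pool[idx])
    X_pool  = np.delete(X_pool, idx, axis=0)
    y_pool  = np.delete(y_pool, idx)
    model = GaussianNB().fit(X_train, y_train)
```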

I also found a statement from Mr. Mierswa (http://rapid-i.com/rapidforum/index.php/topic,688.msg2702.html#msg2702) that "neural networks are finally a bit outdated by now".
Which classifier would you recommend?


Kind regards
  Animal

Answers

    land RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 2,531 Unicorn
    Hi,
    currently only two of our learners produce updateable models: NaiveBayes and NearestNeighbors. If you want to retrain a wrongly classified example with other learners, you could increase its weight and use a learner from the much larger class of learners that take example weights into account.
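
    Outside of RapidMiner, the same two ideas look roughly like this; this is only a minimal sketch in Python/scikit-learn standing in for the corresponding operators, and the learners, toy data and weight factor are arbitrary choices for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# toy data standing in for your example set
X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# 1) An updateable model: Naive Bayes can absorb new examples incrementally.
nb = GaussianNB()
nb.partial_fit(X[:200], y[:200], classes=np.unique(y))   # initial training
nb.partial_fit(X[200:210], y[200:210])                   # later update with new examples

# 2) A weight-aware learner: raise the weight of misclassified examples and retrain.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
weights = np.ones(len(y))
weights[tree.predict(X) != y] *= 5.0                     # emphasise the errors
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y, sample_weight=weights)
```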
    Which learner is most suitable depends mainly on your data and a little bit on your goal: must the model be understandable by humans, or is it only used to get the best possible classification? Each learner has its own pros and cons; you simply have to try them on your data using a validation like X-Validation.
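
    As a rough illustration of that kind of comparison (again plain scikit-learn in place of an X-Validation process; the learners, toy data and fold count are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

learners = {
    "NaiveBayes":   GaussianNB(),
    "kNN":          KNeighborsClassifier(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "SVM":          SVC(),
}
for name, clf in learners.items():
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
    print(f"{name:12s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```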
    Whether neural nets are outdated right now is an ongoing discussion. From my perspective, you can produce fairly good results with neural nets if you tune the net structure, activation function and other parameters enough. But I have personally experienced that at least one other algorithm delivered comparably good results without the need for so much tuning and with much faster training. Of course, all of them lack this word "neural", which sounds like magic...
    When searching for the best-fitting algorithm, I would start with the fast and easy ones like NaiveBayes and NearestNeighbors, then switch to RDA and DecisionTrees, and finally head for SVMs and NeuralNets. And all the time you have to optimize their parameters at least superficially.
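
    And a small sketch of such a parameter optimization for one candidate, here an SVM; the grid values are arbitrary and scikit-learn's GridSearchCV merely stands in for a parameter optimization loop in RapidMiner:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# try a small grid of SVM parameters, each combination scored by cross-validation
grid = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.001]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```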

    Greetings,
      Sebastian