Auto Model feedback: A debate about model training

lionelderkrikor Moderator, RapidMiner Certified Analyst, Member Posts: 1,087   Unicorn
edited June 2019 in Help
Dear all,

I would like to humbly open a friendly debate about how models are trained in RapidMiner's Auto Model.
Indeed, from what I understand of standard data science methodology, after evaluating and selecting the "best" model, that model should be (re)trained on the whole initial dataset before going into production.
This principle is also applied by the Split Validation operator: the model delivered by RapidMiner is trained on the whole input dataset (independently of the split ratio).
BUT this is not the case in Auto Model: the model(s) made available by RapidMiner's Auto Model are trained on only 60% of the input dataset.
My first question is: is it always relevant to (re)train the selected model on the whole input dataset?
If yes, and if it is feasible, it may be a good idea to implement this principle in Auto Model. (I am thinking of users (non-data-scientists / beginners) who do not want to ask questions and who just want a model to put into production...)
But maybe, because of computation-time constraints (or another technical reason), it is not feasible to (re)train all the models on the whole initial dataset?
In that case, it may be a good idea to advise the user in Auto Model (in the documentation, and/or in the overview of the results, and/or in the "model" menus of the different models) to (re)train the selected model manually, by generating the process of the selected model, before it enters production...
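
To make this concrete for readers coming from other tools, here is a minimal sketch of the principle in Python/scikit-learn (an illustration of the idea only, not Auto Model's actual implementation): estimate performance on a hold-out split, then refit the chosen configuration on all rows before deployment.

# Sketch: hold-out evaluation, then a full-data refit for production.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# 60/40 split, mirroring Auto Model's default ratio.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.6, random_state=42, stratify=y)

# Step 1: estimate performance on the 40% hold-out set.
validated = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print("Hold-out accuracy:", accuracy_score(y_test, validated.predict(X_test)))

# Step 2: refit the same configuration on the whole dataset for production.
production_model = RandomForestClassifier(random_state=42).fit(X, y)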

To conclude, I hope I have helped advance the debate, and I look forward to your opinions on these topics.

Have a nice day,




  • varunm1 Moderator, Member Posts: 1,204   Unicorn
    Hello @lionelderkrikor

    Thanks for starting this; I do have a question:

    1. Auto Model performs heavy (model-based) feature engineering before training the model. Right now this happens on only 60% of the dataset, since the remaining 40% is reserved for testing. My question is: if we train the model again on the complete data, wouldn't the selected features be impacted by that 40% of the data, changing the model's dynamics? (See the sketch below.)
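
    A hypothetical illustration of this concern in Python/scikit-learn (not Auto Model's code): the same model-based feature selector, fit once on 60% of the rows and once on all of them, can keep different feature sets.

    # Hypothetical sketch: a model-based feature selector fit on 60% of the
    # rows can keep different features than the same selector refit on 100%.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectFromModel
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_60, _, y_60, _ = train_test_split(X, y, train_size=0.6, random_state=0)

    # Same selector configuration, fit on 60% vs. 100% of the data.
    kept_60 = SelectFromModel(RandomForestClassifier(random_state=0)).fit(X_60, y_60).get_support()
    kept_full = SelectFromModel(RandomForestClassifier(random_state=0)).fit(X, y).get_support()

    # Any index printed here is a feature that only one of the two runs keeps.
    print("Features selected differently:", np.where(kept_60 != kept_full)[0])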


    Be Safe. Follow precautions and Maintain Social Distancing

  • sgenzer Administrator, Moderator, Employee, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,954  Community Manager
    Solution Accepted
    Thank you @lionelderkrikor for this. All really good points. I am, of course, passing this to the Auto Model-er himself, @IngoRM, as he is the one best placed to participate in this discussion from our side. :smile:


  • IngoRM Administrator, Moderator, Employee, RapidMiner Certified Analyst, RapidMiner Certified Expert, Community Manager, RMResearcher, Member, University Professor Posts: 1,749  RM Founder
    edited June 2019
    Thanks, your thoughts are much appreciated!  I think we can probably agree that for putting a model into production, one trained on the full data would be best.  However, I don't think I did a good job when I posted my earlier question, since...

    If you used cross validation, this difference would go away.

    ...is actually not the case (more below).  The problem of potential user confusion would be the same.  In fact, my belief that many users (and please keep in mind that many, if not most, users have much less experience than you and I) will be confused comes exactly from the fact that many people ask things like "which model is produced by cross-validation?"

    So let me try to explain better what problem I want to solve.  And please let's not drag ourselves into a "pro/contra" cross-validation discussion - that is an independent topic.  In that spirit, I will try to create a better explanation by actually using a cross-validation example for the issue ;)

    Let's say we take a sample of 100 rows from Titanic Training (in the Sample folder).  We then perform a cross-validation and deliver the model created on all data.  Here is the model:

    [screenshot: the decision tree model, with one particular branch highlighted]

    I have highlighted one particular branch in the model.  If I now check the data set with all predictions, I get the following (sorted by gender and age):

    [screenshot: the prediction table, with the corresponding rows highlighted]

    If you compare the highlighted data points with the highlighted branch in the model, the predictions are "wrong".  Of course we understand why that is the case, so that is not my point / not the problem I want to solve.

    I am just looking for a good way to help less experienced users understand this difference and where it comes from - hence my question.  In the split validation as done by Auto Model, that confusion does not exist, since the created predictions and the model have a 1:1 relationship.  But with cross-validation, or in general with any delivered model trained on a different / full data set, it will happen.
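
    The same mismatch is easy to reproduce outside of RapidMiner.  A small sketch in Python/scikit-learn (an analogy to the Titanic example above, not the process itself):

    # Cross-validated predictions come from the per-fold models, so they
    # need not match the predictions of one model trained on ALL rows.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_predict
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X, y = X[:100], y[:100]  # a small sample, like the 100 Titanic rows above

    tree = DecisionTreeClassifier(max_depth=3, random_state=0)

    # Predictions as delivered by 10-fold cross-validation (fold models).
    cv_pred = cross_val_predict(tree, X, y, cv=10)

    # Predictions of the delivered model, trained on all 100 rows.
    full_pred = tree.fit(X, y).predict(X)

    # Rows where the two disagree are exactly the "wrong"-looking predictions.
    print("Rows where fold and full-model predictions differ:",
          int(np.sum(cv_pred != full_pred)))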

    One direction I am exploring right now is to actually show TWO models in the AM results: the one used to calculate the error rates and predictions (like the one we have in AM today), and a second one called "Production Model" or something like this.  Then at least the difference is explicit and can be documented.  I would hate to have a UI paradigm with implicit assumptions that most users would not understand - that is a surefire recipe for a bad user experience.

    I hope I did a better job this time explaining the potential confusion I see with the "full model" idea; please let me know if any of you have additional thoughts...


    P.S.: Here is the process I have used to create the model and predictions above:
    <?xml version="1.0" encoding="UTF-8"?><process version="9.4.000-SNAPSHOT">
  • Telcontar120 Moderator, RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 1,530   Unicorn
    @IngoRM Thanks for the additional explanation.  I was focused on the difference between the delivered models in split vs. cross validation output, not on the reported predictions from the test set.  And indeed you are perfectly correct (as of course you already know, but for the benefit of anyone else reading this post, we are in agreement): the predictions delivered from the "test" output of cross-validation would not be consistent with those generated by the full model delivered from that same operator.
    Having re-focused the issue in the way you have described, I concur that the best solution is probably to present two models and their associated outputs in AM: one for validation purposes and one for production purposes.

    Brian T.
    Lindon Ventures 
    Data Science Consulting from Certified RapidMiner Experts
  • sgenzer Administrator, Moderator, Employee, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,954  Community Manager
    As the moderator, FYI: I'm just changing this from a 'question' to a 'discussion' for organizational purposes :smile:
  • IngoRM Administrator, Moderator, Employee, RapidMiner Certified Analyst, RapidMiner Certified Expert, Community Manager, RMResearcher, Member, University Professor Posts: 1,749  RM Founder
    edited June 2019
    Thanks for the feedback.  I have also become more and more convinced that two models in the results are the way to go.  So I am happy to confirm that we WILL make the following changes in future releases of Auto Model (most likely starting with 9.4):
    1. We will also build the ready-to-deploy production model on the full data set.
    2. We will show two results: Model (the validated one) and Production Model (rebuilt on the complete data).  This also allows inspection and comparison to identify potential robustness problems (see the sketch after this list).
    3. The processes will also be completely redesigned (see the sneak peek below), which should help with the understandability and management of those processes.
    4. Additional scoring processes, using the production model and all necessary preprocessing steps, will be automatically created for you when you save the results.
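
    A rough sketch of what the two-result pattern from item 2 could look like in code; the names AutoModelResult and auto_model are illustrative only, not RapidMiner's API:

    # Hypothetical two-model result: one validated model for error estimates,
    # one production model rebuilt on the complete data.
    from dataclasses import dataclass
    from sklearn.base import clone
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    @dataclass
    class AutoModelResult:
        model: object             # validated model, trained on the 60% split
        production_model: object  # same configuration, rebuilt on 100% (item 1)
        holdout_accuracy: float   # estimated on the 40% hold-out set

    def auto_model(estimator, X, y, train_size=0.6):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=train_size, random_state=0)
        validated = clone(estimator).fit(X_tr, y_tr)
        acc = accuracy_score(y_te, validated.predict(X_te))
        production = clone(estimator).fit(X, y)
        return AutoModelResult(validated, production, acc)

    Comparing model and production_model side by side is what enables the robustness check mentioned in item 2.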
    We will, however, NOT change the validation method. I know that this is disappointing for some, but please see my comments on the rationale here: https://community.rapidminer.com/discussion/55736/cross-validation-or-split-validation

    The production model is independent of the validation, and the results of the hybrid approach (3) in the discussion linked above are absolutely comparable with those of cross-validations in almost all situations.  But the additional runtimes - and, potentially even more important, the lack of maintainability for processes of that complexity - make such a change infeasible.

    I have thought about this for a long time and experimented a lot, and I am 100% convinced that this is the best way forward.  I hope you folks understand.

