Apply Model Problem?
I've encountered some curious behavior in RapidMiner.
I have a process that trains a model, saves the model, and then tests the model on some test data. I have another process that is identical, except that it skips the training step and simply reads in the saved model before applying it to the test data.
The two processes produce different results, i.e., the performance on identical test data is different depending upon whether I'm using the model right out of the modeling operator or if I'm using the saved model.
The model I'm using is a Vote model with three sub-models: a Neural Net, an SVM, and a W-SMOreg, if that's relevant.
Any idea why the behavior should be different? Is there a loss of accuracy when writing and then reading the model? My differences seem fairly dramatic for that to be the cause, but I suppose it is possible.
Thanks for any help!
Scott Turner
Answers
That's really annoying. Can you show me the process? I would like to fix that behavior.
Greetings,
Sebastian
Any insight into what's causing this would be appreciated.
Thanks, I will send you my email address by PM.
Greetings,
Sebastian
I have exactly the same problem. Besides writing and applying the model, which failed, I also tried to rebuild the process by loading the previously saved attribute weights and applying them before the classifier. The performance differed from the performance of the initial Feature Selection process on the test set, although the data was identical.
Can you help me, or can I send you my files?
Thanks in advance
Daniel