At some point the behaviour of these operators changed in more recent versions of RapidMiner, and I am losing repeatability. Of all the learners I tested, these are the only ones that did not give me the same results between RM 7.1 and RM 8+, at least on my data. The results are the same when repeated within each version of RapidMiner independently, so the process itself is not introducing any randomness. Unfortunately, I consistently get better results from the 7.1 versions than from the newer ones. I assume I could create an extension containing the older operators. Is there a better way?
Can't you just change the compatibility level of the operator?
I think the compatibility level only gives me the option of 7.4. I did look at the source code at some point and I think the change occurred between 7.1 and 7.4. I will have to check this again. In the meantime, I will try 7.4 and see if that makes a difference. It had slipped my mind to check that, so thanks for reminding me. I hope that does the job!
Thanks for the explanation, Jan. I am going to have to try some different data. At the moment in 8.2, RDA with alpha = 1, which should give me LDA, produces different results than the LDA operator. And if I run a grid over alpha from 0 to 1 in 0.1 increments with RDA, I get the same results for every value of alpha, which suggests that RDA is still not taking alpha into consideration.
I will recheck my old processes with the idea that QDA and LDA were switched, and get back to you if there is still some confusion.
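For context on what alpha is supposed to do: in regularized discriminant analysis, alpha blends each class's own covariance estimate with the pooled within-class covariance, interpolating between QDA and LDA. Here is a minimal NumPy sketch of that blending, assuming the convention implied above, i.e. alpha = 1 recovers the pooled (LDA-style) covariance and alpha = 0 recovers the per-class (QDA-style) covariances. This is an illustrative reconstruction, not the actual RapidMiner operator code:

```python
import numpy as np

def rda_covariances(X, y, alpha):
    """Blend per-class covariances with the pooled within-class covariance.

    Convention assumed here: alpha = 1 -> pooled covariance only (LDA),
    alpha = 0 -> per-class covariances only (QDA).
    """
    classes = np.unique(y)
    per_class = {c: np.cov(X[y == c].T, bias=True) for c in classes}
    n = len(y)
    # pooled within-class covariance: sample-weighted average of class covariances
    pooled = sum((np.sum(y == c) / n) * per_class[c] for c in classes)
    return {c: (1 - alpha) * per_class[c] + alpha * pooled for c in classes}
```

Under this convention, sweeping alpha across a grid should produce different covariance estimates (and hence different predictions) at every step unless the class covariances are already identical, which is why identical results for all alpha values point to the parameter being ignored.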
Hi Jan, your explanation helped a lot. RDA's alpha is working properly on a different data set. In my old processes, the old RDA was ignoring alpha, which explains the differences I was seeing.