Examining cause for mislabeling
I have completed a project and have an output for a decision tree predicting 4 classes. One class has a precision of 99%, then 69%, 66%, and finally only 54% for my last class. What I want to do now is go through the validated example set output by the decision tree, examine which examples were incorrectly labeled, and see which attributes are influencing the mislabeling. Is there any process in RapidMiner that would be useful for this, or would it be more of a manual job?
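For reference, the filtering step I have in mind could be sketched outside RapidMiner with pandas, assuming an exported example set with hypothetical `label` and `prediction` columns (the actual column names in my export may differ):

```python
import pandas as pd

# Hypothetical validated example set: true label, model prediction,
# and one numeric attribute to inspect
df = pd.DataFrame({
    "label":      ["A", "B", "C", "D", "A", "B"],
    "prediction": ["A", "B", "D", "D", "C", "B"],
    "attr1":      [1.0, 2.0, 3.5, 4.0, 1.2, 2.1],
})

# Keep only the misclassified rows
mis = df[df["label"] != df["prediction"]]
print(mis)

# Compare the attribute's mean between correctly and incorrectly
# classified rows, to spot attributes that differ for mislabeled examples
correct = df["label"] == df["prediction"]
print(df.groupby(correct)["attr1"].mean())
```

I'm hoping there is an operator-based equivalent of this kind of filter-and-compare step inside RapidMiner itself.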
Thanks in advance,