Best way to spot examples in testing set that receive a wrong classification?
With a Decision Tree alone, I get about 64% correct predictions on the testing set; with Bayesian Boosting, about 79%. In the Results view, I can see a green-colored prediction column for the target attribute, but it covers all 486 examples.
My questions are:
1. Is there a reason that the predictions shown are for all examples, rather than for the testing examples only?
2. What's the best way to spot and isolate the examples that are incorrectly predicted?
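To make point 2 concrete: what I'm after is the equivalent of the following pandas filter, applied to the testing set only (the data and column names here are made up for illustration, not from my actual process):

```python
import pandas as pd

# Toy stand-in for a scored example set: true label vs. model prediction
df = pd.DataFrame({
    "label":      ["yes", "no",  "yes", "no"],
    "prediction": ["yes", "yes", "no",  "no"],
})

# Keep only the rows where the prediction disagrees with the true label
wrong = df[df["label"] != df["prediction"]]
print(wrong)
```

In RapidMiner terms, I assume this corresponds to filtering the scored example set on a condition comparing the label attribute to the prediction attribute, but I'm not sure of the cleanest way to set that up.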