GBT confidence value vs. prediction
I am currently experimenting a bit with gradient boosted trees for some classification tasks. They became one of my favorite models a while ago, due to their speed and good prediction results, and I thought I really understood what was going on. However, in my recent experiments I regularly get a class prediction output like the one in the image below:
There are two classes (YES and NO) and the prediction accuracy is overall quite good. Nevertheless, what puzzles me is that, looking at the confidence levels for the two classes, I would expect a different prediction for some examples. In the example above, the confidence for YES is 39.3%, whereas for NO it is 60.7%. Why does the model then (correctly) predict YES?
Any insights on or references to how the confidence levels are calculated in GBTs would be highly appreciated.
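For context, here is a minimal sketch of how binary GBT confidences are typically derived in the generic gradient-boosting recipe (this is an illustration of the standard approach, not necessarily RapidMiner's exact implementation, and all names below are made up): the additive tree scores are summed and squashed through a logistic function, and the predicted class then depends on a decision threshold. If that threshold is not 0.5 (e.g., because a threshold operator or cost-sensitive setting shifted it), the predicted class can disagree with the class that has the higher confidence.

```python
import math

def gbt_confidence(tree_scores):
    """Sum the additive per-tree outputs and map the raw score
    to P(YES) with a sigmoid; P(NO) is the complement."""
    raw = sum(tree_scores)
    p_yes = 1.0 / (1.0 + math.exp(-raw))
    return {"YES": p_yes, "NO": 1.0 - p_yes}

def predict(confidences, threshold=0.5):
    """The prediction compares confidence(YES) against a threshold,
    not the two confidences against each other: with a threshold
    below 0.5 the model can predict YES even though
    confidence(NO) > confidence(YES)."""
    return "YES" if confidences["YES"] >= threshold else "NO"

# Illustrative raw scores summing to about -0.435,
# which gives roughly the 39.3% / 60.7% split from the question.
conf = gbt_confidence([-0.2, -0.15, -0.085])
print(conf)                            # {'YES': ~0.393, 'NO': ~0.607}
print(predict(conf, threshold=0.5))    # NO  (argmax of the confidences)
print(predict(conf, threshold=0.35))   # YES (a lowered threshold flips it)
```

So one plausible explanation for the screenshot is that the decision threshold applied after scoring differs from 0.5, which decouples the predicted label from the larger confidence value.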