I am currently experimenting a bit with gradient boosted trees for some classification tasks. It became one of my favorite models a while ago due to its speed and good prediction results, and I thought I really understood what was going on. However, in my recent experiments I regularly get an output on a class prediction that looks like the image below:
There are two classes (YES and NO), and the prediction accuracy is overall quite good. Nevertheless, what irritates me is that, looking at the confidence levels for the two classes, I would expect a different prediction for some examples. In the example above the confidence for a YES prediction is 39.3%, whereas for NO it is 60.7%. Why does the model then predict (correctly) an outcome of YES?
Any insights on or references to how the confidence levels are calculated in GBTs would be highly appreciated.
Which version of RapidMiner are you running?
H2O models (GBT, GLM & LogReg) have the special feature of adapting their own threshold. This is similar to the Optimize Threshold operator, but they use the F1 measure to optimize their own threshold.
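To illustrate the effect, here is a minimal conceptual sketch in Python (not RapidMiner or H2O code; the function best_f1_threshold and the toy data are purely hypothetical). It shows how picking the decision threshold that maximizes F1 can lead to a YES prediction even when the YES confidence is below 50%:

```python
# Hypothetical sketch: a model that picks its own threshold by maximizing F1
# on scored examples, so that "YES" can be predicted even when
# confidence(YES) < 0.5.
import numpy as np
from sklearn.metrics import f1_score

def best_f1_threshold(y_true, yes_scores):
    """Scan candidate thresholds and return the one that maximizes F1 for class YES."""
    best_t, best_f1 = 0.5, -1.0
    for t in np.unique(yes_scores):
        preds = (yes_scores >= t).astype(int)
        f1 = f1_score(y_true, preds)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

# Toy, made-up data: positives tend to get scores below 0.5,
# so the F1-optimal threshold also ends up below 0.5.
y_true     = np.array([1, 1, 1, 0, 0, 1, 0, 1, 0, 1])
yes_scores = np.array([0.39, 0.45, 0.72, 0.30, 0.55, 0.41, 0.20, 0.65, 0.35, 0.48])

t = best_f1_threshold(y_true, yes_scores)
print(f"F1-optimal threshold: {t:.2f}")  # prints 0.39 for this toy data

# With such a threshold, an example with confidence(YES) = 0.393 is predicted
# YES even though confidence(NO) = 0.607 is higher, which matches the
# seemingly non-intuitive output described above.
```

The confidences themselves are unchanged; only the cut-off used to turn them into a class prediction moves away from 0.5, which is why the predicted label can disagree with the larger of the two confidence values.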
We encountered this behavior, which is slightly irritating for RM users, with LogReg one or two versions ago. I thought we had fixed this for all H2O models.
Ok, I have now updated to 7.6.001 and can confirm that I still get GBT predictions that are non-intuitive with respect to the confidence values. This is not critical in my specific current case, but it may throw off somebody who is using the confidence levels as thresholds. Interestingly, the values differ significantly from the ones I saw in RM 7.5.003 with the identical input and model.