Confidence or Prediction Intervals
When reporting a performance metric (e.g., AUC) for a model that was trained on a single data set and tested on a hold-out set, what is the proper way to assess its variance: a confidence interval for the AUC, or a prediction interval?
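For context, a common way to quantify the uncertainty of a hold-out AUC is to bootstrap the test set: resample it with replacement, recompute the AUC each time, and take percentiles of the resulting distribution as a confidence interval. A minimal sketch in Python (the data here is synthetic and purely illustrative; scikit-learn and NumPy are assumed):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Toy hold-out set: binary labels and model scores (illustrative only)
y_true = rng.integers(0, 2, size=200)
y_score = np.clip(y_true * 0.3 + rng.normal(0.5, 0.25, size=200), 0, 1)

# Bootstrap the hold-out set to estimate a CI for the AUC
n_boot = 2000
aucs = []
for _ in range(n_boot):
    idx = rng.integers(0, len(y_true), size=len(y_true))
    if len(np.unique(y_true[idx])) < 2:  # AUC needs both classes present
        continue
    aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```

Note this captures only the sampling variability of the test set for a fixed fitted model; it does not account for variability from the training process itself.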