Confidence interval calculation on performance
I see that for many classification performance metrics, RapidMiner provides an estimation which I interpret as a confidence interval around the reported value. However, I have failed to find the exact calculation that is performed, in particular for the kappa value.
To be clear, my question seems similar to one asked by @taghaddo last December (see the post here: https://community.rapidminer.com/discussion/54694/error-range-of-classifier), which, as far as I can tell, was never answered. Would anyone be able to clarify this point? Perhaps by posting a snippet of the actual source code for that calculation, as is sometimes done (e.g. for the kappa calculation here: https://community.rapidminer.com/discussion/54909/regarding-kappa-value-in-cross-validation)?
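To frame what I am asking more concretely: my working guess (purely an assumption on my part, not something confirmed by the documentation or source) is that the ± value shown after each metric is the standard deviation of that metric across the cross-validation folds. In sketch form, for kappa, that would amount to something like the following (the fold data here is made up for illustration):

```python
from statistics import mean, stdev

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa from two label sequences of equal length."""
    labels = sorted(set(y_true) | set(y_pred))
    n = len(y_true)
    # observed agreement: fraction of matching predictions
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # expected agreement by chance, from the marginal label frequencies
    pe = sum(
        (sum(t == lab for t in y_true) / n) * (sum(p == lab for p in y_pred) / n)
        for lab in labels
    )
    return (po - pe) / (1 - pe)

# Hypothetical (true labels, predicted labels) pairs from a 3-fold
# cross-validation -- placeholder data, not from any real model.
folds = [
    (["a", "a", "b", "b"], ["a", "b", "b", "b"]),
    (["a", "b", "a", "b"], ["a", "b", "b", "b"]),
    (["b", "b", "a", "a"], ["b", "b", "a", "a"]),
]

kappas = [cohen_kappa(t, p) for t, p in folds]
# My guess: RapidMiner's "value +/- estimation" would then be:
print(f"kappa: {mean(kappas):.3f} +/- {stdev(kappas):.3f}")
```

If that is indeed what happens (mean and sample standard deviation over folds), it would also be useful to know whether the population or sample standard deviation is used, and whether the micro- or macro-averaged confusion matrix is involved instead.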