
# Confidence values

Hi friends,

I'm using RapidMiner for text classification with SVM (LibSVM), k-NN, and Naive Bayes. When I get the results on my test data, I'm not sure how each algorithm calculates the confidence values for each instance on each class. Can anyone help me? I need this information for my article.

Thanks in advance.



## Answers

**RM Founder:** This is different for each of those algorithms:

Naive Bayes: the confidence is directly the calculated probability delivered by the algorithm (actually, this is one of the rare cases where the confidence IS a real probability)
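To make that concrete, here is a minimal sketch (illustrative only, not RapidMiner's implementation) of how a Naive Bayes confidence is just the normalized posterior probability; the function name, the two hypothetical classes, and the feature likelihood values are all invented for the example:

```python
import math

def naive_bayes_confidences(log_priors, log_likelihoods):
    """Confidence per class = the normalized posterior probability.

    Works in log space to avoid underflow when many feature
    likelihoods are multiplied together.
    log_priors: {class: log P(class)}
    log_likelihoods: {class: [log P(feature_i | class), ...]}
    """
    log_joint = {c: log_priors[c] + sum(log_likelihoods[c]) for c in log_priors}
    # A softmax over the log-joint scores yields the normalized posteriors.
    m = max(log_joint.values())
    unnorm = {c: math.exp(v - m) for c, v in log_joint.items()}
    total = sum(unnorm.values())
    return {c: v / total for c, v in unnorm.items()}

# Hypothetical two-feature example: joint scores 0.12 ("yes") vs 0.09 ("no"),
# so conf(yes) = 0.12 / 0.21 and conf(no) = 0.09 / 0.21.
priors = {"yes": math.log(0.5), "no": math.log(0.5)}
likes = {"yes": [math.log(0.8), math.log(0.3)],
         "no":  [math.log(0.2), math.log(0.9)]}
print(naive_bayes_confidences(priors, likes))
```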

k-NN: the confidence is the number of the k neighbors with the predicted class divided by k (the individual votes are weighted by distance in the case of weighted predictions)
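The unweighted case is simple enough to sketch directly; this helper (a made-up name, not a RapidMiner API) just counts class labels among the k nearest neighbors:

```python
from collections import Counter

def knn_confidences(neighbor_labels):
    """Unweighted k-NN confidence: the fraction of the k nearest
    neighbors carrying each class label (illustrative helper only)."""
    k = len(neighbor_labels)
    counts = Counter(neighbor_labels)
    return {label: n / k for label, n in counts.items()}

# k = 5 neighbors, three of which vote "spam": conf(spam) = 3/5
print(knn_confidences(["spam", "spam", "ham", "spam", "ham"]))
# -> {'spam': 0.6, 'ham': 0.4}
```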

SVM (I am not so sure about LibSVM, which uses a different calculation in the multiclass case): for binomial classes, a good estimate of the probability of the positive class, which is also what RapidMiner uses, is 1 / (1 + exp(-function_value)), where function_value is the raw SVM prediction
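That logistic squashing of the raw decision value can be written in a few lines (the function name is hypothetical; only the formula comes from the answer above):

```python
import math

def svm_confidence(function_value):
    """Map a raw SVM decision value to a pseudo-probability for the
    positive class via the logistic function 1 / (1 + exp(-f))."""
    return 1.0 / (1.0 + math.exp(-function_value))

print(svm_confidence(0.0))  # 0.5: a point exactly on the decision boundary
print(svm_confidence(2.0))  # ~0.88: well on the positive side
```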

Hope that helps,

Ingo

**Contributor II:** Thanks

**RM Founder:** The same applies to text classification: the confidence of a class value states how certain the model is that a document belongs to that class.

Cheers,

Ingo

**Contributor II:** I need to clarify some aspects of my project:

I'm using three different methods to classify approximately 3,000 documents into 11 categories: k-NN, Naive Bayes, and SVM (LibSVM, linear kernel, C-SVC). For each document, each method outputs a confidence value (0-1) per category, and the predicted category is the one with the highest confidence.

What I'm doing is summing the document's confidences for each category across the three models and choosing the label with the highest total. I guess this is called bagging, right? The fact is, my accuracy improved by about 2%. I'm still not sure how these confidence values are generated and normalized by RapidMiner in each model, which I need to support my conclusions. Do I have to normalize the values of each method before combining them, or can I consider them already normalized, so that my result makes sense?

Many thanks in advance!
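As a side note, combining the outputs of *different* learners by summing per-class confidences is usually called soft voting (or confidence averaging); bagging normally refers to training the same learner on bootstrap samples. The scheme described in the post can be sketched like this, assuming each model's confidences already sum to 1 (the model names and numbers are invented for illustration):

```python
def soft_vote(confidences_per_model):
    """Sum per-class confidences across models and pick the argmax.

    confidences_per_model: list of {class: confidence} dicts, one per
    model. Assumes each dict's values already sum to 1, so no extra
    normalization is needed before summing.
    """
    totals = {}
    for conf in confidences_per_model:
        for label, value in conf.items():
            totals[label] = totals.get(label, 0.0) + value
    return max(totals, key=totals.get), totals

models = [
    {"sports": 0.7, "politics": 0.3},   # e.g. k-NN
    {"sports": 0.4, "politics": 0.6},   # e.g. Naive Bayes
    {"sports": 0.6, "politics": 0.4},   # e.g. SVM
]
print(soft_vote(models))  # picks "sports" (total 1.7 vs 1.3)
```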

**Contributor I:** Ingo, is there any documentation available to help understand each algorithm's definition of confidence? Thanks!

Jing

**RM Data Scientist:** Dear Jing,

First of all: welcome to the community. There is no single document describing how each of our 250+ learners calculates confidences; most of it can be found either in textbooks or in our code. Is there any operator in particular we can help you with?

~Martin

Dortmund, Germany

**Contributor I:** Here, just look at the sample.

Copy from Help:

And this is how the confidence is calculated:

conf(yes) = 0.094 / (0.094 + 0.082) = 0.534
conf(no) = 0.082 / (0.094 + 0.082) = 0.466

Without round-off error you get conf(yes) + conf(no) = 1.