Result of Performance vector??
Hello, I am pretty new to data mining and RapidMiner, and I am looking for some help.
Can somebody explain the result of the Performance vector to me?
Best Answer

varunm1 Posts: 1,006 Unicorn
Hello @rresetar
Based on the image you posted, the overall accuracy of your algorithm is 74.1%.
The table shown is a confusion matrix; it helps you understand the correct and incorrect predictions made by your algorithm.
For example, the first column corresponds to the original label "neutral" in your data set. There are a total of 638 + 66 + 16 = 720 samples labeled neutral in your original data. Reading down that column shows how the algorithm classified those 720 neutral samples: 638 as neutral (correct), 66 as positive (wrong), and 16 as negative (wrong). The class recall for neutral is therefore 638/(638+66+16) ≈ 88.6%, i.e. (number of samples of a class predicted correctly) / (total samples in the data that belong to that class). The other classes work the same way: class recall for positive is 75/(75+111) ≈ 40.3% and for negative is 28/(63+3+28) ≈ 29.8%.
At the other end there is a column for class precision, which is calculated as (number of samples of a class that are predicted correctly) / (total number of samples predicted as that class). For example, for the neutral class the precision is 638/(638+111+63) ≈ 78.6%: of all the samples predicted as neutral, 638 really are neutral, while the other 111 and 63 actually belong to the positive and negative classes. The same applies to the other classes.
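To make these calculations concrete, here is a short Python sketch that recomputes the metrics from the confusion matrix described above. Note this is my reconstruction from the numbers in the post (the original image is not shown); in particular, the predicted-negative/true-positive cell is assumed to be 0, which makes the totals add up to 1000 samples and the accuracy match the reported 74.1%.

```python
# Confusion matrix reconstructed from the numbers in the answer.
# Rows = predicted class, columns = true class.
classes = ["neutral", "positive", "negative"]
cm = [
    [638, 111, 63],  # predicted neutral
    [ 66,  75,  3],  # predicted positive
    [ 16,   0, 28],  # predicted negative (true-positive cell assumed 0)
]

total = sum(sum(row) for row in cm)
correct = sum(cm[i][i] for i in range(3))   # diagonal = correct predictions
accuracy = correct / total

# Recall of class j: diagonal cell / sum of column j (all true samples of j).
recall = {c: cm[j][j] / sum(cm[i][j] for i in range(3))
          for j, c in enumerate(classes)}
# Precision of class i: diagonal cell / sum of row i (all predictions of i).
precision = {c: cm[i][i] / sum(cm[i]) for i, c in enumerate(classes)}

print(f"accuracy = {accuracy:.1%}")  # 74.1%
for c in classes:
    print(f"{c}: recall = {recall[c]:.1%}, precision = {precision[c]:.1%}")
```

Running this reproduces the numbers in the explanation: neutral recall 638/720 ≈ 88.6% and neutral precision 638/812 ≈ 78.6%.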
So you can explain the accuracy, the individual class precisions, and the class recalls.
To understand this in more depth, you can look at this Wikipedia article:
https://en.wikipedia.org/wiki/Precision_and_recall
Hope this helps. Please let me know if you need more information.
Answers
Suppose you have a dataset with 100 samples (rows) belonging to two classes, "apples" and "oranges": 50 samples are apples and the other 50 are oranges. You trained and tested a machine learning algorithm on it.
The algorithm predicted 25 samples as apples; of these 25 apple predictions, 12 really are apples (based on your labels) and 13 actually belong to the orange class. Similarly, there are 75 orange predictions; of these, 38 are apples that were wrongly predicted as oranges, and 37 really are oranges.
Now the recall of the apple class is (the number of samples predicted as apples that really belong to the apple class) divided by (the total number of apple samples in the dataset), which is 12/50.
The class precision for apples is (the number of samples predicted as apples that really belong to the apple class) divided by (the total number of samples predicted as apples), which is 12/25.
The same applies to the orange class.
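The apples/oranges example can be sketched in a few lines of Python using the standard true/false positive/negative terminology (the variable names are illustrative, not anything from RapidMiner):

```python
# Counts from the apples/oranges example, from the apple class's point of view.
tp_apple = 12   # predicted apple, actually apple (true positives)
fp_apple = 13   # predicted apple, actually orange (false positives)
fn_apple = 38   # predicted orange, actually apple (false negatives)
tn_apple = 37   # predicted orange, actually orange (true negatives)

# Recall: correct apple predictions / all real apples in the data.
recall_apple = tp_apple / (tp_apple + fn_apple)       # 12/50 = 0.24
# Precision: correct apple predictions / all apple predictions.
precision_apple = tp_apple / (tp_apple + fp_apple)    # 12/25 = 0.48

# The same formulas from the orange side (roles of the cells swap):
recall_orange = tn_apple / (tn_apple + fp_apple)      # 37/50 = 0.74
precision_orange = tn_apple / (tn_apple + fn_apple)   # 37/75 ≈ 0.493
```

Note how low recall for apples (0.24) can coexist with a moderate precision (0.48): the algorithm misses most apples, but when it does say "apple" it is right about half the time.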
I am also providing you with a video from the RapidMiner Academy that helps in understanding these concepts:
https://academy.rapidminer.com/learn/video/introductiontoperformancemeasurement
Varun
https://www.varunmandalapu.com/