
Help with correct understanding results of classification

Serek91Serek91 Member Posts: 22 Contributor II
edited October 9 in Help
Hi, I have the following table with classification results:



I have 4 algorithms. Classification was run on 16 different training sets:
- all => all 15 predictors were used
- 1-15 => each set contains 14 predictors; in each set a different type of predictor was removed

An example set is in the attachment.

Type of excluded predictor | column name in csv
1 - characters_number
2 - sentences_number
3 - words_number
4 - average_sentence_length
5 - average_sentence_words_number
6 - ratio_unique_words
7 - average_word_length
8 - ratio_word_length_[1-16]
9 - ratio_special_characters
10 - ratio_numbers
11 - ratio_punctuation_characters
12 - most_used_word_[1-4]
13 - ratio_letter_[a-z]
14 - ratio_questions
15 - ratio_exclamations

I have to somehow explain why the results for sets 1-15, for each algorithm, are better or worse than the results in the "ALL" column.

But I don't have any idea why. I know that in most cases, when the difference between the ALL column and one of columns 1-15 is very small (< 1%), it is probably just randomness. But when the difference is larger, there is probably a real cause.

The most important thing: I don't understand why the k-NN results are identical for columns 9-15...

It would also be good to know why Naive Bayes performs best (54%) while k-NN performs poorly on this task (20%).

Can someone help me with that?












Answers

  • Serek91Serek91 Member Posts: 22 Contributor II
    Ok, thanks. I added normalization to the k-NN and now I get better results (~46%).

    Is normalization not needed for the other algorithms (Naive Bayes, Decision Tree)? I don't see any difference with or without it.
  • varunm1varunm1 Moderator, Member Posts: 840   Unicorn
    I'm not saying it's not needed, but for k-NN you will definitely see a difference with normalization. The reason is the distance calculation k-NN uses: it relies on the surrounding data samples for prediction. There is a nice visual example in the Cross Validated post below.

    https://stats.stackexchange.com/questions/287425/why-do-you-need-to-scale-data-in-knn

    From my experience, normalization won't make much difference for a decision tree, since trees calculate an impurity index for each attribute and branch down.
    Regards,
    Varun
    Rapidminer Wisdom 2020 (User Track): Call for proposals 

    https://www.varunmandalapu.com/
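To see concretely why scaling matters for k-NN, here is a minimal pure-Python sketch. The feature names match this thread's predictors, but all the values are hypothetical, not taken from the actual dataset: a large-scale attribute such as characters_number swamps a 0-1 ratio attribute in the raw Euclidean distance, and min-max normalization puts both on an equal footing.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Three hypothetical texts described by (characters_number, ratio_unique_words).
# characters_number is in the thousands while the ratio stays in [0, 1].
docs = [(5000.0, 0.10), (5010.0, 0.90), (6000.0, 0.12)]

# Raw distances: the character count completely dominates, so the vocabulary
# ratio barely influences which neighbours k-NN picks.
raw_01 = euclidean(docs[0], docs[1])   # ratio differs a lot, chars barely
raw_02 = euclidean(docs[0], docs[2])   # chars differ a lot, ratio barely

def min_max_columns(rows):
    """Min-max normalize each column (attribute) to [0, 1]."""
    cols = list(zip(*rows))
    scaled = []
    for col in cols:
        lo, hi = min(col), max(col)
        scaled.append([(v - lo) / (hi - lo) for v in col])
    return [tuple(r) for r in zip(*scaled)]

norm = min_max_columns(docs)
norm_01 = euclidean(norm[0], norm[1])
norm_02 = euclidean(norm[0], norm[2])
# After normalization both attributes contribute on the same scale, so the
# vocabulary difference between docs 0 and 1 finally shows up in the distance.
```

Before scaling, raw_02 is roughly two orders of magnitude larger than raw_01 purely because of the character count; after scaling, the two distances are comparable, which is why adding a Normalize step changed the k-NN accuracy so much.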
  • Serek91Serek91 Member Posts: 22 Contributor II
    Ok, thanks.

    Results for k-NN are now way better. Results for the Decision Tree are a bit better, but the difference is not significant. I will try a bit more to improve it.





  • Serek91Serek91 Member Posts: 22 Contributor II
    Hi, I have a follow-up question:
    Decision Tree: the results in columns ALL and 12 are the same. The predictor in column 12 has only string values (words), not numerical ones. Can a Decision Tree use predictors with text values? It seems that it can't.
  • varunm1varunm1 Moderator, Member Posts: 840   Unicorn
    From my understanding, the text data is treated as categorical (nominal) in this case.

    Regards,
    Varun
  • Serek91Serek91 Member Posts: 22 Contributor II
    edited October 17
    According to the docs:
    This Operator can process ExampleSets containing both nominal and numerical Attributes.

    So the predictor should have some impact on the final result. But the result is still the same, whether it is included or not.


  • varunm1varunm1 Moderator, Member Posts: 840   Unicorn
    edited October 17
    You should compare the two models and check whether that feature/attribute appears in the tree. Maybe the attribute got pruned.
    Regards,
    Varun
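The pruning explanation can be made concrete with a small information-gain calculation. This is a toy sketch in plain Python (made-up labels and attribute values, not the actual RapidMiner process): a nominal attribute whose values tell you nothing about the class has zero gain, so a tree never splits on it, which from the outside looks exactly like the attribute having no effect on the result.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(values, labels):
    """Information gain from splitting on a nominal attribute."""
    n = len(labels)
    groups = {}
    for v, y in zip(values, labels):
        groups.setdefault(v, []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

labels = ["A", "A", "B", "B"]
# An attribute with the same value everywhere separates nothing: zero gain,
# so a decision tree never splits on it (it effectively gets pruned away).
gain_constant = info_gain(["word", "word", "word", "word"], labels)  # 0.0
# An attribute aligned with the class separates it perfectly: gain of 1 bit.
gain_useful = info_gain(["hello", "hello", "bye", "bye"], labels)    # 1.0
```

If the word-valued attribute in column 12 behaves like the constant case here (or is so high-cardinality that every split is penalized away by pruning), the tree that includes it and the tree that excludes it will be identical.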
  • Serek91Serek91 Member Posts: 22 Contributor II
    edited October 17
    I made a prediction using only this one predictor, and I got:





  • varunm1varunm1 Moderator, Member Posts: 840   Unicorn
    Makes sense. The accuracy is zero because the model cannot predict from that attribute alone; it just labels predictions randomly. If you want to predict from text, you should use techniques like tokenization.
    Regards,
    Varun
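In RapidMiner, tokenization is what the Text Processing extension's Process Documents and Tokenize operators do. The underlying idea of turning raw strings into numeric features a learner can use can be sketched in plain Python (toy texts, purely illustrative):

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def bag_of_words(texts):
    """Turn raw strings into count vectors over a shared vocabulary."""
    token_lists = [tokenize(t) for t in texts]
    vocab = sorted({tok for toks in token_lists for tok in toks})
    vectors = []
    for toks in token_lists:
        counts = Counter(toks)
        # One numeric column per vocabulary word: its count in this text.
        vectors.append([counts.get(w, 0) for w in vocab])
    return vocab, vectors

texts = ["the cat sat", "the dog sat down"]
vocab, X = bag_of_words(texts)
# vocab -> ['cat', 'dog', 'down', 'sat', 'the']
# X     -> [[1, 0, 0, 1, 1], [0, 1, 1, 1, 1]]
```

Each text becomes a row of word counts, which any of the classifiers in this thread can consume, unlike a single opaque nominal value per document.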
  • varunm1varunm1 Moderator, Member Posts: 840   Unicorn
    What is 792246? Is it a column name? I think there is some issue in the process structure; I'm not sure unless I see the data and process. Based on the posted picture I am a bit confused. The only reasons I can think of are that everything got pruned because it added no value to the tree, or there is some issue in the process input.
    Regards,
    Varun
  • Serek91Serek91 Member Posts: 22 Contributor II
    edited October 17
    Ehhh... so it will be hard to do now... I don't have time for it...

    Thanks anyway.

    EDIT:
    What is 792246? Is it a column name? I think some issue in the process structure. Not sure unless I see data and process. Based on posted picture I am bit confused. Only reasons I can think is everything go pruned due to no added value in tree or some issue in process input

    I added the wrong image ^^

    It should be this one:


