
Standard deviation VS Naive Bayes prediction

djafarsidikdjafarsidik Member Posts: 7 Learner I
edited September 2020 in Help
Hi,
Could someone help me and explain whether there is any correlation between the standard deviation of an attribute and Naive Bayes predictions? I have several datasets with the same type of data, but one of them has an attribute with a very low standard deviation, and its accuracy with Naive Bayes is very poor. When I tried a decision tree instead, the result was much better than Naive Bayes.

Also, what can make the result of an ensemble method such as bagging or stacking worse than the base classifier?

Sorry if this is a stupid question; I am still learning.
Thank you very much for the response.

Best Answer

    Telcontar120Telcontar120 Moderator, RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 1,635 Unicorn
    Solution Accepted
    These are difficult questions to answer in the abstract. Overall, what matters most to Naive Bayes isn't the standard deviation of an attribute as a whole, but rather differences between the means and standard deviations of numerical attributes between classes, based on an assumed normal distribution.  But for any given dataset, you would have to look at the specific distributions to determine whether this assumption is reasonable or not, and what impact it might have on the resulting predictions. 
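To make the point above concrete, here is a minimal sketch (using NumPy rather than RapidMiner, and assuming equal class priors and a toy one-dimensional feature) of how Gaussian Naive Bayes scores each class from that class's own mean and standard deviation:

```python
import numpy as np

# Toy 1-D feature: two classes whose means differ but whose
# distributions overlap. Gaussian Naive Bayes models each class
# with a normal density built from that class's own mean and
# standard deviation.
rng = np.random.default_rng(0)
x_a = rng.normal(loc=0.0, scale=1.0, size=500)   # class A samples
x_b = rng.normal(loc=3.0, scale=1.0, size=500)   # class B samples

def gaussian_pdf(x, mean, std):
    """Normal probability density at x for the given mean and std."""
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

# Per-class means and standard deviations, estimated from the data.
mu_a, sd_a = x_a.mean(), x_a.std()
mu_b, sd_b = x_b.mean(), x_b.std()

# A new point is assigned the class with the higher density
# (priors are equal here, so they cancel).
x_new = 2.5
p_a = gaussian_pdf(x_new, mu_a, sd_a)
p_b = gaussian_pdf(x_new, mu_b, sd_b)
print("predict B" if p_b > p_a else "predict A")
```

Note that if the two class means were identical, no standard deviation (large or small) would make the densities separate, which is why the between-class differences matter more than the overall standard deviation of the attribute.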
    Likewise, it is impossible to say in general why a given algorithm like Decision Tree would outperform Naive Bayes, since that will be based on the specific qualities of each dataset.  Similarly, it is not possible to evaluate the hypothetical performance of ensemble methods vs other methods without reference to the specific hyperparameters chosen for a particular dataset.
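As an illustration of that kind of comparison, here is a hedged sketch (using scikit-learn rather than RapidMiner, on a synthetic dataset) of evaluating a base decision tree against a bagged ensemble of the same tree; depending on the data and hyperparameters, either one can come out ahead:

```python
# Compare a single decision tree with a bagged ensemble of the
# same tree via 5-fold cross-validation on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=42)

base = DecisionTreeClassifier(random_state=42)
bagged = BaggingClassifier(DecisionTreeClassifier(random_state=42),
                           n_estimators=50, random_state=42)

acc_base = cross_val_score(base, X, y, cv=5).mean()
acc_bagged = cross_val_score(bagged, X, y, cv=5).mean()
print(f"base tree: {acc_base:.3f}  bagged: {acc_bagged:.3f}")
```

Running this on your own data (and varying the number of estimators, tree depth, and so on) is the only reliable way to see whether the ensemble actually helps in your case.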
    Sorry these answers are not clearer, but this is a consequence of the "no free lunch" theorem, which basically says there is no universally best algorithm for solving all supervised machine learning problems.
    If you are interested in learning more I encourage you to work through the tutorials on the various algorithms in question and also the lessons on RapidMiner Academy, as well as doing some more general research on machine learning algorithms via the web.
    Brian T.
    Lindon Ventures 
    Data Science Consulting from Certified RapidMiner Experts