Standard deviation VS Naive Bayes prediction
djafarsidik
Member Posts: 7 Learner I
Hi,
Could someone help me and explain whether there is any correlation between the standard deviation of an attribute and Naive Bayes predictions? I have several datasets with the same type of data, but one of them has an attribute with a very low standard deviation, and the accuracy is very poor when using Naive Bayes. However, if I use a decision tree instead, the result is much better than Naive Bayes.
Also, what can cause the result of an ensemble method such as bagging or stacking to be worse than the base classifier?
Sorry if this is a stupid question; I am still learning.
Thank you very much for the response.
Best Answer

Telcontar120 Moderator, RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 1,635 Unicorn

These are difficult questions to answer in the abstract. Overall, what matters most to Naive Bayes isn't the standard deviation of an attribute as a whole, but rather the differences between the means and standard deviations of numerical attributes across classes, based on an assumed normal distribution. For any given dataset, you would have to look at the specific distributions to determine whether this assumption is reasonable, and what impact it might have on the resulting predictions.
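For intuition (outside RapidMiner itself), here is a minimal Python sketch of the per-class Gaussian likelihood that Naive Bayes assumes for a numerical attribute. The class labels and statistics below are made up for illustration, not taken from any real dataset:

```python
import math

def gaussian_pdf(x, mean, std):
    """Normal density N(x; mean, std) that Gaussian Naive Bayes
    uses as the class-conditional likelihood of a numerical attribute."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

# Hypothetical per-class mean/std estimated from training data.
class_stats = {
    "yes": {"mean": 5.0, "std": 1.0},
    "no":  {"mean": 5.2, "std": 0.1},  # very low std: density is sharply peaked
}

x = 5.2
for label, s in class_stats.items():
    print(label, gaussian_pdf(x, s["mean"], s["std"]))
```

With a very low standard deviation, the density is sharply peaked around the class mean: values close to that mean receive a huge likelihood, while values only slightly away get a likelihood near zero. Either way, that single attribute can dominate the product of likelihoods, which is one reason a low-standard-deviation attribute can hurt Naive Bayes if the normality assumption does not actually hold.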
Likewise, it is impossible to say in general why a given algorithm like Decision Tree would outperform Naive Bayes, since that will be based on the specific qualities of each dataset. Similarly, it is not possible to evaluate the hypothetical performance of ensemble methods vs other methods without reference to the specific hyperparameters chosen for a particular dataset.
Sorry these answers are not clearer, but this is a consequence of the "no free lunch" theorem, which basically says there is no universally best algorithm for solving all supervised machine learning problems.
If you are interested in learning more, I encourage you to work through the tutorials on the various algorithms in question and the lessons on RapidMiner Academy, as well as doing some more general research on machine learning algorithms via the web.