# Prediction Column out of Binary Machine Learning Classification Problem

For example, in the case of logistic regression, we can get coefficients that can be multiplied by the predictors to produce the final output as an attribute in a CSV file or an image. Please let me know if it is scientifically correct to extract the weights/rules from trained SVM, ANN, KNN, and NB models, multiply each predictor by its weight/rule, and sum over all predictors. I mean (predictor 1 * its weight + predictor 2 * its weight + predictor 3 * its weight + ...).
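For logistic regression specifically, the weighted sum described above is exactly how the model's raw score is computed; the score is then passed through a sigmoid to get a probability. A minimal sketch, assuming scikit-learn (the post does not name a library):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: two predictors, binary target
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = np.array([0, 0, 1, 1])

model = LogisticRegression().fit(X, y)

# Manual weighted sum: predictor1*weight1 + predictor2*weight2 + intercept
manual_score = X @ model.coef_[0] + model.intercept_[0]
manual_prob = 1.0 / (1.0 + np.exp(-manual_score))  # sigmoid

# Matches the library's own probability for class 1
assert np.allclose(manual_prob, model.predict_proba(X)[:, 1])
```

Note that even here the intercept must be added and the sigmoid applied; the bare sum of predictor*weight is the log-odds, not the prediction itself.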


## Answers

Your approach of coefficient * value only works for linear models. The strength of most machine learning models is that they are non-linear; that's the cool part. Breaking down non-linear, multivariate methods into single factors is 'tricky' to 'impossible'.

Nevertheless, have a look at the wei (weights) output ports of the operators and at operators like Tree to Rules (or so?). They may help.
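Outside of RapidMiner, a rough analogue of a Tree-to-Rules export is a sketch like the following, assuming scikit-learn's `export_text` helper. It illustrates Martin's point: a tree yields readable split rules, not per-feature weights you can multiply.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: the label simply follows the first feature
X = [[0, 0], [1, 1], [0, 1], [1, 0]]
y = [0, 1, 0, 1]

tree = DecisionTreeClassifier(random_state=0).fit(X, y)

# Human-readable if/else split rules instead of coefficients
rules = export_text(tree, feature_names=["f0", "f1"])
print(rules)
```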

Cheers,

Martin

Dortmund, Germany

SVM is one of the linear models, but it can handle non-linear functions using the kernel trick. Non-linear algorithms have their own way of working: for example, a decision tree works based on a split criterion, and a neural network works based on hidden-unit activations.

So basically, every class of algorithms has its own way of working.

For your initial question: yes, it is scientifically correct to get feature weights from an algorithm, as the weights are calculated with proven methods. But it is not always correct to multiply the weight by the feature; that is only correct for the class of linear models (GLMs) that are based on linear equations.
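The SVM case makes this concrete. A sketch assuming scikit-learn's `SVC`: with a linear kernel the model exposes one weight per predictor, so the weight*predictor sum is meaningful; with an RBF kernel the decision function is a sum over support vectors, and no per-feature weights exist at all.

```python
from sklearn.svm import SVC

X = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [2.0, 2.0]]
y = [0, 1, 0, 1]

# Linear kernel: coef_ holds one weight per predictor
linear_svm = SVC(kernel="linear").fit(X, y)
print(linear_svm.coef_)

# RBF kernel: accessing coef_ raises, because the decision
# function is not a weighted sum of the original features
rbf_svm = SVC(kernel="rbf").fit(X, y)
try:
    rbf_svm.coef_
except AttributeError:
    print("RBF SVM has no coef_: weight*predictor does not apply")
```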

Varun

https://www.varunmandalapu.com/