# hyperplane equation

I am trying to solve a simple toy classification problem (a linear one) using linear regression, a perceptron, and an SVM (libsvm), then read the parameters and work out the hyperplane equation.

I got confused by the information in the "text view" where the models are written: the bias term (b) of the model does not always seem to be written using the same notation.

I am trying to interpret it as w1*attr1 + w2*attr2 + ... + b.

In a 2D problem the hyperplane equation should then come from w1*attr1 + w2*attr2 + b = 0.

The problem I was solving was

((attr1, attr2), label) -----> ((-1, 1), 1); ((1, -1), 1); ((1, 1), 1); ((-1, -1), -1).

with linear regression I got: 0.25*attr1 + 0.25*attr2 + 0.25

with perceptron I got: intercept = -0.25, w(attr1) = 0.25, w(attr2) = 0.25

with SVM I got: bias (offset) = -1.0, w(attr1) = 0.5, w(attr2) = 0.5

The question is whether the "bias" or "intercept" agrees with the output of linear regression. In particular, in this case the solution found does not seem to solve the problem. What am I doing wrong?
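To see whether the reported parameters actually separate the four points, the decision rule can be checked directly. Here is a quick sketch in Python, assuming each tool reports its intercept under the sign(w·x + b) convention (which may not be the case, and is exactly the notational ambiguity in question); the parameter values are the ones quoted above:

```python
# Check each model's reported parameters against the four training points,
# assuming the decision rule sign(w1*attr1 + w2*attr2 + b).
# Parameter values are copied from the question; the convention is an assumption.

def predict(w, b, x):
    """Linear decision rule; ties (score == 0) are broken toward -1."""
    score = w[0] * x[0] + w[1] * x[1] + b
    return 1 if score > 0 else -1

points = [((-1, 1), 1), ((1, -1), 1), ((1, 1), 1), ((-1, -1), -1)]

models = {
    "linear regression": ((0.25, 0.25), 0.25),
    "perceptron":        ((0.25, 0.25), -0.25),
    "svm":               ((0.50, 0.50), -1.0),
}

for name, (w, b) in models.items():
    misses = [x for x, label in points if predict(w, b, x) != label]
    print(f"{name}: misclassified {misses or 'nothing'}")
```

Under this convention only the linear-regression parameters classify all four points correctly; the perceptron and SVM parameters fail on the points where attr1 + attr2 = 0. If a tool instead reports the intercept with the opposite sign (i.e. the rule is w·x − intercept), the perceptron solution also separates the points, so the tools may simply be using different sign conventions for the bias term.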

AMT


## Answers

537Guru: I think you need to use more data points.

There are many more functions that can fit these 4 data points perfectly, other than the function f(x, y) = positive(x) or positive(y).

Also, it is important to use labels like a, b instead of 1, -1; otherwise it's not a classification problem but a regression problem.

You seem to want to find a function that is something like

y(x) = x - 1,

and then f(x, y) = (y(x) > 0)?