# Neural Networks: Nominal Class

Hi,


I tried to train a neural network on a dataset with a nominal class label (the iris dataset), but I received the following error:

> This learning scheme does not have sufficient capabilities for the given data set: polynominal label not supported

How can I train a neural network on a dataset with a nominal class label?

Greetings,

--

Motaz K. Saad

## Answers

**Ingo (RM Founder):** Just add the NeuralNet operator inside the operator "Binary2MultiClassLearner", which can be used to transform any binominal learning scheme into one that can be applied to multiple classes. Here is an example: Cheers,

Ingo
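The scheme Ingo describes is a one-vs-rest decomposition: train one binary model per class value, and predict the class whose model scores highest. A minimal sketch in Python (a plain logistic regression stands in for the neural net, and the toy clusters stand in for the three iris species; none of this is RapidMiner code, all names are illustrative):

```python
import numpy as np

def train_logistic(X, y, lr=0.5, epochs=200):
    """Train a simple binary logistic regression (stand-in for any binominal learner)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid of the linear score
        grad = p - y
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def one_vs_rest(X, labels, classes):
    """Binary2MultiClassLearner idea: one binary model per class value."""
    models = {}
    for c in classes:
        y = (labels == c).astype(float)  # this class vs. all the others
        models[c] = train_logistic(X, y)
    return models

def predict(models, X):
    """Pick the class whose binary model gives the highest linear score."""
    classes = list(models)
    scores = np.stack([X @ w + b for w, b in (models[c] for c in classes)])
    return np.array([classes[i] for i in scores.argmax(axis=0)])

# Toy 3-class data: three well-separated 2D clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(20, 2)) for m in ([0, 0], [3, 0], [0, 3])])
labels = np.repeat(np.array(["setosa", "versicolor", "virginica"]), 20)

models = one_vs_rest(X, labels, ["setosa", "versicolor", "virginica"])
preds = predict(models, X)
print((preds == labels).mean())  # training accuracy
```

This is why, as noted further down the thread, the operator ends up building three networks for the iris data: one per class value.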

**Motaz (Guru):** Thanks for replying.

Hmmm :), it builds 3 NNs, one for each class value!

Do you recommend a different solution rather than the "Binary2MultiClassLearner" operator, like replacing each class label value with a numeric value?

For example, for the iris dataset I did the following:

- replaced the "Iris Setosa" class with 10
- replaced the "Iris Versicolour" class with 20
- replaced the "Iris Virginica" class with 30

After training the NN for 1000 training cycles with a learning rate of 0.3, momentum of 0.2, and error epsilon of 0.00, I got the following results:

root_mean_squared_error: 1.802 +/- 0.000

squared_error: 3.246 +/- 12.872

When I instead replaced the nominal class values with smaller numbers, I got different results for the same number of training cycles (1000):

- replaced the "Iris Setosa" class with 0
- replaced the "Iris Versicolour" class with 1
- replaced the "Iris Virginica" class with 2

This time I got the following results:

root_mean_squared_error: 0.180 +/- 0.000

squared_error: 0.032 +/- 0.129

What do you think?!

Another question, please: what does the +/- 0.129 in the squared_error result stand for?

Warm Greetings,

--

Motaz K. Saad

**Tobias (RM Product Management):** Let me try to point you toward a conclusion yourself by mirroring back to you what you did:

First of all, you had a classification problem. Then you transformed it into a regression problem by arbitrarily mapping the three classes to real values. Afterwards you compared the (regression!) errors you obtained with the two different mappings. You observed that the errors were different. In fact, the errors are scaled exactly as your label values imply.

But what happens if you map the three classes to the values 2, 1, 0? ... or 0.1, 38297159 and 7? Or -4, 328 trillion and pi? Well, the errors will surely be different again, yet these example mappings are no less reasonable than the mappings you tried. The point is: turning a classification problem into a regression problem and examining the regression errors gives you almost no information at all. If you use a regression learner on a classification problem, you at least have to map the predictions back to the classes and examine the classification errors. Even then, the results will still depend heavily on the mapping you have chosen.
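The scaling effect described above is easy to verify numerically: multiplying every label by 10 multiplies every residual, and hence the RMSE, by 10, which matches the ratio between the two reported results (1.802 ≈ 10 × 0.180). A small sketch with made-up prediction values (illustrative only, not the actual NN outputs):

```python
import numpy as np

# Hypothetical true classes and regression predictions under the 0/1/2 mapping
true_small = np.array([0, 1, 2, 1, 0, 2], dtype=float)
pred_small = np.array([0.1, 1.2, 1.8, 0.9, -0.1, 2.2])
rmse_small = np.sqrt(np.mean((pred_small - true_small) ** 2))

# The same situation under the 10/20/30 mapping: labels (and, ideally,
# the learned predictions) are scaled by 10 and shifted by 10
true_big = true_small * 10 + 10
pred_big = pred_small * 10 + 10
rmse_big = np.sqrt(np.mean((pred_big - true_big) ** 2))

print(rmse_big / rmse_small)  # ratio ≈ 10: the "error" tracks the arbitrary mapping
```

The shift (+10) cancels out entirely, and the factor (×10) passes straight through to the error, so the two RMSE values say nothing different about the classifier itself.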

Hence, to cut a long story short: the method Ingo proposed is the adequate way to use neural nets for multi-class classification problems.

Hope I could clarify things a bit.

Regards,

Tobias

**Motaz (Guru):** Thanks for your reply.

What does the +/- 0.129 in the squared_error result stand for?

Thanks in advance,

--

Motaz

**Tobias (RM Product Management):** Sorry, I forgot to answer that question!

That value is the standard deviation of the error across the folds (assuming you performed a cross-validation).
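Concretely, with a 10-fold cross-validation a report like `squared_error: 3.246 +/- 12.872` comes from averaging the per-fold errors and taking their standard deviation. A minimal sketch (the fold errors below are made up for illustration):

```python
import numpy as np

# Hypothetical squared errors from the 10 folds of a cross-validation
fold_errors = np.array([0.5, 1.2, 0.8, 2.0, 0.3, 1.5, 0.9, 1.1, 0.7, 1.4])

mean_error = fold_errors.mean()
std_error = fold_errors.std()  # this is the "+/-" part of the report

print(f"squared_error: {mean_error:.3f} +/- {std_error:.3f}")
# → squared_error: 1.040 +/- 0.482
```

A large "+/-" relative to the mean (as in 3.246 +/- 12.872) indicates that the error varies strongly from fold to fold.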

Regards,

Tobias