Nominal Attributes in NeuralNet

steffensteffen Member Posts: 347 Maven
edited August 2019 in Help

I recently tried the NeuralNet implementation provided by RapidMiner and wondered why it is possible to pass nominal values to the learner. Searching the code I found:

for (Attribute attribute : example.getAttributes()) {
    result[counter][a++] = example.getValue(attribute);
}
which means the algorithm uses the internal nominal indices instead of the nominal values themselves. I thought about it and was not able to find a clear argument why a neuron cannot handle/process values with a nominal background, but I do not feel comfortable with this. Is there any theoretical justification?
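To make my concern concrete, here is a minimal, self-contained sketch (not RapidMiner code; `encode` and the class name are my own) of what happens when nominal values are mapped to indices in the order they are first encountered. The same value receives a different numeric code depending on the read order, so the net sees different inputs:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NominalIndexDemo {
    // Hypothetical stand-in for a nominal mapping: each distinct string
    // is assigned the next free index the first time it is encountered.
    static double[] encode(String[] rows, Map<String, Integer> mapping) {
        double[] encoded = new double[rows.length];
        for (int i = 0; i < rows.length; i++) {
            mapping.putIfAbsent(rows[i], mapping.size());
            encoded[i] = mapping.get(rows[i]); // the index, not the value
        }
        return encoded;
    }

    public static void main(String[] args) {
        // The same nominal values, read in two different row orders.
        String[] orderA = {"x", "o", "b", "o"};
        String[] orderB = {"o", "x", "b", "x"};
        Map<String, Integer> mapA = new LinkedHashMap<>();
        Map<String, Integer> mapB = new LinkedHashMap<>();
        encode(orderA, mapA);
        encode(orderB, mapB);
        // "x" is 0 under the first mapping but 1 under the second, so the
        // numeric inputs fed to the learner differ between the two runs.
        System.out.println(mapA.get("x") + " vs " + mapB.get("x"));
    }
}
```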

Your implementation will produce different results when the rows of the corresponding data are read in a different order. I performed a small experiment using the tic-tac-toe data from UCI: I tagged the data with an id, permuted it, and saved it in two different ways. Then I checked that the value of the label in the first line is "negative", so that the positive/negative classes are recognized correctly. Then I ran the following process (using CVS, last update on Tuesday):

<?xml version="1.0" encoding="UTF-8"?>
<process version="4.2">

  <operator name="Root" class="Process" expanded="yes">
      <operator name="IteratingOperatorChain" class="IteratingOperatorChain" expanded="yes">
          <parameter key="iterations" value="2"/>
          <operator name="CSVExampleSource" class="CSVExampleSource">
              <parameter key="filename" value="tictactoe%{a}.csv"/>
              <parameter key="id_column" value="1"/>
              <parameter key="label_column" value="-1"/>
          </operator>
          <operator name="Sorting" class="Sorting">
              <parameter key="attribute_name" value="id"/>
              <parameter key="sorting_direction" value="decreasing"/>
          </operator>
          <operator name="NaiveBayes" class="NaiveBayes" activated="no">
              <parameter key="keep_example_set" value="true"/>
          </operator>
          <operator name="NeuralNet" class="NeuralNet">
              <list key="hidden_layer_types"/>
              <parameter key="keep_example_set" value="true"/>
          </operator>
          <operator name="ModelApplier" class="ModelApplier">
              <list key="application_parameters"/>
          </operator>
          <operator name="BinominalClassificationPerformance" class="BinominalClassificationPerformance">
              <parameter key="lift" value="true"/>
          </operator>
          <operator name="PerformanceWriter" class="PerformanceWriter">
              <parameter key="performance_file" value="nnres%{a}"/>
          </operator>
      </operator>
  </operator>
</process>

which produces the same results with NaiveBayes (as expected) and different results with NeuralNet. The results, data, etc. are packed in the attachment.

I know that you are currently revisiting the "kernel" of ExampleSets. Maybe this is an inspiration for a test case ;).

thanks in advance


I reran the process using Nominal2Binomial beforehand. The results produced by the Neural Net were the same (as expected)... nearly: 34 elements were shifted to other cells of the contingency table. Strange, strange... Besides this: I thought about saying something like "the NN operator should be restricted to binominal and non-nominal attributes only", but I guess even binominal values will produce different mappings if they have not been transformed into two new binary attributes. This preprocessing step could be included in the NN operator, but that would send the performance of this generally slow algorithm to the earth's core :(.
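For comparison, here is a minimal sketch (my own code, not the Nominal2Binomial implementation) of the dummy-coding idea: each nominal value gets its own binary attribute, and sorting the value set first makes the encoding independent of the order in which rows are read:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class OneHotDemo {
    // Encode one nominal value as a binary vector over the sorted value
    // set. Because the positions come from a canonical (sorted) order,
    // two differently permuted files yield identical encodings.
    static double[] oneHot(String value, List<String> sortedValues) {
        double[] vec = new double[sortedValues.size()];
        vec[Collections.binarySearch(sortedValues, value)] = 1.0;
        return vec;
    }

    public static void main(String[] args) {
        List<String> values = new ArrayList<>(Arrays.asList("x", "o", "b"));
        Collections.sort(values); // canonical order: b, o, x
        System.out.println(Arrays.toString(oneHot("x", values)));
    }
}
```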

[attachment deleted by admin]