
"Neural Net and Sigmoid Function"

chaosbringer Member Posts: 21 Contributor II
edited May 2019 in Help
Hi,
I am training a neural net. My goal is to predict a function whose values lie between 0 and 1.
The label attribute of my example set therefore also has values between 0 and 1.

Now, the Neural Net operator in RapidMiner uses a sigmoid activation function and works on values between -1 and 1, so the input data must be normalized.
Fortunately RapidMiner does that for me automatically :-D
But when I apply the resulting model, it predicts values less than 0 for some examples. So how can I prevent the net from returning values less than 0?
Obviously I could set any value less than 0 to 0 (actually, I am not sure how to accomplish this in RapidMiner). But would it not be reasonable to include this modification in the learning algorithm?

Thank you very much

Answers

  • dan_agape Member Posts: 106 Maven
    My goal is to predict a function whose values lie between 0 and 1
    From what you write, it seems that your output/label attribute is bi-valued, so you could regard your problem as classification rather than regression, even though the output is numeric. Neural nets can do both classification and regression, and the corresponding operator decides which one to perform depending on the type of the label: nominal for classification, numeric for regression.

    So one of the simplest solutions would be to use the Numerical to Polynominal operator to convert your output/label attribute only, prior to the Neural Net operator. After the conversion, 0 and 1 will be regarded as (non-quantitative) class labels rather than numbers, and the neural net will have to stick to these as predictions, so you no longer get negative values. A sketch of the same idea outside RapidMiner is shown below.
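
    For illustration only, here is a minimal Python sketch of that classification approach outside RapidMiner (scikit-learn and numpy are assumed libraries, and the data is synthetic):

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        # Toy data: three numeric features, label is 0 or 1 stored as a number.
        rng = np.random.default_rng(0)
        X = rng.random((200, 3))
        y_numeric = (X[:, 0] + X[:, 1] > 1.0).astype(int)

        # The "Numerical to Polynominal" step corresponds to treating the label
        # as categorical; here we simply map the numbers to string classes.
        y_nominal = np.where(y_numeric == 1, "one", "zero")

        clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)
        clf.fit(X, y_nominal)

        # Predictions can only be the class labels "zero" or "one",
        # never a negative number.
        print(clf.predict(X[:5]))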

    Less likely, if by values between 0 and 1 you actually meant the whole interval [0, 1] rather than the set {0, 1}, that is a different matter: in that case you would need to replace predicted negative values by 0 (and values over 1 by 1) in your process, outside the Neural Net operator in any case.
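
    The clamping itself is a one-liner outside RapidMiner; a hedged numpy sketch of that post-processing step (the prediction values are made up):

        import numpy as np

        # Hypothetical regression predictions; some fall outside [0, 1].
        predictions = np.array([-0.07, 0.12, 0.55, 1.03, 0.98])

        # Clamp into the valid range: negatives become 0, values over 1 become 1.
        clipped = np.clip(predictions, 0.0, 1.0)
        print(clipped)  # [0.   0.12 0.55 1.   0.98]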

    As a remark on theory versus practice: in theory, the final value computed by an output node of a neural net is rescaled to the value range of the numerical label, so one should not get out-of-bounds values like the ones you seem to have got in practice.
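
    As a rough illustration of that rescaling idea (not RapidMiner's actual implementation; the ranges here are assumptions), mapping a raw output in [-1, 1] linearly onto the label range [y_min, y_max]:

        def rescale(raw_output, y_min, y_max):
            # Map [-1, 1] linearly onto [y_min, y_max].
            return y_min + (raw_output + 1.0) / 2.0 * (y_max - y_min)

        print(rescale(-1.0, 0.0, 1.0))  # 0.0
        print(rescale(0.5, 0.0, 1.0))   # 0.75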

    Dan
