Scaling them before applying SVM is very important. (Sarle 1997, Part 2 of the Neural Networks FAQ) explains why we scale data while using neural networks, and most of the considerations also apply to SVM. The main advantage is to avoid attributes in greater numeric ranges dominating those in smaller numeric ranges. Another advantage is to avoid numerical difficulties during the calculation. Because kernel values usually depend on the inner products of feature vectors, e.g. the linear kernel and the polynomial kernel, large attribute values might cause numerical problems. We recommend linearly scaling each attribute to the range [-1, +1] or [0, 1]. Of course we have to use the same method to scale testing data before testing. For example, suppose that we scaled the first attribute of training data from [-10, +10] to [-1, +1]. If the first attribute of testing data is lying in the range [-11, +8], we must scale the testing data to [-1.1, +0.8].
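The arithmetic in that example can be sketched in a few lines of Python. This is purely illustrative: `make_scaler` is a made-up helper, not part of any SVM library, and it just replays the linear mapping fitted on the training range onto new values.

```python
# Sketch: fit a linear mapping [train_min, train_max] -> [lo, hi] on the
# TRAINING data, then apply the SAME mapping to test values. Test values
# outside the training range may land outside [lo, hi], as in the example.

def make_scaler(train_min, train_max, lo=-1.0, hi=1.0):
    """Return a function mapping train_min -> lo and train_max -> hi linearly."""
    span = train_max - train_min
    return lambda x: lo + (x - train_min) * (hi - lo) / span

scale = make_scaler(-10.0, +10.0)  # fitted on the training range [-10, +10]
print(scale(-11.0))  # test minimum, approximately -1.1 (outside [-1, +1])
print(scale(+8.0))   # test maximum, approximately +0.8
```

The point of the sketch is that the scaler's parameters come only from the training data; the test data is pushed through the identical transformation, never re-fitted.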
Username wrote: Is there something like a "NormalizationModel" to get the same normalized values for training and test examples?
Tobias Malbrecht wrote: Hi, this should be possible by applying the operator [tt]Normalization[/tt] to the training data with the parameter [tt]return_preprocessing_model[/tt] enabled. This means a preprocessing model is generated from normalizing the training data, which you can subsequently apply to the test data as well. The normalization of the test data is then done via the same transformation as was applied to the training data. To do this, you have to apply the model using the [tt]ModelApplier[/tt]. Regards, Tobias
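Conceptually, such a preprocessing model is just a fit/apply pair: the statistics are learned once on the training data and then replayed unchanged on any later data. A minimal Python sketch of the idea (the class and method names are illustrative, not RapidMiner's actual API):

```python
# Hypothetical sketch of a "normalization model": fit() learns per-attribute
# min/max from the training rows; transform() replays the identical linear
# mapping on any later rows (e.g. test data), without re-fitting.

class NormalizationModel:
    def fit(self, rows):
        cols = list(zip(*rows))            # attribute-wise columns
        self.mins = [min(c) for c in cols]
        self.maxs = [max(c) for c in cols]
        return self

    def transform(self, rows, lo=-1.0, hi=1.0):
        return [
            [lo + (x - mn) * (hi - lo) / (mx - mn)
             for x, mn, mx in zip(row, self.mins, self.maxs)]
            for row in rows
        ]

model = NormalizationModel().fit([[-10.0], [0.0], [10.0]])  # training data
print(model.transform([[-11.0], [8.0]]))  # approximately [[-1.1], [0.8]]
```

This mirrors the operator chain described above: fitting corresponds to [tt]Normalization[/tt] with [tt]return_preprocessing_model[/tt], and transforming the test data corresponds to the [tt]ModelApplier[/tt] step.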