Hi, I don't think that LibSVM normalizes the values internally. As far as I know, there is no reason for it to do so. But if you want to be sure: run LibSVM once without prior normalization and once with it. By comparing the two models you should be able to spot any differences. If you try it, I would appreciate it if you could report the results here, since I'm a little curious myself.
Greetings,
Sebastian
My program isn't finished yet, so I can't report any results soon, but I found this section in the LibSVM tutorial:
Scaling them before applying SVM is very important. (Sarle 1997, Part 2 of the Neural Networks FAQ) explains why we scale data when using neural networks, and most of the considerations also apply to SVM. The main advantage is to avoid attributes in greater numeric ranges dominating those in smaller numeric ranges. Another advantage is to avoid numerical difficulties during the calculation. Because kernel values usually depend on the inner products of feature vectors, e.g. the linear kernel and the polynomial kernel, large attribute values might cause numerical problems. We recommend linearly scaling each attribute to the range [-1, +1] or [0, 1]. Of course we have to use the same method to scale the testing data before testing. For example, suppose that we scaled the first attribute of the training data from [-10, +10] to [-1, +1]. If the first attribute of the testing data lies in the range [-11, +8], we must scale the testing data to [-1.1, +0.8].
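The linear scaling described in that passage can be sketched in plain Python (a hypothetical illustration, not code from LibSVM; the function names are made up). The key point is that the minimum and maximum come from the training data only, so test values outside the training range can land outside [-1, +1], exactly as in the guide's example:

```python
def fit_minmax(column):
    """Return (lo, hi) of a training attribute."""
    return min(column), max(column)

def scale(value, lo, hi):
    """Linearly map [lo, hi] to [-1, +1]."""
    return 2.0 * (value - lo) / (hi - lo) - 1.0

train = [-10.0, 0.0, 10.0]
lo, hi = fit_minmax(train)                 # lo = -10, hi = +10, from TRAINING data only
print([scale(v, lo, hi) for v in train])   # [-1.0, 0.0, 1.0]

# Test values are scaled with the training statistics and may
# fall outside [-1, +1], as in the guide's [-11, +8] example:
print(scale(-11.0, lo, hi))                # -1.1
print(scale(8.0, lo, hi))                  # 0.8
```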
Is there something like a "NormalizationModel" to get the same normalized values for training and test examples?
This should be possible by applying the [tt]Normalization[/tt] operator with the parameter [tt]return_preprocessing_model[/tt] enabled on the training data. This generates a preprocessing model from normalizing the training data, which you can subsequently apply to the test data as well. The test data is then normalized with exactly the same transformation that was applied to the training data. To do so, apply this model to the test data using the [tt]ModelApplier[/tt].
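Outside RapidMiner, the fit-once/apply-everywhere idea behind such a preprocessing model can be sketched in plain Python (a hypothetical illustration; the class name is made up, and z-transformation is assumed as the normalization method):

```python
from statistics import mean, pstdev

class ZNormModel:
    """Stores normalization statistics computed ONCE from training data,
    so the identical transformation can be re-applied to test data."""

    def __init__(self, train_column):
        self.mu = mean(train_column)       # fitted on training data only
        self.sigma = pstdev(train_column)

    def apply(self, column):
        # Always uses the stored training statistics, never the
        # statistics of the data being transformed.
        return [(v - self.mu) / self.sigma for v in column]

train = [2.0, 4.0, 6.0]
model = ZNormModel(train)
print(model.apply(train))    # training data becomes zero-mean
print(model.apply([5.0]))    # test value, transformed with training mu/sigma
```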
You mentioned the Normalization operator. What is that? I'm a beginner. Is it available in Matlab? If so, how do I access it?
Tobias Malbrecht wrote:
Hi,
This should be possible by applying the [tt]Normalization[/tt] operator with the parameter [tt]return_preprocessing_model[/tt] enabled on the training data. This generates a preprocessing model from normalizing the training data, which you can subsequently apply to the test data as well. The test data is then normalized with exactly the same transformation that was applied to the training data. To do so, apply this model to the test data using the [tt]ModelApplier[/tt].
So, I will use normalization before applying the LibSVM learner.
Regards,
Tobias