Why does a log transformation give 3% better accuracy with LibSVM?

Fred12 Member Posts: 344 Unicorn
edited December 2019 in Help

hi,
I tried a log10 transformation on my right-skewed dataset and trained / tested it again with LibSVM. The results staggered me, since it is quite a difficult dataset, yet performance was 2.5-3% better than with the untransformed dataset (from 84-85% up to 87.6%). I also standardized my datasets beforehand...
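
For context, my setup is roughly the following (sketched here in Python/scikit-learn, whose SVC wraps the same LibSVM; the data below is just a synthetic right-skewed stand-in for mine):

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.svm import SVC

# synthetic right-skewed stand-in for the real dataset
rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=1.5, size=(300, 5))
y = (X[:, 0] * X[:, 1] > np.median(X[:, 0] * X[:, 1])).astype(int)

# log10 first (features must be strictly positive), then standardize, then RBF-SVM
log_model = make_pipeline(FunctionTransformer(np.log10), StandardScaler(), SVC(kernel="rbf"))
raw_model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

print("raw:", cross_val_score(raw_model, X, y, cv=5).mean())
print("log:", cross_val_score(log_model, X, y, cv=5).mean())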

how can that be? I mean, an SVM doesn't make any distributional assumptions the way a GLM does, or does it?

It would just correspond to a different kernel function, right? Since I used the RBF kernel, it would effectively become an RBF kernel with ||log(x) - log(x*)||^2 in the exponent, i.e. K(x, x*) = exp(-gamma * ||log(x) - log(x*)||^2), right?
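
A quick numeric check (Python/scikit-learn, toy values) seems to confirm that reading: applying the RBF kernel to log-transformed inputs gives exactly that kernel.

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(1)
X = rng.lognormal(size=(10, 3))   # strictly positive, skewed toy features
gamma = 0.5

# RBF applied to the log10-transformed inputs ...
K_via_transform = rbf_kernel(np.log10(X), gamma=gamma)

# ... equals exp(-gamma * ||log10(x) - log10(x*)||^2) computed directly
diffs = np.log10(X)[:, None, :] - np.log10(X)[None, :, :]
K_direct = np.exp(-gamma * (diffs ** 2).sum(axis=-1))

print(np.allclose(K_via_transform, K_direct))   # True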

 


Best Answer

  • marcin_blachnik Member Posts: 61 Guru
    Solution Accepted

    Well,

    The SVM itself doesn't make any assumptions, but I guess you use the RBF kernel. The RBF kernel has a fixed gamma. When the attributes are skewed, that fixed gamma has a different influence on the low values of the data than on the high values; when you take the log, you make gamma have an equal influence across the entire space.
    Good preprocessing is very important for an SVM model.
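
    A tiny illustration of that point in plain Python (made-up values): two pairs of points with the same relative gap, one pair at the low end and one at the high end of a skewed feature.

    import numpy as np

    gamma = 0.1

    def rbf(a, b):
        return np.exp(-gamma * (a - b) ** 2)

    # same factor-of-2 gap, at opposite ends of a skewed scale
    low_pair, high_pair = (1.0, 2.0), (100.0, 200.0)

    # raw scale: the fixed gamma treats the two pairs completely differently
    print(rbf(*low_pair))    # ~0.90 -> "similar"
    print(rbf(*high_pair))   # ~0.00 -> "totally dissimilar"

    # log scale: both pairs now get the same similarity
    print(rbf(*map(np.log10, low_pair)))
    print(rbf(*map(np.log10, high_pair)))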

    Best

    Marcin

Answers

  • Thomas_Ott RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 1,761 Unicorn

    Great question! I don't know the answer but I will do an incantation for @IngoMierswa and see if he can answer that!

  • Fred12 Member Posts: 344 Unicorn

    hi,

    did you get any answers yet?

  • Pekka_Jounela Member, University Professor Posts: 4 University Professor

    Hi, I guess that's because taking a log10 transformation reduces variance.

    Best

    Pekka

  • Fred12 Member Posts: 344 Unicorn

    thanks, that sounds like a nice and logical explanation :)
