

farquad Member Posts: 1 Contributor I
edited November 2018 in Help
Using different kernel functions (linear, polynomial, RBF, and sigmoid), different sets of support vectors are extracted. Why, and how?

Likewise, changing the parameters of a kernel produces a different set of support vectors. Why, and how?

Could anyone please explain this to me? I would be thankful.

Thank you very much.


    IngoRM Administrator, Moderator, Employee, RapidMiner Certified Analyst, RapidMiner Certified Expert, Community Manager, RMResearcher, Member, University Professor Posts: 1,751 RM Founder

    Phew, explaining this would be equivalent to explaining the whole idea of kernel-based learning. Instead of replicating everything that has been said and written at least a hundred times, I will give you some hints:

    - Let's say the data cannot be separated by a linear hyperplane in its input space, but can be by a polynomial one. Then the linear hyperplane will cause a lot of errors, and every training point on or inside the margin (including the misclassified ones) becomes a support vector.
    - The same applies to other kernel functions and to inappropriate kernel parameters: the worse the kernel (or its parameter setting) matches the data, the more points violate the margin, and hence the more support vectors you get.
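    The first hint can be made concrete with a small experiment. Below is a minimal sketch using scikit-learn (an assumption for illustration; the thread is about RapidMiner, whose SVM operator exposes the same kernels): fit the same non-linearly-separable data with each kernel and count the resulting support vectors.

    ```python
    # Minimal sketch (assumed scikit-learn, not RapidMiner itself): the same
    # data, four kernels, four different sets of support vectors.
    from sklearn.datasets import make_circles
    from sklearn.svm import SVC

    # Two concentric circles: not separable by a linear hyperplane in input space.
    X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

    n_sv = {}
    for kernel in ("linear", "poly", "rbf", "sigmoid"):
        clf = SVC(kernel=kernel, C=1.0).fit(X, y)
        # Every point on or inside the margin (including misclassified ones)
        # becomes a support vector, so a poorly matched kernel keeps many more.
        n_sv[kernel] = int(clf.n_support_.sum())
        print(kernel, n_sv[kernel])
    ```

    On this data the linear kernel cannot separate the classes at all, so it ends up with far more support vectors than the RBF kernel, which fits the circular boundary easily.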

    If you have any difficulty understanding those two hints, I would suggest learning much more about the way support vector machines work, e.g. at
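    The effect of the kernel parameters mentioned above can be demonstrated the same way (again an assumed scikit-learn sketch, not RapidMiner): keep the kernel fixed and vary only one of its parameters.

    ```python
    # Sketch (assumed scikit-learn): one kernel, three parameter settings,
    # three different support-vector sets.
    from sklearn.datasets import make_circles
    from sklearn.svm import SVC

    X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

    sv_counts = {}
    for gamma in (0.01, 1.0, 100.0):
        clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)
        # gamma reshapes the implicit feature space; the margin changes with it,
        # and with the margin, the set of support vectors.
        sv_counts[gamma] = int(clf.n_support_.sum())
        print(f"gamma={gamma}: {sv_counts[gamma]} support vectors")
    ```

    A very small gamma behaves almost like a linear kernel (many margin violations), while a very large gamma overfits locally, so the support-vector counts differ across the settings.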

