Many commercial tools have sensitivity analysis features for a variety of predictive model types. For instance, IBM SPSS Modeler has it for all predictive models. The purpose is to identify the relative contribution of each independent variable. It usually applies to machine learning techniques. Is there an add-on that does this?
When, if ever, will we have sensitivity analysis capabilities in RapidMiner? SA is the only way to shed some light into the black box of machine learning.
You can, for example, use the Loop Attributes operator and perform an X-Validation on each attribute.
There are also some operators that use statistical methods to estimate the predictive power of a feature. Search for "Weight by" in the operator tree to find all such methods.
Your suggestion "...use the Loop Attributes operator and perform a X-Validation..." is intriguing. Have you ever done it? I tried to set it up with the Iris data set, but I could not make it work. I would appreciate it if you could help me set it up. You can either post it here or send me an email at email@example.com. What would help more than anything else is a process that uses Iris to do just that. THANK YOU!
I can make it work if you tell me your goal more clearly.
I'm familiar with SPSS and SAS tools, so you can describe by relating to their features.
I'm afraid you will end up with a simple correlation score between all features.
A better idea would be small trees built on small feature subsets.
For example, if you have an XOR relation, you will have 0 correlation.
I.e. y = a XOR b, then correlation(a,y) = 0.
If you measure accuracy(y=tree(a,b)) you get better results.
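To make the XOR point concrete, here is a minimal stdlib-only Python sketch (the data is synthetic and the "tree" is just the depth-2 lookup a decision tree would learn): each single input is almost uncorrelated with y, yet the pair predicts it perfectly.

```python
import random
random.seed(0)

# Synthetic data: y = a XOR b on random binary inputs.
n = 1000
a = [random.randint(0, 1) for _ in range(n)]
b = [random.randint(0, 1) for _ in range(n)]
y = [ai ^ bi for ai, bi in zip(a, b)]

def pearson(x, z):
    """Plain Pearson correlation between two equal-length lists."""
    mx, mz = sum(x) / len(x), sum(z) / len(z)
    cov = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
    sx = sum((xi - mx) ** 2 for xi in x) ** 0.5
    sz = sum((zi - mz) ** 2 for zi in z) ** 0.5
    return cov / (sx * sz)

# Each single feature is (nearly) uncorrelated with y ...
print(round(pearson(a, y), 3))  # close to 0
print(round(pearson(b, y), 3))  # close to 0

# ... but a depth-2 split on (a, b) -- equivalent to this XOR
# lookup table -- classifies perfectly:
tree = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
acc = sum(tree[(ai, bi)] == yi for ai, bi, yi in zip(a, b, y)) / n
print(acc)  # 1.0
```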
So my recommendation is: go with feature subset selection and measure the performance of all subsets.
I can make this for you if this is what you want.
But maybe you want something completely different.
What I am trying to do is called Sensitivity Analysis in SPSS and SAS. It applies to machine learning techniques where the contribution of each independent variable is assessed (and eventually rank ordered). The mechanics of it are as follows: independent variables are excluded, one at a time, from the input variable list, and the accuracy of the classifier for each of these models is measured. Based on the degradation of prediction accuracy corresponding to the exclusion of each variable, its contribution to the classifier is judged. Once this is done for each variable, their rank-ordered (relative) importance values are tabulated.
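The leave-one-variable-out procedure described above can be sketched in a few lines of Python. The data and the leave-one-out 1-NN classifier below are stand-ins for whatever model SPSS, SAS, or RapidMiner would train; the ranking logic is the point.

```python
import random
random.seed(1)

# Synthetic data: features x0 and x1 carry signal, x2 is pure noise.
n = 200
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(n)]
y = [1 if row[0] + row[1] > 0 else 0 for row in X]

def knn_accuracy(X, y, keep):
    """Leave-one-out 1-NN accuracy using only the columns in `keep`."""
    hits = 0
    for i in range(len(X)):
        best, best_d = None, float("inf")
        for j in range(len(X)):
            if i == j:
                continue
            d = sum((X[i][k] - X[j][k]) ** 2 for k in keep)
            if d < best_d:
                best_d, best = d, j
        hits += y[best] == y[i]
    return hits / len(X)

full = knn_accuracy(X, y, [0, 1, 2])
# Exclude each variable in turn; the accuracy drop is its importance.
drops = {k: full - knn_accuracy(X, y, [c for c in (0, 1, 2) if c != k])
         for k in (0, 1, 2)}
# Tabulate rank-ordered (relative) importance.
ranking = sorted(drops, key=drops.get, reverse=True)
print(ranking)  # the informative features should outrank the noise column
```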
This is great! Thank you. Can you please answer a few questions regarding your process:
1. What are the exact meanings of p, d, c, a, and n?
2. Why did you not stop the process at Forward selection node? Wouldn't you get everything you need at that point?
3. Can we do the same thing with Loop Attribute Subset? (as was suggested by the moderator in response to my initial inquiry).
p = performance
d = deviation of performance
c = run time in milliseconds
a = iteration counter (not very informative)
n = number of attributes
Look at the Log operator; there you can see how this gets created.
Q3: yes, you can do this with Loop Attributes.
Q2: because looking at single attributes is not informative.
You must always look at small attribute subsets.
Like I said before, it's possible for an attribute to have 0 correlation and still be required for accurate classification.
E.g. some non-linear response effect: averaged over the entire population there is no effect, and only after including another attribute, which splits off the specific group where there is an effect, do you see increased performance.
Relations like this are extremely common, especially for data with a large number of attributes.
If you want something that is very fast, take a look at:
Performance (CFS) (Weka)
Calculates a performance measure based on the Correlation (filter evaluation).
CFS attribute subset evaluator. For more information see: Hall, M. A. (1998). Correlation-based Feature Subset Selection for Machine Learning. Thesis submitted in partial fulfilment of the requirements of the degree of Doctor of Philosophy at the University of Waikato.
This operator creates a filter based performance measure for a feature subset. It evaluates the worth of a subset of attributes by considering the individual predictive ability of each feature along with the degree of redundancy between them. Subsets of features that are highly correlated with the class while having low intercorrelation are preferred.
This operator can be applied on both numerical and nominal data sets.
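The heuristic the operator quote describes can be sketched with Hall's (1998) merit formula. This is a minimal illustration, not the Weka implementation: `r_cf` is the mean feature-class correlation, `r_ff` the mean feature-feature correlation, and `k` the subset size; in practice both correlations would be estimated from the data.

```python
import math

def cfs_merit(r_cf, r_ff, k):
    """CFS merit of a k-feature subset (Hall, 1998):
    high feature-class correlation rewarded, redundancy penalized."""
    return (k * r_cf) / math.sqrt(k + k * (k - 1) * r_ff)

# Two features that each correlate 0.6 with the class:
print(round(cfs_merit(0.6, 0.1, 2), 3))  # low redundancy -> higher merit
print(round(cfs_merit(0.6, 0.9, 2), 3))  # high redundancy -> lower merit
```

This shows why the operator prefers subsets "highly correlated with the class while having low intercorrelation": for the same class correlation, the redundant pair scores strictly lower.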