
How to plot Stability and/or Accuracy versus number of features?

meloamaury Member Posts: 8 Contributor II
edited March 2020 in Help

Hi all,

I would like to plot the stability of a feature selection operator as a function of the number of features (I would like to reproduce Fig. 6 of the attached .pdf, which I believe is useful for the community). For instance, I can use the "Feature Selection Stability Validation" operator that comes with the Feature Selection Extension. Inside this operator, I could use any other feature selection operator, e.g., "MRMR-FS" or "SVM-RFE". Then I would like to plot the stability of the feature selection against the number of features. I believe this would give me a better feeling for the number of features to keep for further processing and modelling.

The same idea could be used to plot any performance metric, runtime, etc., against the number of features: a sort of "learning curve", but with the number of features instead of the number of examples.
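For readers who want to see the idea outside RapidMiner, here is a minimal Python sketch of such a stability-vs-k curve. It assumes a simple correlation-based ranking as the feature selector and average pairwise Jaccard similarity over bootstrap resamples as the stability measure; all names and the synthetic data are illustrative, not the extension's internals.

```python
import numpy as np

def top_k_by_correlation(X, y, k):
    # Illustrative selector: rank features by |correlation| with the label.
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return set(np.argsort(scores)[::-1][:k])

def jaccard(a, b):
    # Jaccard similarity of two feature subsets.
    return len(a & b) / len(a | b)

def stability_curve(X, y, ks, n_resamples=10, seed=0):
    # For each k: select top-k features on bootstrap resamples and average
    # the pairwise Jaccard similarity of the selected subsets.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    curve = []
    for k in ks:
        subsets = []
        for _ in range(n_resamples):
            idx = rng.choice(n, size=n, replace=True)  # bootstrap resample
            subsets.append(top_k_by_correlation(X[idx], y[idx], k))
        pairs = [jaccard(subsets[i], subsets[j])
                 for i in range(n_resamples) for j in range(i + 1, n_resamples)]
        curve.append(float(np.mean(pairs)))
    return curve

# Synthetic data with only two informative features.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
ks = [1, 2, 5, 10, 20]
print(dict(zip(ks, np.round(stability_curve(X, y, ks), 3))))
```

Plotting the returned values against `ks` gives exactly the stability-vs-number-of-features curve described above.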

 

I hope the question is clear enough and I thank you all for your input.

Merci,

Amaury


Best Answer

  • IngoRM Administrator, Moderator, Employee, RapidMiner Certified Analyst, RapidMiner Certified Expert, Community Manager, RMResearcher, Member, University Professor Posts: 1,751  RM Founder
    Solution Accepted

    Hi Amaury,

     

    In there you used the Sonar data set and the NB classifier. From some basic tests, I see that the results for the Pareto front depend on which classifier is used inside the Validation operator.

     

    That is correct.  I think this is actually something positive, since the feature weighting / importance, and the question of whether a feature should be used or not, is then well matched to the model itself, which typically leads to better accuracies.  This is called the "wrapper approach", by the way.  If you filter attributes out without taking the specific model into account, we call this the "filter approach".  The wrapper approach generally delivers better results but needs longer runtimes for model building and validation.
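The filter/wrapper distinction can be sketched in a few lines of Python. Everything here is an illustrative stand-in, not RapidMiner internals: the filter uses a model-agnostic correlation score, while the wrapper does greedy forward selection judged by the cross-validated accuracy of a tiny nearest-centroid classifier.

```python
import numpy as np

def filter_select(X, y, k):
    # Filter approach: rank features by a model-agnostic score
    # (absolute correlation with the label) and keep the top k.
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return list(np.argsort(scores)[::-1][:k])

def cv_accuracy(X, y, folds=5):
    # Tiny nearest-centroid classifier, evaluated with cross-validation.
    n = len(y)
    acc = []
    for f in range(folds):
        test = np.arange(n) % folds == f
        c0 = X[~test][y[~test] == 0].mean(axis=0)
        c1 = X[~test][y[~test] == 1].mean(axis=0)
        pred = (np.linalg.norm(X[test] - c1, axis=1)
                < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
        acc.append((pred == y[test]).mean())
    return float(np.mean(acc))

def wrapper_select(X, y, k):
    # Wrapper approach: greedy forward selection, judging each candidate
    # feature by the cross-validated accuracy of the actual model.
    chosen = []
    while len(chosen) < k:
        best = max((j for j in range(X.shape[1]) if j not in chosen),
                   key=lambda j: cv_accuracy(X[:, chosen + [j]], y))
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))
y = (X[:, 0] - X[:, 3] > 0).astype(int)
print("filter :", filter_select(X, y, 3))
print("wrapper:", wrapper_select(X, y, 3))
```

The runtime difference is visible even in this toy: the filter scores each feature once, while the wrapper re-validates the model for every candidate feature at every step.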

     

    My problem consists of around 800 examples and 2000 attributes. I have built a process where I use a "Select Subprocess" and inside of it I have different "Optimize Grid" operators containing different classifiers (e.g., LogReg, RandomForest, SVM, etc.). After this long run, I compare the ROCs for the different classifiers obtained with the best set of parameters found by the "Optimize Grid" operators.

     

    That makes sense.  You could in theory wrap the whole model building and validation process into the MO feature selection, but this might run for a long time.  An alternative is to do the model selection and parameter optimization on all features beforehand and then only use the best model so far inside the MO feature selection.  Or you could first filter some features out (filter approach), then optimize the model / parameters, and then run the MO FS.  There is really no right or wrong here in my opinion.  I personally use an iterative approach most of the time: filter some features out, find some good model candidates, optimize parameters a little bit, run a feature selection, then optimize parameters further, and so on...

     

    Hope this helps,

    Ingo

     


Answers

  • sgenzer Administrator, Moderator, Employee, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959  Community Manager

    Hello @meloamaury - I'm tagging @mschmitz and @IngoRM in hopes they may be able to help.


    Scott

     

  • IngoRM Administrator, Moderator, Employee, RapidMiner Certified Analyst, RapidMiner Certified Expert, Community Manager, RMResearcher, Member, University Professor Posts: 1,751  RM Founder

    Hi @meloamaury,

     

    Although I am sure you could build such a process, I would like to recommend an alternative approach: have you considered generating such a plot with a multi-objective feature selection?  The big advantage is that you do not run into local extrema while adding features; the feature compositions can (and actually will) change for different feature set sizes.  I find this much more useful in most practical applications, to be honest.
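The core of the multi-objective idea can be illustrated with a tiny Pareto-front computation over (number of features, accuracy) pairs: a feature set survives only if no other set is both smaller and at least as accurate. The candidate pairs below are made up for illustration.

```python
def pareto_front(candidates):
    """candidates: list of (n_features, accuracy) tuples.
    Keep every candidate not dominated by a smaller, at-least-as-accurate one."""
    front = []
    for n, acc in candidates:
        dominated = any(m <= n and a >= acc and (m, a) != (n, acc)
                        for m, a in candidates)
        if not dominated:
            front.append((n, acc))
    return sorted(front)

candidates = [(2, 0.81), (3, 0.84), (3, 0.79), (5, 0.86), (8, 0.86), (10, 0.88)]
print(pareto_front(candidates))  # → [(2, 0.81), (3, 0.84), (5, 0.86), (10, 0.88)]
```

Plotting the front directly gives the accuracy-vs-number-of-features trade-off curve, with a different (possibly recomposed) feature set at each size.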

     

    If you are interested, this blog post might be for you:

     

    https://rapidminer.com/multi-objective-optimization-feature-selection/

     

    There will also be a webinar on this topic on Jan 24th; it will be announced here soon:

     

    https://rapidminer.com/resources/events-webinars/

     

    Cheers,

    Ingo

     

     

  • sgenzer Administrator, Moderator, Employee, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959  Community Manager

    I knew I tagged the right people. :)  Thanks, @IngoRM.

     

    Scott

     

     

  • mschmitz Administrator, Moderator, Employee, RapidMiner Certified Analyst, RapidMiner Certified Expert, University Professor Posts: 3,055  RM Data Scientist

    Hi @meloamaury,

     

    doesn't the FS extension provide performance measures for stability, e.g. the Jaccard index? I did this with this extension in my PhD thesis. You basically do an optimize + FS plus this performance operator and are done.
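For reference, a minimal sketch of the Jaccard-index stability measure mentioned here: compare the feature subsets selected on different folds or resamples, and average the pairwise similarities. The attribute names and fold subsets are made up.

```python
def jaccard(a, b):
    # Jaccard index: |intersection| / |union| of two feature subsets.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Feature subsets selected on three hypothetical folds:
folds = [{"att1", "att2", "att3"},
         {"att1", "att2", "att5"},
         {"att1", "att3", "att5"}]

pairs = [jaccard(folds[i], folds[j])
         for i in range(len(folds)) for j in range(i + 1, len(folds))]
stability = sum(pairs) / len(pairs)
print(round(stability, 3))  # → 0.5
```

A value of 1.0 would mean every fold selected exactly the same attributes; values near 0 mean the selection barely overlaps between folds.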

     

    Let me know if it works. I am not at my work computer, so I can't provide a process yet.

     

    Best,

    Martin

    - Head of Data Science Services at RapidMiner -
    Dortmund, Germany
  • meloamaury Member Posts: 8 Contributor II

    Hi @mschmitz,

     

    Thanks for your input. Yes, there is the "Performance (MRMR)" operator. However, from what I understood of the .pdf I attached in my message, if we use the "Feature Selection Stability Validation" operator with any FS operator, say "MRMR-FS", inside of it, this already gives us an averaged stability, but as a single value, not a curve as a function of the number of attributes. If you could please send me the process you mention, I might be able to better understand your suggestion.

    Ingo's suggestion on multi-objective feature selection is interesting, although I am still not sure whether it will depend strongly on the classifier you use to select the features.

  • mschmitz Administrator, Moderator, Employee, RapidMiner Certified Analyst, RapidMiner Certified Expert, University Professor Posts: 3,055  RM Data Scientist

    Hi @meloamaury.

    attached is an example process. I think this is how Ben, the author, thought it should be used.

     

    Best,

    Martin

    <?xml version="1.0" encoding="UTF-8"?><process version="8.0.001">
      <context>
        <input/>
        <output/>
        <macros/>
      </context>
      <operator activated="true" class="process" compatibility="8.0.001" expanded="true" name="Process">
        <process expanded="true">
          <operator activated="true" class="retrieve" compatibility="8.0.001" expanded="true" height="68" name="Retrieve Sonar" width="90" x="112" y="85">
            <parameter key="repository_entry" value="//Samples/data/Sonar"/>
          </operator>
          <operator activated="true" class="concurrency:loop_parameters" compatibility="8.0.001" expanded="true" height="82" name="Loop Parameters" width="90" x="246" y="85">
            <list key="parameters">
              <parameter key="MRMR-FS.k" value="[0.0;10;10;linear]"/>
            </list>
            <process expanded="true">
              <operator activated="true" class="featselext:feature_selection_stability_evaluator" compatibility="1.1.004" expanded="true" height="103" name="Stability" width="90" x="112" y="34">
                <process expanded="true">
                  <operator activated="true" class="featselext:mrmr_feature_selection" compatibility="1.1.004" expanded="true" height="82" name="MRMR-FS" width="90" x="45" y="34"/>
                  <operator activated="true" class="select_by_weights" compatibility="8.0.001" expanded="true" height="103" name="Select by Weights" width="90" x="179" y="34"/>
                  <connect from_port="exampleset" to_op="MRMR-FS" to_port="example set"/>
                  <connect from_op="MRMR-FS" from_port="weights" to_op="Select by Weights" to_port="weights"/>
                  <connect from_op="MRMR-FS" from_port="example set" to_op="Select by Weights" to_port="example set input"/>
                  <connect from_op="Select by Weights" from_port="weights" to_port="weights"/>
                  <portSpacing port="source_exampleset" spacing="0"/>
                  <portSpacing port="sink_weights" spacing="0"/>
                </process>
              </operator>
              <operator activated="true" class="log" compatibility="8.0.001" expanded="true" height="82" name="Log (2)" width="90" x="313" y="34">
                <list key="log">
                  <parameter key="Robustness" value="operator.Stability.value.robustness"/>
                  <parameter key="consistency" value="operator.Stability.value.consistency"/>
                  <parameter key="k" value="operator.MRMR-FS.parameter.k"/>
                </list>
              </operator>
              <operator activated="false" class="log" compatibility="8.0.001" expanded="true" height="68" name="Log" width="90" x="380" y="289">
                <list key="log"/>
              </operator>
              <connect from_port="input 1" to_op="Stability" to_port="exampleset"/>
              <connect from_op="Stability" from_port="robustness" to_op="Log (2)" to_port="through 1"/>
              <connect from_op="Stability" from_port="exampleset" to_port="output 1"/>
              <portSpacing port="source_input 1" spacing="0"/>
              <portSpacing port="source_input 2" spacing="0"/>
              <portSpacing port="sink_performance" spacing="0"/>
              <portSpacing port="sink_output 1" spacing="0"/>
              <portSpacing port="sink_output 2" spacing="0"/>
            </process>
          </operator>
          <operator activated="false" class="concurrency:optimize_parameters_grid" compatibility="8.0.001" expanded="true" height="124" name="Optimize Parameters (Grid)" width="90" x="246" y="289">
            <list key="parameters"/>
            <process expanded="true">
              <portSpacing port="source_input 1" spacing="0"/>
              <portSpacing port="sink_performance" spacing="0"/>
              <portSpacing port="sink_model" spacing="0"/>
              <portSpacing port="sink_output 1" spacing="0"/>
            </process>
          </operator>
          <connect from_op="Retrieve Sonar" from_port="output" to_op="Loop Parameters" to_port="input 1"/>
          <connect from_op="Loop Parameters" from_port="output 1" to_port="result 1"/>
          <portSpacing port="source_input 1" spacing="0"/>
          <portSpacing port="sink_result 1" spacing="0"/>
          <portSpacing port="sink_result 2" spacing="0"/>
        </process>
      </operator>
    </process>
    - Head of Data Science Services at RapidMiner -
    Dortmund, Germany
  • meloamaury Member Posts: 8 Contributor II

    Hi @IngoRM,

     

    I had not thought about that, actually, and of course it makes sense. I read your blog post and it is very interesting (I really regret not being active in this community from the beginning :)). I have a question regarding your multi-objective feature selection process.

    In there you used the Sonar data set and the NB classifier. From some basic tests, I see that the results for the Pareto front depend on which classifier is used inside the Validation operator. Also, each of these classifiers has its own set of parameters, which one should tune with an "Optimize Grid" operator.

     

    My problem consists of around 800 examples and 2000 attributes. I have built a process where I use a "Select Subprocess" and inside of it I have different "Optimize Grid" operators containing different classifiers (e.g., LogReg, RandomForest, SVM, etc.). After this long run, I compare the ROCs for the different classifiers obtained with the best set of parameters found by the "Optimize Grid" operators.

     

    But before all this, I am doing some crude feature selection with "MRMR-FS", where I choose a fixed number of attributes to pass to the "Select Subprocess". In this step, I would like to use a robust approach like the one you suggested. That's where I am concerned, because the multi-objective feature selection will already depend on the classifier and on its parameters, which I only find after doing the feature selection.

    Could you please let me know what you think?

    Thanks very much indeed!

    Amaury

     

  • meloamaury Member Posts: 8 Contributor II

    Hi @IngoRM,

     

    Thanks a lot for your answers/suggestions, very much appreciated. I will try different schemes and see how they perform; the run time is indeed going up considerably, but I think it is still manageable.

    Merci!

    Amaury

  • meloamaury Member Posts: 8 Contributor II

    Hi @IngoRM,

    Sorry to disturb you again, but you mention that:

     

    I personally use an iterative approach most of the times.  Filter some features out.  Find some good model candidates.  Optimize parameters a little bit.  Run a feature selection.  Then optimize parameters further and so on...

     

    Would you have an RM process that does this iterative approach automatically? I am very curious to know how you would build such a process. If you have it and can share it, I will try to modify it for my own problem.

    Thanks very much in advance!

    Amaury

  • IngoRM Administrator, Moderator, Employee, RapidMiner Certified Analyst, RapidMiner Certified Expert, Community Manager, RMResearcher, Member, University Professor Posts: 1,751  RM Founder

    No, unfortunately not.  I am sure one could build such a process, but I actually prefer to see the intermediate results and base detailed decisions on them, which then adapt the next steps of the iterative process.  That's why I keep the parts separated.  I also often find that you need to make quite a lot of adaptations to the data prep for each phase of the process, since data is always somewhat different :-)

     

    On a related note: I will do a meetup in NYC next week on the topic of multi-objective feature selection.  Details are here: https://www.meetup.com/RapidMiner-Data-Science-Machine-Learning-MeetUp-New-York/events/245644467/

     

    I also will do a webinar on the same topic on January 24th 2018.  Details are here: https://rapidminer.com/resource/webinar-better-machine-learning-models-multi-objective-optimization/

     

    Cheers,

    Ingo

     

     

  • meloamaury Member Posts: 8 Contributor II

    Thanks a lot for the info. Yes, I am registered for the webinar.

    Best,

    Amaury
