
All supervised models should, if possible, return attribute weights

yzan Member Posts: 66 Unicorn
edited December 2018 in Product Feedback - Resolved

All supervised operators should, if meaningful, return attribute weights representing the feature importance. If nothing else, a decision tree and a perceptron could provide them.


Fixed and Released · Last Updated

Comments

  • sgenzer Administrator, Moderator, Employee, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    hello @yzan - can you please give us an example to replicate?

     

    Scott

  • yzan Member Posts: 66 Unicorn

    An example of a supervised operator that returns attribute weights is "Generalized Linear Model".

     

    The weights for a decision tree could be calculated in several ways:

    1. Simply return a binary vector whose entries are 1 if the attribute is used in the tree and 0 otherwise (see the sketch after this list).
    2. Or http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
    3. Or http://support.sas.com/documentation/cdl/en/stathpug/68163/HTML/default/viewer.htm#stathpug_hpsplit_details30.htm
    4. Or http://support.sas.com/documentation/onlinedoc/miner/em43/allproc.pdf (pages 54-56).
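
    A minimal sketch of the first two options using scikit-learn's DecisionTreeClassifier (the Iris data is used purely for illustration): approach 1 derives a {0, 1} usage vector from the fitted tree structure, and approach 2 reads the impurity-based importances that scikit-learn already computes.

        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_iris(return_X_y=True)
        tree = DecisionTreeClassifier(random_state=0).fit(X, y)

        # Approach 1: {0, 1} weights - is the attribute used in any split?
        # tree_.feature holds the split feature per node; leaves are marked
        # with a negative value, so only non-negative entries are real splits.
        used = np.zeros(X.shape[1])
        used[np.unique(tree.tree_.feature[tree.tree_.feature >= 0])] = 1

        # Approach 2: the impurity-based importances computed by scikit-learn.
        importances = tree.feature_importances_

        print("binary usage weights:   ", used)
        print("impurity-based weights: ", importances)

    Either vector could then be exposed on a weights output port, the same way "Generalized Linear Model" already does it.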

    For a perceptron, the returned attribute weights could correspond to the weights of the perceptron (they are already visible in the "model" output, but they are not immediately passable to operators like "Select by Weights").
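
    A sketch of the same idea with scikit-learn's Perceptron (the breast-cancer data and the 50th-percentile cutoff are arbitrary choices for illustration): the learned coefficients are taken as attribute weights and then used for a "Select by Weights"-style filter.

        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.linear_model import Perceptron

        X, y = load_breast_cancer(return_X_y=True)
        clf = Perceptron(random_state=0).fit(X, y)

        # The learned coefficients double as attribute weights; their absolute
        # value can drive a "Select by Weights"-style selection downstream.
        weights = np.abs(clf.coef_).ravel()
        keep = weights >= np.percentile(weights, 50)
        X_selected = X[:, keep]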

  • sgenzer Administrator, Moderator, Employee, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    thanks for that, @yzan. Just heard back from the dev team that this is coming soon. :)


    Scott

     

  • yzan Member Posts: 66 Unicorn

    Possibly even "Deep Learning" could return attribute weights, as the backend H2O implementation provides this information, and other algorithms from H2O, like GLM and GBT, already output attribute weights.
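
    A rough sketch of what this looks like through H2O's Python API (the file name "train.csv" and the target column "label" are placeholders, and variable importances have to be switched on explicitly for deep learning):

        import h2o
        from h2o.estimators.deeplearning import H2ODeepLearningEstimator

        h2o.init()

        # Placeholder frame and columns, purely for illustration.
        frame = h2o.import_file("train.csv")
        predictors = [c for c in frame.columns if c != "label"]

        model = H2ODeepLearningEstimator(variable_importances=True, epochs=10)
        model.train(x=predictors, y="label", training_frame=frame)

        # Per-attribute importances, analogous to the weights the GLM operator exposes.
        print(model.varimp(use_pandas=True))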

  • CraigBostonUSA Administrator, Employee, Member Posts: 34 RM Team Member

    Update: As of version 8.0, Decision Tree and Random Forest now provide a new port that outputs feature weights.

     

    https://docs.rapidminer.com/latest/studio/releases/changes-8.0.0.html


    @yzan wrote:

    All supervised operators should, if meaningful, return attribute weights representing the feature importance. If nothing else, a decision tree and a perceptron could provide them.

  • sgenzer Administrator, Moderator, Employee, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    ?
