Lasso

Ben Member Posts: 13 Contributor II
edited June 2019 in Help
Hi,
I was wondering whether there are any Lasso implementations in RM (least-angle regression)? Or am I just too blind to find one?

Greetings
Ben

Answers

  • land RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 2,531 Unicorn
    Hi Ben,
    it's currently not included, but we are planning to implement it in the near future. The problem is that the optimization is not as easy as for, e.g., ridge regression or simple linear regression: it does not collapse to a closed form and hence needs a more complex iterative optimization. If you have a suggestion or an implementation at hand, feel free to contribute :)
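
    Just to sketch what I mean, a toy numpy illustration (not our planned implementation, and not RapidMiner code): ridge is a single linear solve, while the lasso needs an iterative scheme such as coordinate descent (LARS would be another option).

    ```python
    import numpy as np

    def ridge(X, y, lam):
        # Closed form: beta = (X'X + lam*I)^(-1) X'y -- one linear solve.
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

    def lasso_cd(X, y, lam, n_iter=100):
        # No closed form: cycle a soft-thresholding update over the
        # coefficients (a fixed iteration count here, for brevity).
        n, p = X.shape
        beta = np.zeros(p)
        col_sq = (X ** 2).sum(axis=0)
        for _ in range(n_iter):
            for j in range(p):
                r_j = y - X @ beta + X[:, j] * beta[j]  # partial residual
                rho = X[:, j] @ r_j
                beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
        return beta
    ```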

    Greetings,
      Sebastian
  • Ben Member Posts: 13 Contributor II
    FYI, I have now implemented LARS (Least Angle Regression). I will add the LASSO modification in the next few days and release the operator, probably next week, in the new version of my Microarray Feature Selection Plugin.

    Ben
  • Ben Member Posts: 13 Contributor II
    And there it is:
    Least Angle Regression (LARS) and LASSO ready to serve you at:

    https://sourceforge.net/projects/rm-featselext/

    It's a pretty straightforward implementation of the LARS algorithm described in "Least Angle Regression" by Efron et al. (2004) in The Annals of Statistics.
    It works so far, but I have not (yet) included the speed optimization described in a later section of the paper.
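
    In case anyone wants to see the shape of the algorithm without digging into the plugin sources, here is a compact numpy sketch of the basic LARS loop from the paper (illustration only, ignoring ties and numerical edge cases; the operator itself is Java, so this is not the plugin code):

    ```python
    import numpy as np

    def lars(X, y, max_steps=None):
        """Basic LARS loop after Efron et al. (2004). Assumes the columns
        of X are standardized and y is centered."""
        n, p = X.shape
        steps = p if max_steps is None else min(max_steps, p)
        mu = np.zeros(n)            # current fit
        beta = np.zeros(p)          # current coefficients
        active = []                 # indices of active predictors
        for _ in range(steps):
            c = X.T @ (y - mu)      # correlations with the residual
            C = np.max(np.abs(c))
            j = int(np.argmax(np.abs(c)))
            if j not in active:
                active.append(j)
            s = np.sign(c[active])  # signs of the active correlations
            Xa = X[:, active] * s   # sign-adjusted active columns
            G1 = np.linalg.solve(Xa.T @ Xa, np.ones(len(active)))
            A = 1.0 / np.sqrt(G1.sum())
            w = A * G1
            u = Xa @ w              # equiangular direction
            a = X.T @ u
            cands = []
            for k in range(p):
                if k in active:
                    continue
                for num, den in ((C - c[k], A - a[k]), (C + c[k], A + a[k])):
                    if den > 1e-12 and num / den > 1e-12:
                        cands.append(num / den)
            gamma = min(cands) if cands else C / A  # last step: reach the OLS fit
            mu += gamma * u
            beta[active] += gamma * w * s
        return beta
    ```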

    Enjoy,
    Ben
  • dragoljub Member Posts: 241 Contributor II
    Sweet thanks for the great work!

    -Gagi  ;D
  • wessel Member Posts: 537 Maven
    Cool.

    I also really like the added screenshots.

  • dragoljub Member Posts: 241 Contributor II
    Your plug-in looks great. I am considering using your RFE operator; however, I am not 100% sure how it differs from optimizing weights using backward elimination.

    It seems that RFE reduces the selected features by removing the least-weighted features and re-weighting. Backward elimination seems to do the same thing (drop the worst features), but it is more tied to the performance of a specific model and will likely blow up, since many combinations of dropped features must be tried.

    Any input is appreciated.  ;D

    Thanks,
    -Gagi
  • dragoljub Member Posts: 241 Contributor II
    I have another question. Does the RFE operator also handle +/- feature weights? For example, the SVM operator will provide the actual +/- weights, whereas the Weight by SVM operator will provide normalized weights. Do we need normalized weights to use RFE?  :D

    Thanks,
    -Gagi
  • Ben Member Posts: 13 Contributor II
    I had to check on this :-)

    So for versions <= 1.0.3, the following holds:

    The SVM-RFE operator uses the absolute weights of the inner SVM.
    In detail: an invisible SVM-Weighting operator calculates the weights, which may contain negative values.
    Then the weights are normalized. The normalization method maps the absolute values to the range [0..1]. That means negative weights with a large absolute value will have a high value (close to 1) after normalization. See AttributeWeights.normalize() for details.

    The RFE operator with an arbitrary subprocess does not (yet) use absolute values. This means that if you want negative weights with a high absolute value to remain in the selection, you must normalize them or convert them to absolute values yourself.
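
    If I read the code right, the mapping is essentially this (a rough Python sketch of what I believe AttributeWeights.normalize() does; check the RapidMiner sources for the authoritative version):

    ```python
    import numpy as np

    def normalize_abs(weights):
        # Rescale the *absolute* weights linearly into [0..1]; a weight of
        # -5 among weights in [-5, 2] therefore ends up at 1.0.
        a = np.abs(np.asarray(weights, dtype=float))
        lo, hi = a.min(), a.max()
        return np.zeros_like(a) if hi == lo else (a - lo) / (hi - lo)

    print(normalize_abs([-3.0, 0.5, 2.0]))  # -> [1.  0.  0.6]
    ```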

    Wait a second.
    ...
    I added a parameter "use_absolute_weights" to the RFE operator. It is available in the SVN version (trunk) but not yet released as a prebuilt JAR.



    Regarding the question in your previous post, RFE vs. Backward Elimination: they are quite different! :-)
    First, the similarity: BE and RFE both remove features from the full set of features, BUT:
    In every round, BE has to evaluate the prediction performance of removing each feature by cross-validation, bootstrapping, or the like. That's very expensive.
    RFE, in contrast, only has to calculate one weight vector per round and then removes the feature(s) with the smallest weights.
    The RFE operator usually removes several attributes in one round, whereas the BE operator only removes one feature each round (I'm not 100% sure about that).
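
    To make the cost difference concrete, here is a schematic sketch (toy Python with stand-in weighting/scoring helpers, not the operators' actual code):

    ```python
    import numpy as np

    def weight_features(X, y):
        # Stand-in for the inner weighting scheme (the operator uses an SVM);
        # here: absolute correlation with the label, purely for illustration.
        Xc, yc = X - X.mean(axis=0), y - y.mean()
        return np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)

    def score(X, y):
        # Stand-in scorer: R^2 of a least-squares fit (a real BE would
        # cross-validate here, multiplying the cost further).
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return 1 - (r @ r) / ((y - y.mean()) @ (y - y.mean()))

    def rfe(X, y, n_keep, chunk=5):
        # One weighting pass per round; drop the lowest-weighted chunk.
        feats = list(range(X.shape[1]))
        while len(feats) > n_keep:
            drop = min(chunk, len(feats) - n_keep)
            worst = set(np.argsort(weight_features(X[:, feats], y))[:drop])
            feats = [f for i, f in enumerate(feats) if i not in worst]
        return feats

    def backward_elimination(X, y, n_keep):
        # Every round fits and scores len(feats) candidate subsets --
        # that is the expensive part described above.
        feats = list(range(X.shape[1]))
        while len(feats) > n_keep:
            scores = [score(X[:, [f for f in feats if f != g]], y) for g in feats]
            feats.pop(int(np.argmax(scores)))  # drop the least useful feature
        return feats
    ```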


    Merry Christmas,
    Ben
  • dragoljub Member Posts: 241 Contributor II
    Thanks Ben!

    I will have to get the latest version once you build the JAR. For now, I will use Weight by SVM for normalized weights.

    Thanks and Merry Christmas,
    -Gagi
  • dragoljub Member Posts: 241 Contributor II
    Hey Ben,

    Is it possible to return the non-normalized weights at the end of your operator? Say I want the 10 best features after RFE, but I also want to know the relative ranking of these 10 features. It would be great to output the actual weights of the final iteration rather than just 0/1.  ;D

    Thanks,
    -Gagi
  • Ben Member Posts: 13 Contributor II
    Hi!

    I'm sorry, but that's not possible at the moment, and it would interfere with the purpose of feature selection. I think you would have to apply another round of SVM-Weighting on the selected features.

    I'll let you know if I find a workaround for that.

    Ben
  • dragoljub Member Posts: 241 Contributor II
    Oh, OK, that makes sense. I will give that a try.

    Thanks Ben!
    -Gagi