New Extension: Interpretations - SHAP, LIME and Shapley

sgenzer · Administrator, Moderator, Employee, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator · Posts: 2,959
Dear Community,

I am happy to announce that @pschlunder and I have published a new extension to the Marketplace: Interpretations!

So far, RapidMiner users have had the option to use Explain Predictions to understand WHY a model predicted the way it did. The Explain Predictions operator uses an algorithm by @IngoRM that is focused on speed, understandability, and applicability across a wide range of use cases and data types.

The new operator adds the well-known algorithms LIME, Shapley, and SHAP to the mix. The Generate Interpretations operator has an interface very similar to the familiar Explain Predictions. In fact, it also embeds Explain Predictions, so you can switch between the different algorithms and get different 'opinions' on your predictions.
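
If you want a feel for the kind of values these algorithms compute, here is a small, purely illustrative Python sketch using the open-source shap and lime packages. This is NOT the code of the extension; the model, toy data, and column names are invented for the example.

```python
# Illustrative sketch only - not the extension's implementation. It shows the
# per-example attributions that SHAP and LIME produce, using the open-source
# Python packages on a scikit-learn model. Toy data and column names are made
# up (loosely mimicking the Golf sample data).
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
import shap
from lime.lime_tabular import LimeTabularExplainer

# Toy table: four regular attributes, one numeric label.
X = pd.DataFrame({
    "Outlook":     [0, 1, 2, 0, 1, 2, 0, 1],   # categories encoded as integers
    "Temperature": [85, 80, 83, 70, 68, 65, 64, 72],
    "Humidity":    [85, 90, 78, 96, 80, 70, 65, 95],
    "Wind":        [0, 1, 0, 0, 0, 1, 1, 0],
})
y = [0.2, 0.1, 0.9, 0.8, 0.7, 0.3, 0.9, 0.2]   # e.g. confidence(Play = yes)

model = RandomForestRegressor(random_state=0).fit(X, y)

# SHAP: Shapley-value-based attributions, one value per attribute per example.
shap_values = shap.TreeExplainer(model).shap_values(X)
print(dict(zip(X.columns, shap_values[0])))     # contributions for the first example

# LIME: fits a local surrogate model around one example and reports its weights.
lime_explainer = LimeTabularExplainer(
    X.values, feature_names=list(X.columns), mode="regression"
)
explanation = lime_explainer.explain_instance(X.values[0], model.predict, num_features=4)
print(explanation.as_list())                    # attribute/weight pairs for that example
```

Both snippets print one attribution value per regular attribute for a single example, which is roughly the kind of per-attribute information the Generate Interpretations operator attaches to each example.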

Please be aware that this is a first alpha release of the extension. We are continuously working on improving it, and we appreciate any feedback!

Thanks!
Philipp & Martin


Comments

  • lionelderkrikor · Moderator, RapidMiner Certified Analyst, Member · Posts: 1,195 · Unicorn
    Hi @pschlunder, hi @mschmitz, hi @sgenzer,

    First, thanks for this new extension !
    I briefly tested the extension, and it seems that only part of the importances of the regular attributes are displayed in the "interpretation" column of the example set delivered at the output of the Generate Interpretation operator.

    For example, in the provided process, which uses the "Golf" dataset, there are 4 regular attributes, BUT
    only 3 importance values are displayed in the "importance" column of the resulting example set.

    Please run the process in the attached file to observe this phenomenon.

    Regards,

    Lionel