# The new Get Local Interpretation Operator...

Colleagues:

I have been experimenting with the Get Local Interpretation operator introduced in 7.6, which is included in the RapidMiner extension "Operator Toolbox" under the "Models" sub-folder. I have tried using this operator with a grouped model (normalization and kNN model) and get an error. The error (screenshot attached) states that I am passing the wrong type of input to the operator.

With a kNN, I have heard that it is best to normalize the input data and then group the normalization model with the kNN model using the Group Models operator. The error message seems to indicate that the Get Local Interpretation operator cannot accept a Grouped Model as input.
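For readers outside RapidMiner, the same idea - bundling the fitted normalization with the learner so that unseen data is scaled using the training statistics - can be sketched in scikit-learn terms (a rough analogy, not the RapidMiner implementation):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = load_iris(return_X_y=True)

# The pipeline plays the role of Group Models: the fitted scaler travels
# with the k-NN model, so new data is normalized with the training
# statistics before prediction.
grouped = make_pipeline(MinMaxScaler(), KNeighborsClassifier(n_neighbors=5))

# Inside cross-validation, each training fold is normalized separately
# before the learner sees it.
scores = cross_val_score(grouped, X, y, cv=5)
print(round(scores.mean(), 2))
```

Fitting the scaler inside each fold, rather than once on the whole dataset, avoids leaking information from the test fold into the normalization.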

I would be grateful for any advice as to how to get around this, if at all possible, given the usefulness of the Get Local Interpretation operator, and given that it is sometimes necessary to normalize the input data that will be processed by a learner.

Best wishes, Michael Martin

## Answers

Hello Thomas:

Thanks for your reply. I think the attached screenshots may help. The mod input port for the Get Local Interpretation operator is receiving a Grouped Model (from a Multiply operator). The error seems to be related to the fact that the GLI operator wants a prediction model, not a Grouped Model that includes a prediction model.

If you have any suggestions as to how I could work around this, I would be grateful, as it is sometimes important to normalize the data - especially with a kNN learner, and within a Cross Validation operator, when you want the data in each fold normalized separately before it is fed to the learner.

Thanks for considering this if you get a chance, and best wishes,

Michael

Dear Michael,

I am the author of the operator, and I messed this up. The operator expects a prediction model, which is different from a grouped (or meta) model.

I will check how to fix this.

For now: normalize the data before GLI and just use the pure k-NN inside it. That should work.

Best,

Martin

Dortmund, Germany

Dear Michael,

I've sent you an updated version of the Toolbox which is able to handle GroupedModels.

A bit of background: RapidMiner has a model class (AbstractModel) which has various child classes. I expected to get a PredictionModel; most RapidMiner models are implementations of this. I have now learned that GroupedModel (and some others) aren't. I've switched the implementation so that you can connect any (Abstract)Model.
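The type-hierarchy issue can be illustrated with a small Python sketch (the class names mirror the Java classes mentioned above; this is an illustration, not the actual RapidMiner code):

```python
class AbstractModel:
    """Base class for all models (mirrors RapidMiner's AbstractModel)."""

class PredictionModel(AbstractModel):
    """A model that produces a label when applied."""

class KNNModel(PredictionModel):
    pass

class GroupedModel(AbstractModel):
    """A container of models - an AbstractModel, but NOT a PredictionModel."""
    def __init__(self, models):
        self.models = models

knn = KNNModel()
grouped = GroupedModel([AbstractModel(), knn])

# The original check: only PredictionModel instances pass.
print(isinstance(knn, PredictionModel))      # True  -> accepted
print(isinstance(grouped, PredictionModel))  # False -> "wrong input type" error

# The fix described above: accept any AbstractModel instead.
print(isinstance(grouped, AbstractModel))    # True  -> accepted
```

The downside Martin mentions follows directly: once the check is widened to AbstractModel, label-free models like a pure Normalization also pass it.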

Downside: you can now also connect models which make no sense (Normalization, Nom2Numeric, ...). I need to figure out a way to check whether the connected model creates a label; I am not sure how yet. In the version I shared, it simply crashes in the inner operator.

Best,

Martin

Dortmund, Germany

Dear Martin:

Many thanks for your message - and also many thanks for the explanation! ;-) I checked for an update to the Toolbox extension in the Marketplace, and I don't see anything in my email inbox either. How can I update the Toolbox to the new version?

I will then try it out and let you know how things go. GLI is a very useful operator - thank you for developing it! I have used GLI in sales presentations (with outputs visualized in Tableau), and business people really like it. The examples I used were based on Decision Tree learners. One thing I want to do is feed various Decision Tree model outputs to GLI and to Tree to Rules, and see what is the same and what is different.

Best regards, Michael ;-)

Hi Michael,

What you can do is use an Optimize operator inside GLI and take the decision tree which describes the data best. Is that what you need?

Once I manage to make GLI runnable on single examples, one could simply put a loop over models around it.

Best,

Martin

Dortmund, Germany

Hello Martin:

I just tested a process in which I normalize the data (0-1 rather than z-score, which produces negative values that GLI seems not to like) before feeding the data to Optimize Parameters, etc. I used the configuration from one of the tutorials (Use Weight by Gini Index) in the help for the GLI operator within my process.

The process runs through OK, but I get no "Decision Tree Path" (the field is blank for all rows in the dataset) in the output. I do get Importance values for various attributes.

What I would like to be able to do is pass my grouped model to the GLI operator, as per my original post. If I understood you correctly, there is now a version of the GLI operator that would allow me to do this. Is there an updated operator that I can get access to?

I can also see that I should do some studying to make sure I am using the GLI operator correctly. It enables great outputs, but the driver (me) has to drive the car (the GLI operator) correctly!

Best wishes, Michael ;-)

Hi Martin:

I now get a Decision Tree Path in the test process I wrote about a little while ago - I was not using enough attributes to generate one. As I know that data very well, I can tell that the Decision Tree Path is not a complete map of the data - but I am sure that is because I need to do some optimising of the learner within the GLI operator.

If there is a way for me to pass a Grouped Model to the GLI, that would be great. ;-)

Best regards, Michael

Hi Michael,

check your Private Messages. I've sent you a version with a working GLI for grouped models. I will investigate the issue with negative values.

Best,

Martin

Edit: negative data works fine for me? See the attached example process.

Dortmund, Germany

Hello Martin:

Thank you for your latest message - I will look at the process you shared. I checked my private messages (in my profile) regarding the GLI operator; there is one private message from you, but it is from last summer. Perhaps I am looking in the wrong place for the new message?

Please let me know where else I should look when you get a chance.

MfG, and thanks,

Michael ;-)

Hello again, Martin:

Thank you for your process, which I have looked at - and yes, there is no problem with negative values. I need to look back at my own process to make sure I am doing things correctly; I very well may have screwed it up.

I look forward to working with the new GLI operator, as it is very important and meets a real need. Thanks for all of the time you spent developing it. ;-)

Best regards, Michael

Colleagues:

I am now using version 0.5.1 of the Get Local Interpretation operator, which allows Grouped Models to be input to the incoming mod port of this operator - works great.

I have put an Optimize Parameters Grid operator inside the GLI operator in order to optimize the model that generates the interpretations.

The parameters I set up for a Decision Tree within Optimize Parameters Grid involve about 5,900 combinations of parameter settings. When I run the process, I notice that the Optimize Parameters operator within the GLI operator runs multiple times - as if it's running within a loop. The result is that it takes Optimize Parameters several hours to run before GLI can finally deliver the interpretation. I'm now at 6 iterations of the Optimize Parameters operator and counting - which adds up to approximately 30,000 combinations being tested instead of the original (approximately) 5,900.

Can anyone explain why this is happening? Is there a parameter I can set that controls how many times Optimize Parameters Grid will loop within a Get Local Interpretation operator?

Thanks for considering this, and best wishes, Michael

Michael,

The reason for this is that GLI is a loop. In order to get a local interpretation, we build one model for every data point. Think of a local model as a local approximation, similar to a Taylor expansion around a point. This local model needs to be calculated for every data point in your example set. That's why you calculate #examples * #optimization_steps decision trees.
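The loop Martin describes can be sketched roughly as follows - a LIME-style illustration in scikit-learn under assumed details, not the operator's actual code. One small surrogate tree is fitted per example, so any parameter optimization placed inside the loop is repeated for every example:

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
nn = NearestNeighbors(n_neighbors=30).fit(X)

local_trees = []
for i in range(len(X)):
    # Neighborhood around example i - the "point" of the Taylor-expansion
    # analogy above.
    _, idx = nn.kneighbors(X[i:i + 1])
    neighborhood = idx[0]
    # One local surrogate model per example. With an Optimize Parameters
    # operator at this step, the total cost becomes
    # #examples * #optimization_steps model fits.
    tree = DecisionTreeClassifier(max_depth=3).fit(X[neighborhood], y[neighborhood])
    local_trees.append(tree)

print(len(local_trees))  # one local model per example
```

This is why the Optimize Parameters Grid operator appeared to "loop": it runs once per example, not once per process.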

There are two features I would like to implement to make this faster:

I will share a slide deck with more details privately; I think that will help. I need to turn this into a blog post / video at some point.

Best,

Martin

Dortmund, Germany

Hi Martin - thank you for your reply, which clears up some questions - and thanks for sending along the slides when you have a chance. I do have one remaining question at this point. I am feeding the GLI operator a grouped model. The grouped model contains a normalization operator. I am generating predictions for data the model hasn't seen yet and then feeding the model and the predictions to the GLI operator.

Since the grouped model contains a normalize operation, the (labeled) data coming into the GLI operator has already been normalized.

Should I feed a copy of the new data that has not yet been normalized into the GLI operator (i.e. the same data that was fed into the model to generate predictions) along with the grouped model?

I ask because when I look at the decision tree path generated by the GLI operator (the run took 7 hours and 43 minutes), the values in the text string describing the path do not map to the values of the data fields the path is describing. I am probably doing something wrong.

Attached are three files that show the process: 01_GLI_Top_Level_Process.jpg shows the process at the top level, 02_GLI_Process_Middle_Level.jpg shows it at the middle level, and 03_GLI_Process_Inner_Level_.jpg shows the inner level.

You will see in the "middle level" screenshot that I used the Remember operator to remember the optimal parameter settings from Optimize Parameters Grid and then Recall them, as none of the three outgoing ports of the GLI operator would accept the parameters. If I am mistaken about this point, please correct me.

Thanks for considering this when you have a chance, and I look forward to seeing the slide deck you mentioned. The GLI operator is a really important one, and I look forward to using it often - I just need to use it correctly!

Best wishes, Michael

Hi again, Martin: I just realized how silly my suggestion about reading in the new data a second time was (I haven't had my first cup of coffee yet), as I think the GLI operator needs the predictions made by the grouped model in order to correlate field values to the prediction for each data row. Therefore it looks like I have to run a query that merges the predictions made by the grouped model with the non-normalized data the grouped model used to make the predictions, and then feed that dataset into the GLI operator. Does this make sense? I will try it and let you know. Best regards, Michael

Hi,

I think you found a bug in my quick fix. I guess I use the grouped model in every iteration, so in the second iteration you might have a twice-normalized data set.
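The suspected double-normalization can be illustrated in scikit-learn terms (an illustration of the suspected behavior, not the extension's code):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(loc=10.0, scale=3.0, size=(200, 2))

scaler = StandardScaler().fit(X)
once = scaler.transform(X)
# Re-applying the fitted normalization - as a grouped model applied again
# in each loop iteration would - shifts and rescales the data a second time.
twice = scaler.transform(once)

print(np.allclose(once, twice))  # False: the data ends up normalized twice
```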

I am out of office next week, so this sadly needs to wait a week.

Cheers,

Martin

Dortmund, Germany

Hello Martin:

No problem - I am fundamentally a patient person..... ;-)

In the meantime, I rigged up a process that feeds the grouped model's predictions, together with the de-normalized original data that was used to generate those predictions, into the GLI operator, and I am checking the results - we can discuss after you're back and settled.

Thanks for all of the correspondence; I will review the slides carefully, and wish you a great week (on vacation, I hope!).

Best regards,

Michael ;-)

Hi @M_Martin,

I've checked the issue. If you use a grouped model which includes a normalization, the trees will always have their cuts in this normalized space.

This is somewhat what I would expect in this case; it's hard to avoid.

If you want to have the tree on a de-normalized ExampleSet, you can of course pass the de-normalization model into GLI and then de-normalize right in front of the decision tree. See the attached process.
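Martin's suggestion - keep the predictions from the normalized model, but fit the explanation tree on de-normalized features so its split thresholds are in the original units - might be sketched like this in scikit-learn terms (an analogy, not the attached RapidMiner process):

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

scaler = MinMaxScaler().fit(X)
X_norm = scaler.transform(X)

# Predictions come from the model trained in normalized space ...
knn = KNeighborsClassifier().fit(X_norm, y)
pred = knn.predict(X_norm)

# ... but the explanation tree is fitted on the de-normalized features,
# so its cuts are expressed in the original units of the data.
X_denorm = scaler.inverse_transform(X_norm)
tree = DecisionTreeClassifier(max_depth=2).fit(X_denorm, pred)
print(export_text(tree))
```

The tree explains the normalized model's predictions, but its printed thresholds now match the values Michael sees in his raw data fields.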

I could simply add a new input port for additional data to avoid the Remember/Recall shenanigans. Would this work for you?

Best,

Martin

Dortmund, Germany