Contributor

# Dealing with collinearity

Hi,

I am having difficulties with my data. I have 96 attributes, and I need a scientifically robust method for checking collinearity between them. I have been experimenting with the 'Remove Correlated Attributes' operator.

I have a few questions pertaining to this:

a) In a situation where you have 3 attributes and 2 are highly correlated with the third, what criteria does this operator use to select which attribute to remove?

b) I want to remove attributes that have a correlation equal to or greater than 0.75. This needs to apply to both positive and negative correlations, meaning that if a correlation is equal to -0.83 it should be removed as well. How can I get this operator to apply these requirements?

If RapidMiner offers better methods for dealing with collinearity, I would also appreciate any further suggestions.

Thanks,

Chris

4 REPLIES
Community Manager

## Re: Dealing with collinearity

Hello @chris92 - if it were me, I would begin with the Correlation Matrix operator. You will see all your r values in a nice chart, which will help a lot. In very general stats terms, the higher the abs(r), the stronger the correlation; lots of stats materials explain this very well. As for negative values, you generally use r^2, which is conveniently a parameter of the operator.
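For anyone wanting to see the same idea outside RapidMiner, the Correlation Matrix step can be sketched in Python with pandas. This is just an illustration on made-up data, not the operator itself; note how taking abs(r) catches strong negative correlations like -0.83 as well:

```python
# Sketch (not RapidMiner): pairwise correlations and an |r| >= 0.75 filter.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x = rng.normal(size=200)
df = pd.DataFrame({
    "a": x,
    "b": x + rng.normal(scale=0.1, size=200),   # strongly positively correlated with a
    "c": -x + rng.normal(scale=0.1, size=200),  # strongly *negatively* correlated with a
    "d": rng.normal(size=200),                  # roughly independent of everything
})

corr = df.corr()             # Pearson r for every attribute pair
print(corr.round(2))

# Flag pairs whose absolute correlation meets the 0.75 threshold,
# so r = -0.83 is treated the same as r = +0.83.
threshold = 0.75
pairs = [(i, j) for i in corr.columns for j in corr.columns
         if i < j and abs(corr.loc[i, j]) >= threshold]
print(pairs)
```

Here `("a", "c")` shows up in the flagged pairs even though its raw r is strongly negative, which is exactly the behavior Chris asked for in question (b).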

Scott

Scott Genzer
Senior Community Manager
RapidMiner, Inc.
Elite III

## Re: Dealing with collinearity

If you turn on expert parameters, "Remove Correlated Attributes" will actually handle both of your questions as well.

First there is a parameter to use absolute correlations, which handles the positive and negative values (it is on by default).

There is also a parameter to specify which attribute is kept when it finds a set of correlated attributes.  Your options are based on the order that the attributes appear in your dataset, and you can choose to keep the first, the last, or a random one.
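To make those two parameters concrete, the behavior Brian describes (absolute correlations plus a keep-the-first rule) can be sketched roughly like this. This is an assumed re-implementation for illustration only, not the operator's actual code:

```python
# Sketch (not the actual RapidMiner implementation): a greedy
# "remove correlated attributes" pass that keeps the *first* attribute
# of each correlated group, in dataset column order, and compares
# absolute correlations when use_absolute is True.
import numpy as np
import pandas as pd

def remove_correlated(df, threshold=0.75, use_absolute=True):
    corr = df.corr()
    kept = []
    for col in df.columns:                        # dataset order -> "keep first"
        r_to_kept = corr.loc[col, kept]
        if use_absolute:
            r_to_kept = r_to_kept.abs()
        if not (r_to_kept >= threshold).any():    # not too similar to anything kept
            kept.append(col)
    return df[kept]

rng = np.random.default_rng(1)
x = rng.normal(size=300)
df = pd.DataFrame({"a": x,
                   "b": -x + rng.normal(scale=0.05, size=300),  # |r(a,b)| well above 0.75
                   "c": rng.normal(size=300)})
print(remove_correlated(df).columns.tolist())
```

With `use_absolute=True`, "b" is dropped despite its correlation with "a" being negative; with it off, only positive correlations above the threshold would trigger removal.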

Brian T., Lindon Ventures - www.lindonventures.com
Analytics Consulting by Certified RapidMiner Analysts
Contributor

## Re: Dealing with collinearity

Thanks very much for the speedy responses.

I have one follow-up question: is there a different operator that selects which attribute to remove based on its correlation with a target variable, rather than offering only the original, random, or reverse options? In my problem I do not want to remove potentially important attributes. Essentially, I need an operator that identifies correlated attributes and then, within each correlated set, removes the attribute with the weaker relationship to the outcome variable. Does such an operator exist?

Thanks,

Chris
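The selection rule Chris describes (within each correlated group, keep the attribute with the stronger absolute correlation to the label) could be sketched as follows. This is a hypothetical illustration in pandas, not a claim about any existing RapidMiner operator:

```python
# Sketch of a target-aware pruning rule: visit attributes from most to
# least correlated with the label, and keep one only if it is not too
# correlated (|r| >= threshold) with an attribute already kept.
import numpy as np
import pandas as pd

def prune_by_target(df, target, threshold=0.75):
    X = df.drop(columns=[target])
    corr = X.corr()
    relevance = X.corrwith(df[target]).abs()    # strength of each attribute vs. the label
    kept = []
    for col in relevance.sort_values(ascending=False).index:
        if not (corr.loc[col, kept].abs() >= threshold).any():
            kept.append(col)
    return kept

rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = 2 * x + rng.normal(scale=0.5, size=500)
df = pd.DataFrame({"f1": x + rng.normal(scale=0.3, size=500),   # noisier copy of the driver
                   "f2": x + rng.normal(scale=0.05, size=500),  # cleaner copy of the driver
                   "noise": rng.normal(size=500),
                   "y": y})
print(prune_by_target(df, "y"))
```

Because "f1" and "f2" are highly correlated with each other, only the one with the stronger relationship to "y" (here "f2") survives, while the uncorrelated "noise" column is untouched.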

RMStaff

## Re: Dealing with collinearity

Dear Chris,

First of all, you should think about whether you are really interested in collinearity or in dependencies more generally. Typical data science tasks are not linear, so why focus on linear assumptions?

Second, have a look at: http://community.rapidminer.com/t5/RapidMiner-Studio-Knowledge-Base/Feature-Weighting-Tutorial/ta-p/... It gives you quite a few options.

My proposal: weights from Logistic or Linear Regressions.
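The regression-weight idea can be sketched with plain NumPy: fit a linear model on standardized attributes and treat the absolute coefficients as feature weights. A toy example with a synthetic linear target, not a RapidMiner process:

```python
# Sketch of using linear-regression coefficients as feature weights.
# Attributes are standardized first so coefficient magnitudes are comparable.
import numpy as np

rng = np.random.default_rng(3)
n = 1000
X = rng.normal(size=(n, 4))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.5, size=n)

# Standardize, then solve ordinary least squares.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
coef, *_ = np.linalg.lstsq(Xs, y - y.mean(), rcond=None)

weights = np.abs(coef)               # |coefficient| as the weight of each attribute
ranking = np.argsort(weights)[::-1]  # most important attribute first
print(weights.round(2), ranking)
```

The two attributes that actually drive the target end up with the largest weights, so a threshold or top-k cut on these weights keeps the informative attributes rather than an arbitrary member of each correlated group.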

~Martin

--------------------------------------------------------------------------
Head of Data Science Services at RapidMiner