How to avoid out-of-memory errors when running FSS operators?

Original message from SourceForge forum at http://sourceforge.net/forum/forum.php?thread_id=2031544&forum_id=390413

Hi: I have a microarray dataset with 7079 attributes (a lot of them). So, when I try to execute any FSS (feature subset selection) operator to reduce the dimensionality, RapidMiner is not able to execute the algorithm and an out-of-memory error message appears. I have 2 MB of RAM and set the virtual memory to 4 MB. Is it possible to execute RapidMiner's search algorithms on such a high-dimensional problem? Any technical suggestions to solve this situation are welcome. Gladys


Edit by Gladys:

Sorry, I meant 2 GB of RAM, not MB. I use Windows XP.


Answer by Ingo Mierswa:

Hi Gladys,

A first step for memory problems during feature selection should be to reduce the number of individuals, e.g. by using a (1+1) evolutionary algorithm that keeps only a single individual (a rough sketch of this idea follows below). This should always work as long as the data itself fits into memory. Other approaches like Forward Selection or Backward Elimination will hardly work on data sets with this many features. You could also apply a feature weighting first and (moderately) filter out features by means of the AttributeWeightSelection operator. Search the forum for this topic; there have been several discussions on selecting features for high-dimensional data sets.
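To make the (1+1) idea concrete, here is a minimal sketch outside of RapidMiner, written in Python with scikit-learn and a synthetic dataset as stand-ins (the dataset, classifier, mutation rate, and generation count are illustrative assumptions, not the RapidMiner operator's settings). The point is that only a single candidate feature mask is ever held in memory:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(42)
# Synthetic stand-in for a wide microarray-style table.
X, y = make_classification(n_samples=100, n_features=500,
                           n_informative=20, random_state=42)

def fitness(mask):
    """Cross-validated accuracy using only the selected columns."""
    if not mask.any():
        return 0.0
    return cross_val_score(GaussianNB(), X[:, mask], y, cv=3).mean()

# Start from all features selected; only ONE mask is ever kept in memory.
mask = np.ones(X.shape[1], dtype=bool)
score = fitness(mask)

for _ in range(200):                                 # generation count (assumption)
    mutant = mask.copy()
    flips = rng.random(mutant.size) < 1.0 / mutant.size  # flip ~1 bit on average
    mutant[flips] = ~mutant[flips]
    mutant_score = fitness(mutant)
    if mutant_score >= score:                        # (1+1) acceptance rule
        mask, score = mutant, mutant_score

print(f"kept {mask.sum()} of {mask.size} features, CV accuracy {score:.3f}")
```

The filter-style alternative mentioned above corresponds to scoring each attribute independently and keeping only the best-scoring ones (in scikit-learn, for example, SelectKBest with f_classif); it avoids the wrapper search entirely and is usually the cheapest option when there are thousands of attributes.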

Cheers,
Ingo