I have a set of 27 attributes (which become 128 after transformation to binomial) and 14,000 records.
I tried to run the FP-Growth algorithm followed by Create Association Rules, but it runs out of memory (my computer has 4 GB of RAM and is 64-bit). I am already using the maximum heap size (I have already edited the RapidMinerGui.bat script).
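For anyone else hitting this: the heap limit is set through the JVM's `-Xmx` flag inside that script. The exact line varies between RapidMiner versions, so the excerpt below is only a hypothetical sketch of what to look for, not the literal contents of your file:

```
REM Hypothetical excerpt of RapidMinerGui.bat -- variable and class names
REM may differ in your version. The -Xmx flag caps the Java heap size;
REM on a 4 GB machine, leave some room for the OS (e.g. 3g, not 4g).
java -Xmx3g -cp "%RAPIDMINER_HOME%\lib\rapidminer.jar" com.rapidminer.gui.RapidMinerGUI
```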
What I do not understand is that the same process has run successfully with 100,000 records and twice the attributes, with a much smaller support.
Has anyone experienced a similar problem?
Any suggestions for solving it?
This is a common problem when mining frequent itemsets. All such algorithms are sensitive to the minimum support, and to be honest, 4 GB of RAM is not much memory nowadays. So your only option is to carefully increase the min support, or to use other constraints such as "max items". Good values depend highly on your dataset. That is why, even though the other dataset had more records and more attributes, it may simply contain fewer frequent itemsets and therefore does not blow up your machine.
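To see why min support dominates memory use, here is a small brute-force sketch in Python (illustrative only — this is naive enumeration, not FP-Growth, and the transaction data is made up). Even on a toy dataset, lowering the threshold multiplies the number of frequent itemsets:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Brute-force frequent-itemset mining (illustration only, not FP-Growth).

    Returns every itemset whose support (fraction of transactions
    containing it) is at least min_support.
    """
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    result = []
    for size in range(1, len(items) + 1):
        found = False
        for candidate in combinations(items, size):
            count = sum(1 for t in transactions if set(candidate) <= t)
            if count / n >= min_support:
                result.append(frozenset(candidate))
                found = True
        if not found:
            # Anti-monotonicity: if no itemset of this size is frequent,
            # no larger itemset can be frequent either.
            break
    return result

# Toy market-basket data (hypothetical).
transactions = [
    {"a", "b", "c"}, {"a", "b"}, {"a", "c"},
    {"b", "c"}, {"a", "b", "c", "d"}, {"d"},
]
print(len(frequent_itemsets(transactions, 0.5)))   # prints 6
print(len(frequent_itemsets(transactions, 0.15)))  # prints 15
```

Dropping the support from 0.5 to 0.15 more than doubles the itemset count here; with 128 binomial attributes the growth can be exponential, which is exactly what exhausts your 4 GB.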
I increased the min support and managed to run the process with this number of records ... Up to that point, everything went fine.
However, I decided to run another test, and now I have a question:
My computer has been running a process for 40 hours, and it has not shown the warning that memory capacity was exceeded. However, the runtime counter does not advance continuously (it stays frozen for about 3 hours and then the time updates, which is how I know it has been running for 40 hours).
Is the RapidMiner process actually still running?
Is it worth leaving it running longer?