How do I load the data stored by a store operator outside RapidMiner?
naveen_bharadwa
Member Posts: 9 Contributor I
Hey,
I have a stored object that's around 10 GB in size. When I try to run a process on this object, RapidMiner stops and reports that there isn't enough memory to continue. Is there a way I can load the data into a Python/Java object so that I can perform the operations I want on it?
Regards,
Naveen
Answers
What type of object is it? There have been a few related threads on this topic recently. Assuming this is a dataset of some kind, and you don't actually need it all in memory at the same time, your best option would be to split it into separate smaller sets (CSV files, database tables) and then use one of the Loop operators to cycle through them. But if you actually need everything in memory at the same time (e.g., for model training), then you are probably going to have to set up RapidMiner Server or use RapidMiner Cloud to take advantage of a machine with more available RAM.
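If you export the stored data to a CSV file first, the same chunk-by-chunk idea also works outside RapidMiner. Here is a minimal Python sketch using pandas' `chunksize` option; the file name, chunk size, and per-chunk work (just a row count here) are placeholders for your real process:

```python
import pandas as pd

# Create a small example file so the sketch is self-contained;
# in practice you would export your RapidMiner data to CSV first.
pd.DataFrame({"x": range(10), "y": range(10)}).to_csv("big_data.csv", index=False)

total_rows = 0
# chunksize keeps only a few rows in memory at a time instead of the whole file
for chunk in pd.read_csv("big_data.csv", chunksize=3):
    total_rows += len(chunk)  # replace with your real per-chunk processing

print(total_rows)  # prints 10
```

Because each chunk is an ordinary DataFrame, any operation you can express per-chunk (filtering, aggregating partial results, scoring) avoids ever holding the full 10 GB in memory.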
Lindon Ventures
Data Science Consulting from Certified RapidMiner Experts
This is actually the result of the Apply Model operator. This is the data I need to make inferences on. I've tried splitting the stored object, but it fails even before loading the entire object into memory. So I was wondering if there was a way to load the locally stored objects and parse them to extract the data in some form, removing the parts I don't want.
I don't think there is a way to do this splitting during Apply Model. But why not split the data before you apply the model? Then you can remove extra attributes that are no longer needed after scoring and join/append all the results back together again.
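As a rough illustration of the split-score-append approach in Python (assuming the data has been exported to CSV, and using a toy `score` function as a stand-in for the actual Apply Model step):

```python
import pandas as pd

def score(chunk):
    # Toy stand-in for model application; with a real model this would be
    # something like model.predict(chunk) or an Apply Model step.
    chunk = chunk.copy()
    chunk["prediction"] = chunk["x"] * 2
    # Keep only the attributes needed downstream, dropping the rest.
    return chunk[["id", "prediction"]]

# Small example input so the sketch is self-contained.
pd.DataFrame({"id": range(6), "x": range(6), "extra": "unused"}).to_csv(
    "to_score.csv", index=False
)

results = []
for chunk in pd.read_csv("to_score.csv", chunksize=2):
    results.append(score(chunk))

# Append all scored chunks back together into one result set.
scored = pd.concat(results, ignore_index=True)
print(len(scored), list(scored.columns))  # prints: 6 ['id', 'prediction']
```

Dropping the unneeded attributes inside the loop, before appending, is what keeps the final result much smaller than the original 10 GB object.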
@naveen_bharadwa I think what @Telcontar120 is alluding to is being thrifty with your data analysis. A 10 GB data object sounds like it's filled with a lot of unnecessary 'stuff.' Have you checked your process to see where you can cut unnecessary data? If you really can't, then you're starting to move into Hadoop/Radoop territory with this analysis, so you might consider going that route.