Hi all, hi Marco!
We were wondering how RapidMiner Server handles reading and writing very large objects from and to the Server repository.
Say we write an ExampleSet or a big model (e.g. a complex RandomForest model) of 2 GB to the Server repository. Does the Server cache the complete object in memory, or does it stream it to the database? And what happens when we read it back?
In other words: if the memory of the Server is restricted to 2 GB, can we still reliably store bigger objects in the repository? (Whether this is good practice is another question, but sometimes you have no choice...)
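To clarify what we mean by streaming versus caching, here is a minimal sketch in plain Java I/O (this is only an illustration of the two strategies, not RapidMiner's actual implementation): a fully buffered read needs memory proportional to the object, while a chunked copy only ever holds one small buffer.

```java
import java.io.*;

// Illustration only (not RapidMiner internals): buffering an object
// fully in memory vs. streaming it in fixed-size chunks. With
// streaming, peak memory stays near the buffer size no matter how
// large the object is.
public class StreamingVsBuffering {

    // Buffered approach: the whole payload lives in memory at once.
    static byte[] readFully(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray(); // peak memory ~ payload size
    }

    // Streaming approach: copy chunk by chunk; peak memory ~ 8 KB here.
    static long streamCopy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // 1 MB stand-in for a multi-GB IOObject
        byte[] payload = new byte[1 << 20];
        long copied = streamCopy(new ByteArrayInputStream(payload),
                                 OutputStream.nullOutputStream());
        System.out.println(copied == payload.length);
    }
}
```

If the Server uses the streaming approach on both write and read, the 2 GB heap question would not be a hard limit; if it buffers, it would be.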
Also, does accessing the repository count against the API limit of the free RapidMiner Server, or does the API limit only apply to processes that are exposed as web services?
Good to hear from you :-)
Unless I am corrected by one of our Server experts, I think the answers to your questions are "yes" and "no". Yes, you can write larger objects to the repository as part of process execution on Server. Note, though, that if the result is a data set and you read it back into a free RapidMiner Studio, the row limit would still apply.
And no, the repository access does not count against the API limit. Only web service calls do.
Let me provide a few more details here:
Hi Ingo, hi Marco,
Thanks for your replies! That answers all my questions.
So basically, if you only need the former Collaboration Tier, the free Server will do, unless you train overly complex models or otherwise create big IOObjects. The real value of course only comes in when you can also execute background and heavy-duty jobs on the Server, so the limit of the Free edition will quickly be reached...