Deployment of RapidMiner on AWS/Azure Cloud

RG2000 Member Posts: 3 Learner I
edited May 11 in Help
I am exploring using either the AWS or Azure Cloud instances. I wanted to confirm the following:
a) Is publishing from Studio to an AWS or Azure Cloud instance unavailable with the free edition of Studio?
b) Are there restrictions on the number of models that can be hosted? Is there any linkage between the number of models and server size?
c) Are these models published in 'hot' mode, or in 'cold' mode, to be brought up when called?
d) I also wanted to understand the security capabilities and logging.
Where would documentation for the cloud-based services be available?


  • rfuentealba Moderator, RapidMiner Certified Analyst, Member, University Professor Posts: 340   Unicorn
    Hello @RG2000,

    This message is probably for @sgenzer or @leti, so I'm asking for their support here. I can only answer b) and possibly c):

    b) server size doesn't depend on the number of models, but it probably does depend on the size of your data.
    c) both modes are supported, as far as I know.

    All the best,

  • RG2000 Member Posts: 3 Learner I
    @leti, @sgenzer

    Could you share details around (c) and (d)? Any documentation would be great.
  • leti Employee, Member Posts: 22  Maven
    All of our Server documentation for Azure and AWS can be found here: https://docs.rapidminer.com/latest/server/installation/cloud_images.html and security documentation can be found here: https://docs.rapidminer.com/7.6/server/administration/security/
  • sgenzer Administrator, Moderator, Employee, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,261  Community Manager
    hi @RG2000 I'm sorry for the delay but I was away last week. Let me see if I can answer some things here...

    (a) I don't see any reason why a local installation of RM Studio (free edition) cannot connect to a cloud install of RM Server. I just tested connecting a free local RM Studio license with one of our AWS RM Servers, and it works fine.

    (b) there is no limit per se to the number of models on one server. Obviously you need to size your server to the kind of models you're running. You can probably run 100 Titanic Naïve Bayes models on one server every minute if you want, whereas running one 1.5M-row x 100-column data set through DL or GBT is going to eat up some serious resources.

    (c) you can have models built on a recurring schedule (CRON jobs) or on-demand via an API call.
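    As a rough illustration of the on-demand route, the sketch below (Python, standard library only) assembles the kind of scoring URL a client would call against a deployed Server process. The host name, service name, input parameters, and the exact REST path are assumptions here; check your Server version's web-services documentation for the real endpoint layout.

    ```python
    from urllib.parse import urlencode, quote

    # Hypothetical values -- replace with your own server host and the name
    # of the process you exposed as a web service. The /api/rest/process/
    # path is an assumption to verify against your Server's documentation.
    SERVER = "https://my-rm-server.example.com"
    SERVICE = "score-titanic"


    def build_scoring_url(server: str, service: str, params: dict) -> str:
        """Build the GET URL for an on-demand ('cold') scoring call,
        passing model inputs as query-string parameters."""
        query = urlencode(params)
        return f"{server}/api/rest/process/{quote(service)}?{query}"


    url = build_scoring_url(SERVER, SERVICE, {"Age": 29, "Sex": "female"})
    print(url)
    ```

    A client would then issue an authenticated GET to that URL and receive the model's prediction in the response body; the CRON-scheduled alternative instead runs the same process on the server clock with no client request involved.
    
    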


