‎09-16-2017 07:15 PM

Keras is a high-level neural network API that supports popular deep learning libraries such as TensorFlow, Microsoft Cognitive Toolkit, and Theano.

The RapidMiner Keras extension provides a set of operators that allow easy visual configuration of deep learning network structures and layers. Calculations are pushed into the Python-based backend libraries, so you can leverage the computing power of GPUs and grid environments.

The extension makes use of an existing Keras installation. This article shows how to do a simple deployment of Keras and how to configure the Keras extension to connect to it.


Let's review several options:


Anaconda on macOS


Warning: As of version 1.2, TensorFlow no longer provides GPU support on macOS. 

  1. Download and install Anaconda from: https://www.continuum.io/downloads#macos
  2. Create a new environment by typing in the command line: conda create -n keras
  3. Activate the created environment by typing in the command line: source activate keras
  4. Install pandas by typing in the command line: conda install pandas
  5. Install scikit-learn by typing in the command line: conda install scikit-learn
  6. Install keras by typing in the command line: conda install -c conda-forge keras
  7. Install graphviz by typing in the command line: conda install -c anaconda graphviz
  8. Install pydotplus by typing in the command line: conda install -c conda-forge pydotplus
  9. In the RapidMiner Studio Keras and Python Scripting preference panels, specify the path to your new conda environment's Python executable.
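For convenience, the steps above can be run as a single shell session (a sketch; assumes the Anaconda command line is open and uses `keras` as the environment name from step 2):

```shell
# create and activate a fresh conda environment for Keras
conda create -n keras
source activate keras

# install the packages the extension needs
conda install pandas
conda install scikit-learn
conda install -c conda-forge keras
conda install -c anaconda graphviz
conda install -c conda-forge pydotplus
```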


You’re good to go!

Anaconda on Windows


Warning: Due to issues with package dependencies, it is currently not possible to install graphviz and pydot in a conda environment on Windows, and consequently not possible to visualise the model graph in the results panel.


  1. Download and install Anaconda from: https://www.continuum.io/downloads#windows
  2. Create a new environment with Python 3.5.2 by typing in the command line: conda create -n Python35 python=3.5.2
  3. Activate the created environment by typing in the command line: activate Python35
  4. Install pandas by typing in the command line: conda install pandas
  5. Install scikit-learn by typing in the command line: conda install scikit-learn
  6. Install keras by typing in the command line: conda install -c jaikumarm keras=2.0.4
  7. In the RapidMiner Studio Keras and Python Scripting preference panels, specify the path to your new conda environment's Python executable.
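Put together, the Windows steps above look like this in one session (a sketch, run from the Anaconda prompt):

```shell
# create and activate a Python 3.5.2 environment
conda create -n Python35 python=3.5.2
activate Python35

# install the packages the extension needs
conda install pandas
conda install scikit-learn
conda install -c jaikumarm keras=2.0.4
```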


You’re good to go!





Plain Python (pip) on Windows


  1. Download and install Python 3.5.2 from: https://www.python.org/downloads/release/python-352/ (note: only Python 3.5.2 works on Windows)
  2. Install numpy built against the Intel Math Kernel Library.
  3. Install pandas from the command line: pip3 install pandas
  4. Install graphviz from the command line: pip3 install graphviz
  5. Install pydot from the command line: pip3 install pydot
  6. Install TensorFlow.
    • If you would like to install TensorFlow with GPU support, please see the instructions here: https://www.tensorflow.org/install/install_windows
    • If you would like to install TensorFlow with CPU support only, from the command line run: pip3 install --upgrade tensorflow
  7. Install Keras from the command line: pip3 install keras
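The pip route above, collected into one session (a sketch; assumes Python 3.5.2 is installed and pip3 is on the PATH, CPU-only TensorFlow):

```shell
pip3 install pandas
pip3 install graphviz
pip3 install pydot
pip3 install --upgrade tensorflow   # CPU-only build
pip3 install keras
```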


You’re good to go!




RapidMiner extension


  1. Install the Keras extension from the RapidMiner Marketplace.
  2. Install the RapidMiner Python Scripting extension from the Marketplace, if not already installed.
  3. Restart RapidMiner Studio.
  4. In your Studio client, go to Settings (menu) > Preferences and navigate to the “Python Scripting” tab/page on the left. Provide the path to the Python executable and click Test to ensure it succeeds.
  5. In your Studio client, go to Settings (menu) > Preferences and navigate to the “Keras” tab/page on the left. Provide the path to the Python executable and click Test to ensure it succeeds.
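If you are unsure which path to paste into those preference panels, you can ask the environment's own interpreter (a minimal check; run it after activating the environment, and the printed path is the executable the panels expect):

```python
import sys

# Prints the full path of the interpreter currently running; inside an
# activated conda environment this is the value to paste into the
# RapidMiner Keras and Python Scripting preference panels.
print(sys.executable)
```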


Try out a few sample processes from the “Keras Sample” in the repository view.





I'm getting some "script terminated early" errors. Running on Anaconda for Windows and I followed the install steps. This happens with all the Keras examples I try.

I have tried it on Anaconda / Tensorflow / Keras with Ubuntu 16.04. It works very well, except that placing breakpoints interferes with the closing of Keras processes, which results in numeric predictions being turned into nominal ones and ends in an error (only in the S&P example).

Worth reminding the potential users that if you follow the Tensorflow installation instructions then you end up installing Keras in a Tensorflow environment. It is then imperative that the RapidMiner preferences select the location of Python in that environment rather than in the default Anaconda ROOT location.

Note that most of the issues listed here have been addressed in Keras 1.0.1, which was released only one week after that original post!


Some preliminary observations on the first release of the Keras extension, which I've tested on Ubuntu 16.04 with Anaconda 4.3, Python 3.5, and Keras with a Tensorflow back end. I have used the King County data set from Kaggle to test it all out.


  • This extension is a fantastic start and so the following critical comments are here only to assist in further improvement of the extension
  • Only Keras sequential models are available
  • RM would be ideal for functional models, which are currently not supported
  • Keras Model does not allow taking separate training and validation data sets (common practice in the Python community) and forces you to rely on the validation split option, which means that many pre-defined data sets (such as those from Kaggle) cannot be tested in RM
  • Keras Model has no local random seed to set, which would be very useful for repeatability
  • Keras Model seems to be running on a CPU or on one GPU only; there is no way of controlling which GPU is to be used or to switch to another at any point in processing
  • Keras Model predefined metrics are suitable for classification only; metrics for regression-style measurements need to be included, e.g. "mae", which works well when it is "injected" directly into the ".rmp" file
  • Keras Model default metric "None" causes a syntax error
  • Keras Model allows only one metric to be defined
  • Keras Model should output the history of training and validation loss and of any other metrics defined (obtained from model.fit, history variable)
  • Apply Keras Model randomly changes numeric predictions into nominal, so you need to pass the data through Guess Types operator, which seems to work well so far
  • Standard performance measures handle all output from Apply Keras Model
  • Tensorboard callback hangs when it has no write access to the directory (on Linux RM runs as a separate process, so it will not have access to the created folders)
  • Tensorboard callback works great, just remember to use validation split to get validation metrics displayed
  • ModelCheckpoint works just fine
  • EarlyStopping callback also works well
  • There are no Model Save / Model Save Weights or Model Load / Model Load Weights operators (nor any model to / from JSON operators for inter-connectivity)
  • There is no way to utilise models saved as HDF5 checkpoints, I could not even read the HDF5 checkpoint files back into RM anyway
  • However, the Keras model can be stored and then retrieved back (as KerasIOModel) to be used by Apply Keras Model operator
  • No Keras text pre-processing, I assume RM text processing is to be used instead
  • No Keras image pre-processing, and no obvious replacement in RM for this
  • No Keras batch by batch processing (model.train_on_batch), which could take advantage of multiple GPUs or appending of batches to the current GPU memory
  • Also not sure if Keras Model could accept anything but a data frame, if so it is curious why the model requires the input shape (the shape for a data frame is always standard)
  • It would be great for Build Keras Model to guess the shape for a data frame input rather than being constantly caught by incorrect number of attributes passed
  • Currently all parameters are lines to be passed into Python; if any mistake is made, a syntax error in Python is generated
  • If the package is to stay as a Python interface then it would be great to include a field to accommodate own Python code which could be loaded prior to Keras operators

Great work Keras team -- Jacob


P.S. Results on the King County house price prediction are not that great, but there is plenty of scope for further improvement. For your reference: RMSE=$76,966, MAE=$54,851 +/- $53,991, Corr=0.921, which was unexpectedly a better result than H2O Gradient Boosted Trees, which is (perhaps) better suited for the task. Try to improve if you can!

Thanks Jacob for the detailed write up. FYI to anyone experimenting.... I did dive into this with a working Tensorflow 1.3 and Python 3.6.1 Anaconda installation. I have had no problems or errors running the examples from Rapidminer. 


There may be other issues that show up later but so far so good. New users will have to look at this or they will be lost.




RM Staff

@jacobcybulski i was the person in charge of developing the keras extension. thank you for the feedback! it'll be very useful--we're getting to work on your suggestions. let me also answer a couple of your comments on here.


Keras Model seems to be running on a CPU or on one GPU only, there is no way of controlling which GPU is to be used and to switch to another at any point in processing


there is indeed no way to choose a GPU as of yet, but in our tests all the available GPUs were being used. are you sure you could only use one?


There is no way to utilise models saved as HDF5 checkpoints, I could not even read the HDF5 checkpoint files back into RM anyway


the Python Scripting Extension, which the Keras extension relies on, currently doesn't support serialising to HDF5, so this will have to wait.


Also not sure if Keras Model could accept anything but a data frame, if so it is curious why the model requires the input shape (the shape for a data frame is always standard)


indeed, the superoperator can only handle ExampleSets. however, different input layers require different input shapes. if you start with a dense layer, then the input shape could easily be deduced. on the other hand, convolutional or recurrent layers require specifying an input shape different from the simple number of features. for example, when using a conv1d layer, the input data needs the shape (batch_size, timesteps, input_dim), and the pre-processing is done automatically by rapidminer. this is shown in the sample processes

Thank you so much @dgrzech for your feedback. I will check the multiple GPUs but this will have to wait as my multi-GPU system is currently out. I understand now the need for input shape, my mind was set on the super operator with the dense layer first. Excellent work, I am very keen on using Keras in RapidMiner as it is greatly simplifying access to this excellent deep architecture.


Looks like we have a very speedy update on Keras 1.0.1, this is a great effort @dgrzech. I have quickly tested the new release and can confirm that lots of previous issues have been fixed very effectively! This time all tests were done in Win 10, same versions of Anaconda, Tensorflow and Keras as before.


  • Now Keras Models take separate training and validation data sets
  • We can set the local random seed for shuffle repeatability
  • Variety of metrics can be defined for both classification and regression
  • Keras history is now produced for charting the training performance
  • So far my tests show that Keras Model correctly assigns label types
  • All callbacks work great (e.g. TensorBoard, ModelCheckpoint, EarlyStopping)
  • ModelCheckpoint works just fine (not sure how to read it back as yet)

This was terrific! A couple of new issues that you could possibly untangle for me:

  • Is it possible to call your own Python code in callbacks or optimizers? It seems there is no problem in principle, but I am not sure how to sneak the code in (would Keras communicate with the Python extension somehow? at the moment it seems a new process is launched)
  • Also, at this point in time, I am not sure how to pass multi-label examples for training, or how to create and test auto-encoders - the old RM trick of looping through the labels makes no sense for deep learning models


RM Staff

again thank you for your feedback @jacobcybulski! let me answer your questions.


Is it possible to call your own Python code in callbacks or optimizers


not yet, but it should be made possible in the next update of the extension


I am not sure how to pass multi-label examples


as you correctly noticed rapidminer doesn't currently support multiple label columns so this isn't possible for now but we're working on it

Montse

I have had some problems installing the Keras extension into RapidMiner, which I have since solved.
Following the Anaconda on Windows steps, after installing the Python and Keras extensions, I had a problem trying to test Keras in RapidMiner Studio Settings (menu) > Preferences > Keras tab, with the following messages:
  • Graphviz not installed
  • Pydot not installed


To solve that:

1. Go to Anaconda prompt

2. Activate the last created environment by typing in the command line: activate Python35

3. Install pip by typing in the command line: conda install pip

4. Install graphviz by typing in the command line: pip install graphviz

5. Install pydot by typing in the command line: pip install pydot

6. Inside your Studio Client go to Settings (Menu) >Preferences and navigate to “Keras” tab/page on the left. Provide path to Python executable (the path to your new conda environment Python executable) and click test to ensure it is successful.
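Before re-running the RapidMiner test, you can confirm from the environment's own Python that both packages are now visible (a quick check; `find_spec` looks a module up without importing it and returns None for anything that is not installed):

```python
import importlib.util

# Check, without importing them, whether the two packages the
# RapidMiner Keras test complains about are visible to this environment.
for mod in ("graphviz", "pydot"):
    spec = importlib.util.find_spec(mod)
    print(mod, "OK" if spec is not None else "MISSING")
```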


Good luck!


Thanks to all for the very helpful information in the posts above.  I'm a Newbie to using the Keras extension and Tensorboard, but I was able to get through the setup and visualize the Callback (loss) in the "Boston Housing Prices" sample RM process.


Can anyone please point me in the right direction as to visualizing other metrics (accuracy, etc.) in Tensorboard?  I have experimented a bit, but it appears I have a bit to learn about possible syntax variations for the "callbacks" parameter of the Keras operator that relate specifically to Tensorboard.  I am assuming that I need to adapt the syntax from the provided use of calling Tensorboard from the "callback" parameter of the Keras operator.


For example, I have used the following callback for the "Boston Housing Prices" (with 1024 epochs) sample process:


TensorBoard(log_dir="c:\TensorboardLogDir", histogram_freq=256, write_graph=True, write_images=True, embeddings_freq=256, embeddings_layer_names=None, embeddings_metadata=None)


Upon looking at the callback output in Tensorboard, there are no Histograms.  As a syntax reference, I looked at https://keras.io/callbacks/#tensorboard.   When I added the write_grads=True argument to the callback, this threw an error in RM Studio. 



Any guidance much appreciated.   Best wishes, Michael

Contributor I Montse
Contributor I

Hi Michael,


Do you know which version of Keras you have? Maybe 2.0.4 (if you have followed these steps).

You need to upgrade Keras to the latest release (2.0.5 or higher). Previous versions do not support the write_grads argument.


To upgrade Keras:

0. Go to Anaconda prompt

1. Make a clone of your environment Python35 (for safety): conda create --name Python35_Old --clone Python35

2. Check that the new environment has been cloned: conda info --envs

3. Activate the Python35 environment: activate Python35

4. Uninstall Keras: conda uninstall keras

5. Install the new version of Keras: conda install -c conda-forge keras
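Collected into one session, the upgrade looks like this (a sketch; the final line is just an optional check that the new version imports):

```shell
conda create --name Python35_Old --clone Python35   # keep a backup of the working env
conda info --envs                                   # confirm the clone exists
activate Python35
conda uninstall keras
conda install -c conda-forge keras                  # installs the current conda-forge build
python -c "import keras; print(keras.__version__)"  # verify the upgrade
```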


And now you can use the write_grads argument as you wish.


Thanks very much for your advice - and yes, I have 2.0.4.  Will follow the steps you suggested.  


Do you know of any resource(s) that explain the various syntactic possibilities for the "callbacks" parameter of the "Keras Model" operator, as well as how one could visualise metrics other than loss?   Best wishes, Michael




After installing Keras 2.0.6, I get the following error messages from a callback (Keras operator):

TensorBoard(log_dir="c:\TensorboardLogDir", histogram_freq=32, write_graph=True,  write_images=True, embeddings_freq=32, embeddings_layer_names=None, embeddings_metadata=None)

This callback didn't throw an error using Keras 2.0.4.   There is a reference to saving a script - does this mean I need to include a complete path to a file that would contain the script (i.e. output)?  Any suggestions?    Thanks for considering this if possible and best wishes, Michael



Montse

Do the Python and Keras extensions in RapidMiner point to the same path where you have installed this new version of Keras?


Yes - within RapidMiner Preferences for Keras and Python Scripting, the path points to the Python Environment with Keras  


As per your suggestion, I cloned the original environment (with Keras 2.0.4). I then uninstalled 2.0.4 from that environment and installed 2.0.6.  Interestingly enough, despite the error, the process generates an output file to my log directory - but only graph images, no scalar values.  Best wishes, Michael


Realized I should have tried the callback referenced above using Keras 2.0.4 and python.exe from within the cloned environment.  The callback works as it did before, outputting scalar values and a graph image.  Would like to measure accuracy as well as loss, but am not sure how to configure the callback to output both metrics.  Perhaps the callback syntax is different with Keras 2.0.6?    Best wishes, Michael

Montse

I'm sorry, but I don't know what the problem is in your case...

I've upgraded Keras (2.0.6) and I've added write_grads=True into the callback. The syntax is the same:

TensorBoard(log_dir='./logs', histogram_freq=0, write_graph=True, write_images=False, embeddings_freq=0, embeddings_layer_names=None, embeddings_metadata=None,write_grads=True )

...and it works fine.

Maybe Keras has not been installed correctly?

You can try to list all the packages installed in this environment with: conda list.

Keras has to be running under Python 3.5.



(Attached screenshot: Keras package.png)




Thanks for your message. 


I had no problems installing 2.0.6, and the conda list command showed that 2.0.6 was installed on my system without any problems.


I used your callback:


TensorBoard(log_dir="C:\TensorboardLogDir", histogram_freq=0, write_graph=True, write_images=False, embeddings_freq=0, embeddings_layer_names=None, embeddings_metadata=None,write_grads=True )


and it works fine on my system under 2.0.6 as well.  I think my issue occurred because my callback was slightly different:


TensorBoard(log_dir="C:\TensorboardLogDir", histogram_freq=32, write_graph=True, write_grads=True,  write_images=False, embeddings_freq=32, embeddings_layer_names=None, embeddings_metadata=None)


My callback uses the value of 32 for the "histogram_freq" and "embeddings_freq" parameters as per the guidance from https://keras.io/callbacks/#tensorboard (see below), and the "write_grads" parameter comes earlier in my callback than it does in yours.  You have "write_grads" as the last item in your callback.


Guidance from keras.io is below: [screenshot of the keras.io TensorBoard callback documentation, not reproduced]



I was hoping to see gradient histograms in my Tensorboard visualisation, which is why I set "histogram_freq" and "embeddings_freq" to 32 as per the screenshot above - to see gradient histograms, "histogram_freq" has to be greater than 0.  I think that setting those parameters to 32 caused the issue I wrote about.  


Using your callback, I am able to see scalar values and graphs, but no histograms, which I could also see before I started experimenting with setting "histogram_freq" and "embeddings_freq" to non-zero values and adding "write_grads" to my callback.  


Perhaps this is because I am using Python 3.6.1, which is the version of Python that Anaconda for 64 bit Windows currently installs.


I would also like to visualize the accuracy ("acc") metric - any idea how I can visualize a metric other than "loss"?   Thanks for reading this and thanks for any further suggestions - and best wishes, Michael


Wanted you to know that this callback works fine:


TensorBoard(log_dir="C:\TensorboardLogDir", histogram_freq=32, write_graph=True, write_grads=True, write_images=True, embeddings_freq=0, embeddings_layer_names=None, embeddings_metadata=None)


but no histograms.  Setting the "embeddings_freq" parameter to 0 prevents any RM error messages, but might be preventing the creation of the gradient histograms.    Best wishes, Michael

@Montse, in my case installing Keras for RapidMiner on Win10 was a bit more tricky, and the main issues are pydot and graphviz. When you install them both from Anaconda, RapidMiner will see them, but the necessary software is still not on the system and will fail later. Here is my advice, assuming you have the current versions of Anaconda, Tensorflow and Keras.


Installing Keras 

Make sure you install Keras in the previously defined Tensorflow environment; in most cases this was Python 3.5, however recently I have tried Python 3.6 with success. Then activate the environment.

  • From the command line execute: activate tensorflow 

Then install a number of packages, i.e. (the last two for visualisation only):

  • cuDNN (if using GPU, see above) 
  • conda install HDF5 
  • conda install h5py (the previous may be downgraded at this point in time) 
  • conda install graphviz (you may be lucky) 
  • pip install pydot (you may be lucky) 

Now you should be able to finish installing Keras.

  • pip install keras 

You can now try running RapidMiner, install the Keras plugin and configure it by pointing to the python within the "tensorflow" environment. Most likely RapidMiner will be happy with the installation but may not be able to display the graphs, or it will fail with an error later on.


If you were not lucky then graphviz and / or pydot need to be properly installed, try these: 

  • Download "graphviz-2.38.msi" from  
  • Execute the "graphviz-2.38.msi" file 
  • Add the graphviz bin folder to the PATH system environment variable  
    (Example: "C:\Graphviz2.38\bin") 
  • You may need one more step, i.e. install the python-graphviz package: 
    conda install python-graphviz 
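Once the bin folder is on the PATH, a quick sanity check from a fresh command prompt might look like this (a sketch; assumes the Graphviz 2.38 install above and that pydot and graphviz live in the active environment):

```shell
where dot        # should resolve to ...\Graphviz2.38\bin\dot.exe
dot -V           # prints the Graphviz version banner
python -c "import pydot, graphviz; print('pydot and graphviz import OK')"
```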

You should be able to see the graphs produced by Keras within RapidMiner.


However, I have found on a few systems this was not enough and I had to do the following (I have no idea why but I have found this remedy).

  • Go to the Anaconda Prompt using the start menu (make sure to right-click and select "Run as Administrator"; you may get permission issues if the Prompt is not opened as Administrator) 
  • Execute the command: conda install graphviz 
  • Execute the command: pip install git+https://github.com/nlhepler/pydot.git (You will need GIT for this)

In all cases, you may wish to check if things worked out.

  • Execute the command "conda list" and make sure pydot and graphviz modules are listed. 

Good luck -- Jacob


Hi Jacob:


Many thanks for your detailed post with many helpful points to try.  


I did the same thing that you did re: Graphviz - I downloaded the .msi, installed it, set the path statement, and was able to add it to my Python environment.  I should also add that I installed the CPU version of Tensorflow, not the GPU version, as the GPU version has many dependencies.  Does your setup use CPU or GPU?


To be clear, I can currently see scalar values and graphs in Tensorboard.  What I can't see are histograms and embeddings.  I have very little idea as to how to configure a callback to generate histograms or embeddings.  Perhaps I cannot see histograms or embeddings with a CPU install; perhaps I need GPU.  What do you think?


Also, the only metric I can see scalar values for is "loss".  I would also like to see other metrics, such as accuracy. Do you have any suggestions as to how the callback should be changed in order to show accuracy (or any other metrics)?


To make sure that I understand your suggestions correctly:


I should set up a Tensorflow environment

I should activate that tensorflow environment

then Install Tensorflow  (perhaps also the Microsoft CNTK and Theano)

then Install HDF5 

then Install h5py 

then Install graphviz 

then Install pydot

then Install keras (2.0.6)

then Install python-graphviz 


Thanks for confirming that I have understood you correctly, and I hope I will get a chance to return your kindness some day.


Best wishes, Michael ;-)


potto
After trying all the suggested steps to install Keras on a Windows 10 computer, I am giving up. The documentation for installing the RapidMiner extension is very poor, and the suggestions on how to fix the numerous errors you may experience during the installation are scattered across various forums. I have used RapidMiner over the last few years in the courses I teach and for research, but conclude that the Keras installation is not ready for primetime. If someone from RapidMiner is reading this posting, please provide concise instructions on how to get the Keras extension to work properly. Thanks!

Hi @potto, when you say this plugin is hard to install, you are not wrong.


Having said this, the Keras extension is not a self-enclosed package but an interface to a large deep learning stack. To make it work in Win10 with GPUs, you need to install: the NVIDIA CUDA development kit, NVIDIA cuDNN, Anaconda (Python with Anaconda environments), Tensorflow (with specific libraries, some via conda and some via pip), Keras (with quite a few prerequisite libraries), and some additional Win10 software such as Graphviz. However, if you do succeed in installing it, it frees you from writing hundreds of lines of Python to develop even the simplest deep learning application. Instead, you can focus on high-level modelling tasks, quickly pre-process your data and later report your results. I see the Keras plugin as an opportunity for business developers and researchers to dive into deep learning without the (major) pain of becoming programmers first.


At this stage, unless you want to make a major investment in enterprise infrastructure, Keras is the only tool in the RapidMiner world that turns a (relatively) cheap gaming computer into a GPU deep learning machine using mainstream software (H2O, Tensorflow and Keras). This could possibly be generalised somewhat, as it is not yet possible in the majority of other workflow analytics software, e.g. SAS Enterprise Miner and IBM SPSS Modeler (RapidMiner's current cohabitants of the Gartner leadership quadrant). KNIME has implemented its deep learning with GPUs using Deeplearning4j, which is yet to prove itself on the market, and it sits on top of Keras for Python integration.


If you want to avoid the hassle of installing the Keras stack, RapidMiner has a nice H2O interface and its deep learning capability, and we are waiting for the GPU support to become available really soon!




P.S. I think at the moment Keras is hard for the RapidMiner user; however, RapidMiner is trivial for the Keras user.

Edit: No problems with frequencies in Tensorboard - need to upgrade your Keras to the current version! See the next post by Michael Martin.


@M_Martin, sorry for the delay - it has been a busy week. I had no luck with histograms and embeddings in Tensorboard callbacks from Keras. In fact, I had no luck doing this in Python either - worse, I managed to crash Python every time the frequencies for these were non-zero. I have seen a posting suggesting that to get these displayed from Keras you'd need to write your own summary statements within your own implementation of the fit generator. So we may have to wait for a new version of Keras to deliver this functionality, and then it would be available from RapidMiner.


However, when it comes to other metrics to be displayed in your Tensorboard scalars, simply add them in the "metrics" list, which is revealed when you click the "use metrics" option of "Build Keras Sequential Model" (currently the Keras plugin restricts you to having only one). Make sure that you have validation data available for this, either by supplying it on input or by defining the validation split (otherwise you'd get a Python error).


And yes, your list of steps is correct, I'd add "install Graphviz.exe (and include its bin in the PATH)".






Hi Colleagues:


Have made some real progress re: the questions I have raised so I would like to share what I have come up with in the hope it will be helpful. Many thanks to the people who answered my posts in this KB thread who helped me get started. ;-)


As far as setting up Keras in RapidMiner (and other related Python packages Keras needs), here's what seems to be working for me on several Dev boxes in my shop (caveat: all boxes are running Windows 7 64 bit, with SP1 - all machines have either 16 or 32 GB of RAM and I7 processors).


1. Install the Keras extension from the RapidMiner Extensions Marketplace into RapidMiner Studio

2. Download and install Anaconda 3 (https://www.anaconda.com/)  Get the appropriate version (64 or 32 bit).

3. Run the Anaconda Navigator (should now be in your Windows Program Group)

4. Create a new environment using the Anaconda Navigator.  The Navigator (as of this writing) will suggest using Python 3.6, but there is also an option for Python 3.5.  Tick Python 3.5, as I understand that the RapidMiner Keras extension was developed using Python 3.5.  I named my environment py35.

5.  After Anaconda creates the environment, open up the Anaconda Prompt (it should be listed as a shortcut from the Start Menu, or within the Anaconda program group).  Though I didn't have to, you could right-click on the icon for the Anaconda Prompt and select "Run as Administrator".

6. You need to activate the new environment you just created.  From the prompt type: activate <environment name>.  If you named your environment py35, you would type: activate py35.   Then hit Enter/Return.  After a few seconds, the Anaconda prompt will return, and the environment will be active.  The new environment name should now be part of the Anaconda prompt.

7. To see a list of existing Python packages in your new environment, type conda list and then Enter/Return.  You should see a short list of packages.

8. You now need to install a few more Python packages.  Type conda install pandas and then Enter.  After a few seconds, you will be asked to confirm that you want to do the installation.  Type y and the downloading and installation of pandas (and other dependent packages) will begin. When the installation has finished, you'll be returned to the Anaconda prompt for your environment.

9.  Type conda install scikit-learn and then Enter.  Confirm that you want to do the install, and then it should start.  When the install is done, you'll be returned to the prompt for your Anaconda environment.

10.  You now need to install a package named Graphviz, which requires some extra steps.  Go to http://www.graphviz.org/download_windows.php and download graphviz-2.38.msi.  Then run the msi file you have downloaded to install Graphviz (which is a Windows Forms application).

11. Then open the Windows Control Panel, select the System app, and then Advanced System Settings  --> Environment Variables.  Add the path to the Graphviz executable to (at the least) the PATH environment variable for your user account.  The value to append to your existing PATH is C:\Program Files (x86)\Graphviz2.38\bin   Type in a semicolon in front of C:\Program Files (x86)\Graphviz2.38\bin in order to separate it from the previous entry in your PATH statement.   For good measure (though it may not be strictly necessary), I also added the following directories to my path statement:


C:\Users\YourUserName\Anaconda3\envs\py35;C:\Users\YourUserName\Anaconda3\Scripts;C:\Users\YourUserName\Anaconda3\envs\py35\Lib\site-packages  (remember to type a semicolon after C:\Program Files (x86)\Graphviz2.38\bin before typing in another entry).


Substitute your Windows user account name for YourUserName directly above.
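For clarity on the separator rule, here is a small illustrative sketch of how PATH-style strings are built up (append_to_path is a hypothetical helper of mine, not something to install; the real edit happens in the Environment Variables dialog):

```python
def append_to_path(path_value, *entries):
    """Append entries to a Windows-style PATH string.

    Entries are separated by semicolons, and an entry already present
    is not added twice.
    """
    parts = [p for p in path_value.split(";") if p]
    for entry in entries:
        if entry not in parts:
            parts.append(entry)
    return ";".join(parts)

# Example: appending the Graphviz bin directory to an existing PATH value.
print(append_to_path(r"C:\Windows\system32",
                     r"C:\Program Files (x86)\Graphviz2.38\bin"))
```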


12.  To confirm that your PATH environment variable has been updated, open a Command Window, type path, and then Enter.  The value of your PATH environment variable will echo to the screen.  If what you see doesn't include the entries you just added, re-boot your system and check again.
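An alternative to echoing PATH: you can ask Python whether an executable is visible on the current PATH; a small sketch using only the standard library (find_tool is my own wrapper name):

```python
import shutil

def find_tool(name):
    """Return the full path of an executable reachable via PATH, or None."""
    return shutil.which(name)

# After updating PATH, 'dot' (the Graphviz renderer) should resolve to
# something like C:\Program Files (x86)\Graphviz2.38\bin\dot.exe.
print(find_tool("dot"))  # None means PATH is still missing the entry
```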

13.  Assuming your PATH has been updated, you can install graphviz (from within the Anaconda prompt for your environment - which should now also show up as a shortcut in the Start Menu or within the Anaconda program group) by typing conda install graphviz and then Enter.

14.  Then install the pydot package by typing pip install pydot and then Enter.

15.  Last but not least, install Keras (recently updated to version 2.0.6) by typing conda install -c conda-forge keras and then Enter.  After confirming that you want to do the install, Keras and numerous dependent packages will be installed, and you'll be back at the Anaconda prompt for your environment.

16.  If you want to use Tensorboard to visualize your models, install the latest versions of Tensorflow and Tensorboard by typing

           pip install --ignore-installed --upgrade tensorflow   

and then Enter.  Quite a few packages will be installed, and you'll be back at the Anaconda prompt.   


For info re: Tensorflow and Tensorboard, visit https://www.tensorflow.org


If you type conda list and then Enter, you will see that your environment now contains many more packages.


The last configuration step occurs within RapidMiner Studio: select Settings -> Preferences and tell RapidMiner Studio where python.exe is within your Python environment.




Click on the disk icon to the left of the Test command button and navigate to python.exe within your environment twice - once for the "Keras" option and once for the "Python Scripting" option in the Preferences dialog.  Be sure to click on the "Test" button both times.  If there are no errors, you'll get a message box stating that Python has been detected within Anaconda.  On all my dev boxes there were no errors; hopefully there will be none on your system.


Some or all of the setup commands above may not work with Windows 10, but Windows 10 does allow you to set compatibility mode for various programs, so perhaps experimenting with compatibility settings would help.


The installation described above is a CPU installation as opposed to a GPU installation.  GPU installations will run Keras models faster, but they have hardware requirements and the install is tricky.  For more info on this subject, search the web for "keras gpu installation".


You should (hopefully) now be able to run the Keras samples provided with RapidMiner, which are in the repository under the entry Keras Samples.



If you want to give Tensorboard a spin, you will need to adjust a few default settings of the Keras Model operator in the process.  If you don't want to try Tensorboard, you can run the process and see the outputs in the RapidMiner Results panel.


I attached some screenshots with example settings if you want to try Tensorboard.  Start with the screenshot of the process parameters panel below:


Keras Setup


Select a loss metric from the dropdown opposite the loss parameter (I selected mean_squared_error). 


Tick the "use metric" checkbox below the decay parameter.  Then click on the "Edit enumeration" button opposite the "metric" parameter.  A dialog will open, and if you like, you can select an additional metric to visualize.  Theoretically, you should be able to select additional metrics, but selecting more than one causes (on my systems) the process to crash.  The error message states that the desired metric name (a concatenation of the metrics in the enumeration) is invalid, even though the process XML would appear to prevent that type of error.  After selecting a single metric from the enumeration (I chose mape), click on OK.


Enter .25 for the "validation split" parameter (just below the verbose parameter) - I think that setting the validation split parameter to a non-zero value enables the display of histograms and distributions in Tensorboard.
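As a side note on the arithmetic: as far as I can tell, Keras holds out the last fraction of the training data as the validation set. A tiny sketch (split_counts is an illustrative helper of mine, not a Keras function):

```python
def split_counts(n_examples, validation_split):
    """How many examples train vs validate for a given validation_split.

    Keras-style: the LAST fraction of the (unshuffled) data is held out.
    """
    split_at = int(n_examples * (1.0 - validation_split))
    return split_at, n_examples - split_at

print(split_counts(1000, 0.25))  # (750, 250)
```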


Click on the "Edit enumeration" button opposite the callbacks parameter.  Another dialog will open.  Clicking on the dropdown menu will expose several default template callback statements.  The attached screenshot shows three different callback types (RM allows multiple callbacks) that I set.  I don't fully understand how to construct callbacks, especially what you need to specify in order to see embeddings and checkpoints.  You will see that one of the callbacks creates a checkpoint file, but it doesn't display in Tensorboard, and I'm not sure why; I would appreciate any guidance on this point.  The CSV Logger outputs a csv file containing the values of the metrics you selected during the training process.  The screenshot below shows the three callbacks I configured:


Sample Callbacks


As configured in the screenshot, you should be able to see scalar values, images, graphs, distributions, and histograms in Tensorboard.  I think that setting the histogram_freq value to 1 and write_grads=True enables histograms and distributions to display in Tensorboard, as long as the validation split parameter (see the first screenshot above) is set to a non-zero value.


The next step is to create a directory using Windows Explorer where the callback output that Tensorboard will read is written to disk when RapidMiner executes the process.  On my systems, it's C:\TensorboardLogDir


Below is a screenshot of the S&P 500 regression sample process with a few additions - I added a Performance operator, and I connected the "his" and "exa" ports of the Keras Model operator to process "res" ports.  The his (history) port delivers the same information as the csv file generated by the CSV Logger callback.


Process Design View


After configuring the process, you can run it - which will take a few moments.  If you go to your Tensorboard log directory, you should see three files, assuming you configured the Tensorboard callback as per the screenshot: an "events out" file, a csv file, and a checkpoint (.ckpt) file.  If you set the "write_images" flag to True in the Tensorboard callback, the events file will be quite large - several hundred megabytes.  If you set it to False, it will be around 100 megabytes (those were the file sizes on my systems).


Create a sub-directory (a new folder) within your Tensorboard log directory and move the three files to that new folder.
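Since Tensorboard treats each subdirectory under its log directory as a separate run, this move-into-a-subfolder step can be scripted; a sketch (the helper name and the timestamped layout are my own):

```python
import os
import shutil
import time

def archive_run(log_dir, run_name=None):
    """Move the loose files in log_dir into a fresh run subfolder.

    Tensorboard shows each subdirectory under --logdir as its own run,
    so archiving the three output files per run keeps runs selectable.
    """
    run_name = run_name or time.strftime("run_%Y%m%d_%H%M%S")
    run_dir = os.path.join(log_dir, run_name)
    os.makedirs(run_dir, exist_ok=True)
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        if os.path.isfile(path):  # move files only; leave earlier run folders
            shutil.move(path, os.path.join(run_dir, name))
    return run_dir
```

Calling archive_run(r"C:\TensorboardLogDir") after each RapidMiner run would tidy the events, csv, and checkpoint files away before the next run writes new ones.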


The next step is to start Tensorboard.  From within your Anaconda prompt for your Python environment, type:


tensorboard --logdir="Your Tensorboard Log Directory Name" --host= 


and then Enter.


I needed to put quotes around the log directory name.  On my systems, I typed:


tensorboard --logdir=C:\"TensorboardLogDir" --host=


After a few seconds, Tensorboard will load and display a message telling you to open a browser and navigate to the URL it prints.


After entering that URL in your browser, Tensorboard should load after a few seconds.  On the left hand side of the screen, near the middle, you'll see the word "runs".  You should see a list with two items, C\ and C\<subdirectory name>; on my systems, it's C\1.  Click on the SCALARS heading at the top of the browser window and then click on C\<subdirectory name> below "runs".  Four visualisations of scalar values should appear on your screen.  On my screen, these four values are loss, mean_absolute_percentage_error, val_loss, and val_mean_absolute_percentage_error.  I'm not sure why I get four metrics as opposed to two, and I don't understand the difference between "loss" and "val_loss", or between "mean_absolute_percentage_error" and "val_mean_absolute_percentage_error".  Any guidance on this would be appreciated.  The values for all of the visualisations you will see in Tensorboard come from the "events out" file.


If your callback included images, clicking on IMAGES at the top of the browser window will show images representing your model.  I think images are most useful in text mining, but perhaps I'm missing something.  Clicking on GRAPHS shows a graph realization of the neural network, and clicking on DISTRIBUTIONS and HISTOGRAMS shows metadata related to training progress (the behaviour of layers in the network) over the 256 epochs the training ran.


There is an option called INACTIVE next to HISTOGRAMS.  If you click on the INACTIVE option and then click on Projector, you'll get an error message stating that a checkpoint file has not been saved, even though I configured a callback to write a checkpoint file.  I'm sure I'm missing something; any suggestions appreciated.


You can then go back to RapidMiner, change the process in some way, and then run the process again.  Three more files will be written to your Tensorboard logging directory.  Create a new subfolder and move the new files into that subfolder.  


Go to Tensorboard in your browser and reload the page.  You will then see both of your sub-folders under "runs" near the middle of the left hand side of your browser window.  You can single- or multi-select these subfolders and the appropriate visualisation(s) will appear.  I haven't found a way to toggle between visualisations except by putting the files related to each run in their own sub-directory underneath the main Tensorboard log directory.


I hope the above has been helpful.  There is still quite a bit of material I don't really understand yet (especially about callbacks), but at the very least, I hope the above will help with getting through the setup so that one can at least get a feel for what's going on.  There's a very informative webinar about Keras and RapidMiner at https://rapidminer.com/resource/state-deep-learning/.  It's also on YouTube.  There are also many good resources on the web regarding Tensorflow and Tensorboard, one of the best being tensorflow.org.


Best wishes, Michael


Postscript to the previous post:


I described how one should modify the PATH environment variable as part of installing the Graphviz package.  I also said that for good measure (though it may not be absolutely required) one should add the following directories to the PATH environment variable:


C:\Users\YourUserName\Anaconda3\envs\py35;C:\Users\YourUserName\Anaconda3\Scripts;C:\Users\YourUserName\Anaconda3\envs\py35\Lib\site-packages  (remember to type a semicolon after C:\Program Files (x86)\Graphviz2.38\bin before typing in another entry).


Substitute your Python environment name for my environment name (py35) in the statement above.


There was one typo pertaining to what histograms and distributions show in Tensorboard: as far as I can tell, these elements describe activity in the various layers of the network.



@M_Martin Excellent summary Michael. I usually install Tensorflow before Keras but it really makes no difference; you can in fact switch the backend of Keras to CNTK or Theano if you wish to. Thanks for the pointers on the frequencies - I have upgraded my version of Keras and its RM plugin to the most up-to-date versions and, bingo, no more crashes. Images are very useful when processing images, of course, as you can see their "compression" in the middle of the neural net. Yet they are generated for all data.  -- Jacob

After I successfully installed RapidMiner with the Keras extension on a number of machines, both Win10 and Ubuntu 16.04, I thought I'd add to the instructions by @M_Martin and provide a guide for those who wish to install Keras while running RapidMiner on Linux. Here are the Ubuntu 16.04 guidelines.


The following steps are needed to set up Keras on Ubuntu 16.04 LTS

It is important that you install the versions the documentation says to install and not anything "better".

NVIDIA CUDA Toolkit (only needed for GPU support)

  • CUDA 8 web site (at this stage Tensorflow does not work with CUDA 9):
  • Check that you have a CUDA-compatible GPU, i.e.
    see the list on https://developer.nvidia.com/cuda-gpus
  • Download CUDA 8 GA2 x86_64 Deb for Ubuntu 16.04 (1.9Gb) + Patch 2 (128 Mb)
  • Follow instructions to install drivers and toolkit
    If you have a newer driver, install the toolkit manually and skip the driver installation,
    place it in <cudapath> (e.g. /usr/local/, change to yours)
  • Set CUDA variables in your ~/.profile (or ~/.bash_profile), e.g. to:
    export PATH=
    export LD_LIBRARY_PATH=
    export LD_LIBRARY_PATH=
    export CUDA_HOME=<cudapath>/cuda
       (you may need to log out and log in again after editing this file)
  • Then install cuDNN 6, from web site: https://developer.nvidia.com/cudnn
    Note that I have not tested Tensorflow with cuDNN 7 but you could give it a go
    You may need to register for download, place it in <cudnnpath> (change it to yours)
  • Copy the following files into the CUDA Toolkit directory:
    $ cd <cudnnpath>
    $ sudo cp -P include/cudnn.h <cudapath>/cuda/include
    $ sudo cp -P lib64/libcudnn* <cudapath>/cuda/lib64
    $ sudo chmod a+r <cudapath>/cuda/lib64/libcudnn*


  • Web site:
  • Download for Python 3.6+, 64bit, from:
  • Install in easily accessible location, e.g.:
  • Note that in the process of adding Anaconda, you will be prompted to add it to the PATH. If not add it to your ~/.profile:
    (you may need to log out and log in again after editing this file)
  • Open a new command line and test it, i.e.
    $ conda --version

Tensorflow (It now works with Python 3.6+)

  • Web site: https://www.tensorflow.org/
  • Follow instructions for Anaconda, i.e. from the command line (recently updated):
    $ conda create -n tensorflow python=3.6 (or lower, e.g. v3.5)
    $ source activate tensorflow (you switch to Tensorflow environment)
  • Install for CPU only:
    (tensorflow) $
    pip install --ignore-installed --upgrade tensorflow
  • Install for GPU only:
    (tensorflow) $ pip install --ignore-installed --upgrade tensorflow-gpu
  • Test your installation by writing a short program, e.g.
    (tensorflow) $ python
    >>> import tensorflow as tf
    >>> hello = tf.constant('Hello, TensorFlow!')
    >>> sess = tf.Session()
    >>> print(sess.run(hello))
    >>> ^D
  • You may also wish to optionally install in that environment (esp. if you decided to use a version of Python different from that in the root):
    (tensorflow) $ conda install numpy
    (tensorflow) $ conda install scipy
    (tensorflow) $ conda install scikit-learn
    (tensorflow) $ conda install pandas
    (tensorflow) $ conda install matplotlib
    (tensorflow) $ conda install jupyter


  • Make sure you install Keras in the previously defined Tensorflow environment, i.e. from the command line execute:
    $ source activate tensorflow
  • Then install a number of packages, i.e.
    Already installed cuDNN (if using GPU, see above)
    (tensorflow) $ conda install HDF5
    (tensorflow) $ conda install h5py (the previous may be downgraded)
    (tensorflow) $ conda install graphviz
    (tensorflow) $ conda install python-graphviz
    (tensorflow) $ pip install pydot
    (tensorflow) $ conda list (make sure pydot and graphviz are listed)
  • If you were not lucky with graphviz and / or pydot, you can leave their installation for later, as both are needed only for charting of Keras models. Whenever I had any issues with the installation of Keras, these two were the main cause. If all fails, try googling around. You may however try this workaround to install the graphviz software separately (Anaconda only provides an interface to the software), i.e.
    $ sudo apt-get update
    $ sudo apt-get install graphviz
  • Finish installation with:
    (tensorflow) $ pip install Keras
  • You are now ready to use Keras in Python

RapidMiner Studio

  • Download and install RapidMiner Studio
  • Install Keras extension from the RapidMiner Marketplace
  • In Settings > Preferences > Keras tab,
    set Path to Python executable to Python within the Tensorflow environment, e.g.
  • You can start modeling with Keras in RapidMiner Studio


Learner III kakkad2


Applying Convolutional Neural Networks (CNN) on the iris dataset


Hi everybody,

After connecting Keras and RapidMiner through Python:


I was trying to use the CNN operator within the Keras model for classification. I used the "add core layer" operator and it worked just fine. But when I use a CNN layer, I get all kinds of errors depending on what dataset/parameters are in use. Please suggest alternative actions if you have an idea of what the problem might be. I would like to simply use the CNN operator as one of the layers inside the Keras model operator in my neural network. Thank you!


One important last step I want to highlight, which is in the documentation above but which I missed:


Make sure your Python Scripting and Keras paths point to the same Python environment path in Windows. I had them set differently and it kept crashing.


So if you have a Windows machine and point your Keras to C:\Anaconda3\envs\Python35\python.exe, make sure your Python Scripting path is C:\Anaconda3\envs\Python35\python.exe as well.

I wonder @dgrzech if anything has changed in Keras to allow selecting / splitting the work between different GPUs and/or including your own Python code in the Keras process? We are setting up a lab full of machines with multiple GPUs, capable of running RapidMiner models by researchers from our business school, i.e. people who are not very technical.


RM Research

@potto sorry for the late reply. I'm sorry to hear that our instructions aren't detailed enough. Maybe you could try using Microsoft's Cognitive Toolkit as a backend for Keras. It's easier to install on Windows, and it makes no difference for you when using our Keras extension, since it's only the way stuff is executed behind the curtain that changes.


  1. Follow the instructions in Microsoft's installation guide to install their Cognitive Toolkit for Python. Make sure to choose the version matching your Python version, and select the one with GPU support if you want to execute your process on GPUs as well.
  2. Install Keras by running `conda install keras` if you are using Anaconda or `pip install keras` if not.
  3. Run Keras once. E.g. by opening up a command prompt, starting python and running `import keras`.
  4. Now a `keras.json` file should exist in a hidden folder called `.keras` in your user's home directory. It might look like this:
        {
            "backend": "tensorflow",
            "floatx": "float32",
            "image_data_format": "channels_last",
            "epsilon": 1e-07
        }
  5. Change the 'backend' value in the json file to 'cntk' and save the file.
  6. Point the RapidMiner Keras Extension to the Python you are using for CNTK (Cognitive Toolkit).
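For what it's worth, steps 4 and 5 can also be done programmatically; a minimal sketch (the function name and the optional path parameter are my own, added so the snippet can be pointed at a copy of the file for testing):

```python
import json
import os

def set_keras_backend(backend, keras_json=None):
    """Rewrite the 'backend' entry in keras.json and return the new config.

    By default edits ~/.keras/keras.json; pass keras_json explicitly to
    operate on another copy. Assumes the file already exists (run Keras
    once first, as in step 3).
    """
    if keras_json is None:
        keras_json = os.path.join(os.path.expanduser("~"), ".keras", "keras.json")
    with open(keras_json) as fh:
        cfg = json.load(fh)
    cfg["backend"] = backend  # e.g. "cntk", "tensorflow", or "theano"
    with open(keras_json, "w") as fh:
        json.dump(cfg, fh, indent=4)
    return cfg
```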

Hope this helps.




RM Research

Hi @jacobcybulski right now the Keras extension has no capabilities to add own python code to the execution or to change the GPU handling. But maybe the upcoming RapidMiner Server 8.0 release might help configuring your lab to assign given nodes with GPUs to certain queues.


Some information about the new architecture:



Learner III bh1779


I have been experiencing problem with "NullPointer Exception" and "Mandatory input missing".

It's the same problem that @asf encountered in https://community.rapidminer.com/t5/RapidMiner-Studio-Forum/Mandatory-input-missing-issue-in-keras-m....

I have already reinstalled Anaconda following the instructions of @jacobcybulski, and the "test" button in the Keras settings gives success.

I have updated the JAVA_HOME variable and my JRE path.

Any help please?



I found my problem. I added the Keras extension in rapidminer but did not add the Python Scripting Extension. After adding that it worked like a charm.



Hi @bh1779, I do not think the Keras extension has a direct dependency on the Python extension (at least not on Linux) - I suspect they share some common code though. I happily uninstalled my Python extension and ran my Keras model without any problems. However, I used to get null pointer exceptions in various situations in versions of the Keras extension prior to 1.0.3, for example when using the "None" metric or when there was no validation split defined with some metrics.

The Missing MNIST Example in Keras for RM


As I was reading the @kakkad2 comment on convolutional neural nets in Keras, I realised that we do not have a working example anywhere to show how to deal with CNNs in Keras for RM, especially when the application is image recognition - the very staple of CNNs. So I have quickly produced a CNN RM process (see attached at the end). To make it simpler, I have reproduced the Keras Python example from the François Chollet github, see:

The problems of replicating the MNIST logic in RM are as follows:

  1. We cannot read image data into RM;
  2. RM wants all data to be in tabular format;
  3. Some of the Keras operator's defaults are not the same as those for Python.

To deal with reading images into RM we could utilize the IMMI extension; however, at the moment I have no access to IMMI for RM 7.6. The next best option is to get the image data using Python and export it as a Pandas data frame with extra meta info, which is a valid example set. The second issue can be handled by unfolding the four-dimensional image data (examples x pixel rows x pixel columns x colour channels) into a two-dimensional table of RM examples (examples x pixel colour channel) - the main problem is then to fold this representation back for convolution to happen. We also need to watch the internal representation of images as passed into RM through Tensorflow or Theano, which could be channels-first or channels-last. Finally, we need to check what defaults RM Keras gives us (e.g. for optimizers) vs what defaults are defined for the same functions in Python, or better, experiment with different settings.
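The unfold/fold round trip just described can be sketched in a few lines of numpy (shapes follow the MNIST example below; a channels-last backend is assumed):

```python
import numpy as np

# A tiny batch of two 28x28 single-channel "images".
imgs = np.arange(2 * 28 * 28 * 1, dtype='float32').reshape(2, 28, 28, 1)

# Unfold into a 2-D table: one row per example, one column per pixel value.
# This is the tabular shape RM wants for an example set.
flat = imgs.reshape(imgs.shape[0], -1)       # shape (2, 784)

# Fold back into (examples, rows, cols, channels) for the convolution layers.
restored = flat.reshape(-1, 28, 28, 1)

assert flat.shape == (2, 784)
assert np.array_equal(imgs, restored)        # the round trip is lossless
```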


The Python code to read in data is very simple. Note that to mirror exactly what was done in the github example, the data was read from Keras standard data set "mnist", its shape saved, data transformed into floats, and finally both training and test "x" and "y" vectors have been "unfolded" and merged into Pandas data frames with "y" column defined as the RM "label" (using rm_metadata attribute). The shape information was also returned as a data frame, so that it could later be used to set image sizes and shapes in the convolutional nets.


from __future__ import print_function
import os
import numpy as np
import pandas as pd

import keras
from keras.datasets import mnist
from keras import backend as K

### Loads and returns MNIST data set in Pandas format
def rm_main():
    # input image dimensions
    img_rows, img_cols, ch_no = 28, 28, 1
    num_classes = 10
    # the data, shuffled and split between train and test sets
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    if K.image_data_format() == 'channels_first':
        input_shape = (ch_no, img_rows, img_cols)
    else:
        input_shape = (img_rows, img_cols, ch_no)
    x_train = x_train.reshape(x_train.shape[0], img_rows*img_cols*ch_no).astype('float32')
    x_test = x_test.reshape(x_test.shape[0], img_rows*img_cols*ch_no).astype('float32')
    x_train /= 255
    x_test /= 255
    # convert image vectors to data frames
    df_train = pd.concat([pd.DataFrame(data={'y': y_train}), pd.DataFrame.from_dict(x_train)], axis=1)
    setattr(df_train, "rm_metadata", {})
    df_train.rm_metadata['y'] = ("nominal","label")
    df_test = pd.concat([pd.DataFrame(data={'y': y_test}), pd.DataFrame.from_dict(x_test)], axis=1)
    setattr(df_test, "rm_metadata", {})
    df_test.rm_metadata['y'] = ("nominal","label")

    # Prepare shape info
    shape_data = np.array([['', 'rows', 'cols', 'ch', 'shape'],
                          ['', img_rows, img_cols, ch_no, str(input_shape)]])
    shape_result = pd.DataFrame(data=shape_data[1:,1:], 
                      index=shape_data[1:,0], 
                      columns=shape_data[0,1:])
    setattr(shape_result, "rm_metadata", {})
    shape_result.rm_metadata['rows'] = ("integer",None)
    shape_result.rm_metadata['cols'] = ("integer",None)
    shape_result.rm_metadata['ch'] = ("integer",None)
    shape_result.rm_metadata['shape'] = ("text",None)
    # Return results
    return df_train, df_test, shape_result

The RM process is very simple - it reads the image data in, passes the training and validation data sets to the Keras sequential operator, and then checks the model performance. The architecture of the Keras network is slightly different from that in the github example, as we need to include an extra initial step to fold the data back into its original shape based on the size and shape info passed into RM. So the model looks like this:

MNIST RM Model Architecture.png

All other network elements are identical to those in the Python code. When the process runs it gives almost exactly the same performance as the same code in Python, which can be plotted in RM or watched in real time using Tensorboard, e.g.

Tensorflow Val Accuracy and Loss.png

Enjoy convolutional neural nets processing images in RapidMiner.




You can find the whole process in the following RMP.


<?xml version="1.0" encoding="UTF-8"?><process version="7.6.001">
  <operator activated="true" class="process" compatibility="7.6.001" expanded="true" name="Process">
    <process expanded="true">
      <operator activated="true" class="python_scripting:execute_python" compatibility="7.4.000" expanded="true" height="124" name="Execute Python" width="90" x="45" y="187">
        <parameter key="script" value="from __future__ import print_function&#10;import os&#10;import numpy as np&#10;import pandas as pd&#10;&#10;import keras&#10;from keras.datasets import mnist&#10;from keras import backend as K&#10;&#10;&#10;### Loads and returns MNIST data set in Pandas format&#10;def rm_main():&#10;    &#10;    # input image dimensions&#10;    img_rows, img_cols, ch_no = 28, 28, 1&#10;    num_classes = 10&#10;    &#10;    # the data, shuffled and split between train and test sets&#10;    (x_train, y_train), (x_test, y_test) = mnist.load_data()&#10;    &#10;    if K.image_data_format() == 'channels_first':&#10;        input_shape = (ch_no, img_rows, img_cols)&#10;    else:&#10;        input_shape = (img_rows, img_cols, ch_no)&#10;    &#10;    x_train = x_train.reshape(x_train.shape[0], img_rows*img_cols*ch_no).astype('float32')&#10;    x_test = x_test.reshape(x_test.shape[0], img_rows*img_cols*ch_no).astype('float32')&#10;    x_train /= 255&#10;    x_test /= 255&#10;    &#10;    # convert image vectors to data frames&#10;    df_train = pd.concat([pd.DataFrame(data={'y': y_train}), pd.DataFrame.from_dict(x_train)], axis=1)&#10;    setattr(df_train, &quot;rm_metadata&quot;, {})&#10;    df_train.rm_metadata['y'] = (&quot;nominal&quot;,&quot;label&quot;)&#10;    df_test = pd.concat([pd.DataFrame(data={'y': y_test}), pd.DataFrame.from_dict(x_test)], axis=1)&#10;    setattr(df_test, &quot;rm_metadata&quot;, {})&#10;    df_test.rm_metadata['y'] = (&quot;nominal&quot;,&quot;label&quot;)&#10;&#10;    # Prepare shape info&#10;    shape_data = np.array([['', 'rows', 'cols', 'ch', 'shape'],&#10;                          ['', img_rows, img_cols, ch_no, str(input_shape)]])&#10;    &#10;    shape_result = pd.DataFrame(data=shape_data[1:,1:], &#10;                      index=shape_data[1:,0], &#10;                      columns=shape_data[0,1:])&#10;    setattr(shape_result, &quot;rm_metadata&quot;, {})&#10;    shape_result.rm_metadata['rows'] = 
(&quot;integer&quot;,None)&#10;    shape_result.rm_metadata['cols'] = (&quot;integer&quot;,None)&#10;    shape_result.rm_metadata['ch'] = (&quot;integer&quot;,None)&#10;    shape_result.rm_metadata['shape'] = (&quot;text&quot;,None)&#10;    &#10;    # Return results&#10;    return df_train, df_test, shape_result&#10;    "/>
      <operator activated="true" class="extract_macro" compatibility="7.6.001" expanded="true" height="68" name="Extract Macro" width="90" x="246" y="391">
        <parameter key="macro" value="img_shape"/>
        <parameter key="macro_type" value="data_value"/>
        <parameter key="attribute_name" value="shape"/>
        <parameter key="example_index" value="1"/>
        <list key="additional_macros">
          <parameter key="img_rows" value="rows"/>
          <parameter key="img_cols" value="cols"/>
          <parameter key="img_channels" value="ch"/>
      <operator activated="true" class="generate_macro" compatibility="7.6.001" expanded="true" height="82" name="Generate Macro" width="90" x="380" y="391">
        <list key="function_descriptions">
          <parameter key="img_size" value="eval(%{img_rows})*eval(%{img_cols})*eval(%{img_channels})"/>
      <operator activated="true" class="multiply" compatibility="7.6.001" expanded="true" height="103" name="Multiply" width="90" x="246" y="238"/>
      <operator activated="true" class="keras:sequential" compatibility="1.0.003" expanded="true" height="166" name="Keras Model" width="90" x="380" y="34">
        <parameter key="input shape" value="(%{img_size},)"/>
        <parameter key="loss" value="categorical_crossentropy"/>
        <parameter key="optimizer" value="Adadelta"/>
        <parameter key="learning rate" value="1.0"/>
        <parameter key="rho" value="0.95"/>
        <parameter key="use metric" value="true"/>
        <enumeration key="metric">
          <parameter key="metric" value="categorical_accuracy"/>
        <parameter key="epochs" value="12"/>
        <parameter key="batch size" value="128"/>
        <enumeration key="callbacks">
          <parameter key="callbacks" value="TensorBoard(log_dir='/tmp/keras_logs/MNIST_RM', histogram_freq=0, write_graph=True, write_images=True, embeddings_freq=0, embeddings_layer_names=None, embeddings_metadata=None)"/>
        <parameter key="verbose" value="2"/>
        <parameter key="shuffle" value="true"/>
        <process expanded="true">
          <operator activated="true" class="keras:core_layer" compatibility="1.0.003" expanded="true" height="82" name="Add Reshape" width="90" x="45" y="34">
            <parameter key="layer_type" value="Reshape"/>
            <parameter key="target_shape" value="%{img_shape}"/>
            <parameter key="dims" value="1.1"/>
          <operator activated="true" class="keras:conv_layer" compatibility="1.0.003" expanded="true" height="82" name="Add Conv2D 1" width="90" x="179" y="34">
            <parameter key="layer_type" value="Conv2D"/>
            <parameter key="filters" value="32"/>
            <parameter key="kernel_size_2d" value="3.3"/>
            <parameter key="kernel_size_3d" value="1.1.1"/>
            <parameter key="strides_2d" value="1.1"/>
            <parameter key="strides_3d" value="1.1.1"/>
            <parameter key="padding_1d" value="1.1"/>
            <parameter key="cropping_1d" value="1.1"/>
            <parameter key="size_2d" value="2.2"/>
            <parameter key="size_3d" value="2.2.2"/>
            <parameter key="dilation_rate_2d" value="1.1"/>
            <parameter key="dilation_rate_3d" value="1.1.1"/>
            <parameter key="activation_function" value="'relu'"/>
          <operator activated="true" class="keras:conv_layer" compatibility="1.0.003" expanded="true" height="82" name="Add Conv2D 2" width="90" x="313" y="34">
            <parameter key="layer_type" value="Conv2D"/>
            <parameter key="filters" value="64"/>
            <parameter key="kernel_size_2d" value="3.3"/>
            <parameter key="kernel_size_3d" value="1.1.1"/>
            <parameter key="strides_2d" value="1.1"/>
            <parameter key="strides_3d" value="1.1.1"/>
            <parameter key="padding_1d" value="1.1"/>
            <parameter key="cropping_1d" value="1.1"/>
            <parameter key="size_2d" value="2.2"/>
            <parameter key="size_3d" value="2.2.2"/>
            <parameter key="dilation_rate_2d" value="1.1"/>
            <parameter key="dilation_rate_3d" value="1.1.1"/>
            <parameter key="activation_function" value="'relu'"/>
          </operator>
          <operator activated="true" class="keras:pooling_layer" compatibility="1.0.003" expanded="true" height="82" name="Add Pooling Layer" width="90" x="447" y="34">
            <parameter key="layer_type" value="MaxPooling2D"/>
            <parameter key="pool_size_2d" value="2.2"/>
            <parameter key="pool_size_3d" value="2.2.2"/>
            <parameter key="strides_2d" value="2.2"/>
            <parameter key="strides_3d" value="2.2.2"/>
          </operator>
          <operator activated="true" class="keras:core_layer" compatibility="1.0.003" expanded="true" height="82" name="Add Dropout 1" width="90" x="581" y="34">
            <parameter key="layer_type" value="Dropout"/>
            <parameter key="rate" value="0.25"/>
            <parameter key="dims" value="1.1"/>
          </operator>
          <operator activated="true" class="keras:core_layer" compatibility="1.0.003" expanded="true" height="82" name="Add Flatten" width="90" x="179" y="238">
            <parameter key="layer_type" value="Flatten"/>
            <parameter key="dims" value="1.1"/>
          </operator>
          <operator activated="true" class="keras:core_layer" compatibility="1.0.003" expanded="true" height="82" name="Add Dense 1" width="90" x="313" y="238">
            <parameter key="no_units" value="128"/>
            <parameter key="activation_function" value="'relu'"/>
            <parameter key="dims" value="1.1"/>
          </operator>
          <operator activated="true" class="keras:core_layer" compatibility="1.0.003" expanded="true" height="82" name="Add Dropout 2" width="90" x="447" y="238">
            <parameter key="layer_type" value="Dropout"/>
            <parameter key="rate" value="0.5"/>
            <parameter key="dims" value="1.1"/>
          </operator>
          <operator activated="true" class="keras:core_layer" compatibility="1.0.003" expanded="true" height="82" name="Add Dense Softmax" width="90" x="581" y="238">
            <parameter key="no_units" value="10"/>
            <parameter key="activation_function" value="'softmax'"/>
            <parameter key="dims" value="1.1"/>
          </operator>
          <connect from_op="Add Reshape" from_port="layers 1" to_op="Add Conv2D 1" to_port="layers"/>
          <connect from_op="Add Conv2D 1" from_port="layers 1" to_op="Add Conv2D 2" to_port="layers"/>
          <connect from_op="Add Conv2D 2" from_port="layers 1" to_op="Add Pooling Layer" to_port="layers"/>
          <connect from_op="Add Pooling Layer" from_port="layers 1" to_op="Add Dropout 1" to_port="layers"/>
          <connect from_op="Add Dropout 1" from_port="layers 1" to_op="Add Flatten" to_port="layers"/>
          <connect from_op="Add Flatten" from_port="layers 1" to_op="Add Dense 1" to_port="layers"/>
          <connect from_op="Add Dense 1" from_port="layers 1" to_op="Add Dropout 2" to_port="layers"/>
          <connect from_op="Add Dropout 2" from_port="layers 1" to_op="Add Dense Softmax" to_port="layers"/>
          <connect from_op="Add Dense Softmax" from_port="layers 1" to_port="layers 1"/>
          <portSpacing port="sink_layers 1" spacing="0"/>
          <portSpacing port="sink_layers 2" spacing="0"/>
        </process>
      </operator>
      <operator activated="true" class="generate_id" compatibility="7.6.001" expanded="true" height="82" name="Generate ID" width="90" x="581" y="34"/>
      <operator activated="true" class="keras:apply" compatibility="1.0.003" expanded="true" height="82" name="Apply Keras Model" width="90" x="581" y="238">
        <parameter key="batch_size" value="8"/>
        <parameter key="verbose" value="1"/>
      </operator>
      <operator activated="true" class="performance_classification" compatibility="7.6.001" expanded="true" height="82" name="Performance" width="90" x="715" y="136">
        <parameter key="kappa" value="true"/>
        <parameter key="correlation" value="true"/>
        <list key="class_weights"/>
      </operator>
      <connect from_op="Execute Python" from_port="output 1" to_op="Keras Model" to_port="training set"/>
      <connect from_op="Execute Python" from_port="output 2" to_op="Multiply" to_port="input"/>
      <connect from_op="Execute Python" from_port="output 3" to_op="Extract Macro" to_port="example set"/>
      <connect from_op="Extract Macro" from_port="example set" to_op="Generate Macro" to_port="through 1"/>
      <connect from_op="Multiply" from_port="output 1" to_op="Keras Model" to_port="validation set"/>
      <connect from_op="Multiply" from_port="output 2" to_op="Apply Keras Model" to_port="unlabelled data"/>
      <connect from_op="Keras Model" from_port="model" to_op="Apply Keras Model" to_port="model"/>
      <connect from_op="Keras Model" from_port="history" to_op="Generate ID" to_port="example set input"/>
      <connect from_op="Generate ID" from_port="example set output" to_port="result 1"/>
      <connect from_op="Apply Keras Model" from_port="labelled data" to_op="Performance" to_port="labelled data"/>
      <connect from_op="Apply Keras Model" from_port="model" to_port="result 4"/>
      <connect from_op="Performance" from_port="performance" to_port="result 2"/>
      <connect from_op="Performance" from_port="example set" to_port="result 3"/>
      <portSpacing port="source_input 1" spacing="0"/>
      <portSpacing port="sink_result 1" spacing="0"/>
      <portSpacing port="sink_result 2" spacing="21"/>
      <portSpacing port="sink_result 3" spacing="189"/>
      <portSpacing port="sink_result 4" spacing="126"/>
      <portSpacing port="sink_result 5" spacing="0"/>



Hi Jacob: Thanks so much, not only for your interesting and helpful post above, but also for all the time and thought you have put into the many posts you have authored on this large theme! ;-)   Curiosity brings many rewards, just not always the rewards one might expect. Best wishes, Michael ;-)

@M_Martin, thanks Michael. I imagine everybody who goes to the deep (learning) end of the RM pool will sooner or later confront these issues, and possibly find a helping hand in the form of ready-made solutions online. As Keras is a new plugin and RM is not like Python or R, I posted the mini-tute here. However, as you said, this thread grew a bit long, so perhaps in the future I'll post some examples in the "Data Science" corner? As we now have at least two deep learning products within the RM stratosphere, Keras and H2O (and DeepLearning4J was there for a few minutes), maybe it would be worthwhile setting up a discussion area devoted to "Deep Learning" within "Data Science"?




P.S. I am an academic in the Business School, where there is a huge interest in Deep Learning and Text Analytics (or Advanced Analytics in general) from researchers in Marketing, Management, Finance and Accounting, so we are looking at RapidMiner as the best analytics vehicle to support people who could solve their research problems on a high conceptual level rather than turning to coders for help.

RM Certified Analyst

I ran into the same issue that someone reported in another thread: https://community.rapidminer.com/t5/RapidMiner-Studio-Forum/keras-issue/td-p/42267 while running the Keras example:


Oct 30, 2017 12:36:48 PM INFO: Traceback (most recent call last):
Oct 30, 2017 12:36:48 PM INFO: File "script", line 301, in rm_main
Oct 30, 2017 12:36:48 PM INFO: model_weights_to_csv(model, '/Users/igor.elbert/weights/')
Oct 30, 2017 12:36:48 PM INFO: File "script", line 226, in model_weights_to_csv
Oct 30, 2017 12:36:48 PM INFO: if not os.path.exists(path + layer_names[math.floor(i / 2)] + '.csv'):
Oct 30, 2017 12:36:48 PM INFO:
Oct 30, 2017 12:36:48 PM INFO: TypeError: list indices must be integers, not float (script, line 226)


Any advice or resolution?


MacOS, Python 2.7

Contributor II gbortz27

Which directory must I run the instructions from? When I run these at the command prompt, it can't interpret the commands; it won't run from C:\. Is it in one of the Anaconda subdirectories?

I am running Windows 10.

I installed Anaconda but with Python 3.6... must I re-install with 3.5?


Is every command run from the same command line and directory, and if so, what is it?

@ielbert, I am not a Mac expert; however, while Tensorflow can be installed on Python 2.7, I have run Keras only with Anaconda and Python 3.5 and 3.6. I am a latecomer when it comes to Python, so my CNN example above used only Python 3. I assume the error happens when the model is written to a file, possibly during a callback? See if it works when you take the callbacks out.



@gbortz27, when you install Tensorflow and Keras after you have installed Anaconda (Python 3.6 is fine), on Win10 the easiest way is to start Python from the "Anaconda Prompt", which you can find in your Start menu. The Anaconda Prompt will find its Python regardless of where it is. However, after you create a Tensorflow environment, remember to use the "activate" statement, as the environment will use its own Python (which may also be a different version from your base Anaconda Python). After the activation, the correct Python will run from the command line. Type it all in there regardless of which directory you are in. The only directory-specific instructions are for the NVIDIA and CUDA toolkit if you are running with GPUs.
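As a concrete illustration of the steps above, a minimal Anaconda Prompt transcript on Windows 10 might look like this (the environment name "tensorflow" is just an example, not the exact setup from this thread):

```shell
:: Create an isolated environment (any name works; "tensorflow" is an example)
conda create -n tensorflow python=3.5
:: Activate it so that "python" and "pip" resolve to this environment
activate tensorflow
:: Install packages into the now-active environment
conda install -c conda-forge keras
:: Verify which interpreter is now on the PATH
python --version
```

Note that `activate` is the old Windows syntax; newer conda versions use `conda activate` instead.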


RM Certified Analyst

@jacobcybulski, removing the callback gives me a NullPointerError.

@ielbert, sorry I missed it before: in Python 2, math.floor returns a float, not an int as you might expect, so try changing your call to:

if not os.path.exists(path + layer_names[int(math.floor(i / 2))] + '.csv')
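To see why the `int()` wrapper is needed, here is a small self-contained sketch (the `layer_names` list and index are made up for illustration):

```python
import math

# On Python 2, math.floor returns a float (e.g. math.floor(4 / 2) -> 2.0),
# which cannot be used as a list index; on Python 3 it returns an int.
# Wrapping the result in int() is safe on both versions.
layer_names = ['conv2d_1', 'conv2d_2', 'dense_1']  # hypothetical layer names
i = 4
idx = int(math.floor(i / 2))
print(layer_names[idx])  # prints 'dense_1'
```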


Contributor II hermawan_eriadi

I got an error message:

ValueError: could not convert string to float: 'GoldCr' (script, line 300)
TypeError: 'NoneType' object is not callable

I think script line 300 refers to the Python script, not the XML. 'GoldCr' is a value of a (polynominal) attribute.

What is the problem?



Learner III 56005393_t30l41

After you train a model checkpoint (Model.ckpt) in Keras, how do you later restore it?



RM Research

@gbortz27 you don't necessarily need to use the Anaconda Prompt to first activate an environment. If you installed tensorflow and keras into an environment instead of system-wide, you can directly use the path to that environment's Python executable as the python path in the settings of the Keras extension inside RapidMiner. For that, look for the 'envs' folder inside your Anaconda folder. It should contain a folder named after the name of the environment.


@hermawan_eriadi make sure to convert all your nominal attributes into numerical ones first. Your error sounds like there are still strings in your data. For example, you can use the "Nominal to Numerical" operator.
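To illustrate what that conversion does, here is a minimal standard-library sketch of dummy coding (the attribute values are invented, echoing the 'GoldCr' value from the error above):

```python
# One-hot (dummy) encoding of a nominal attribute, mirroring what the
# "Nominal to Numerical" operator produces with dummy coding.
values = ['GoldCr', 'SilverCr', 'GoldCr']   # nominal attribute values
categories = sorted(set(values))            # ['GoldCr', 'SilverCr']
encoded = [[1.0 if v == c else 0.0 for c in categories] for v in values]
print(encoded)  # [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]]
```

Each nominal value becomes one numeric column per category, which is why the number of attributes grows after the conversion.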


@56005393_t30l41 if you want to load a model created with Keras within RapidMiner, you can use the "Store" and "Retrieve" operators to save and load the model. If you want to load a model created in Keras within Python, you can use the "Python Scripting" operator to load and apply the model to your data.



Using Python 3.6.x should be fine.


Hope this helps,


Contributor II hermawan_eriadi
Thanks @pschlunder. Does that mean Keras can't handle nominal attributes? Converting will greatly increase the number of my attributes. Do we still need some kind of feature selection to find the best attributes, or will they be selected automatically by the layers in Keras? Thanks.
RM Research

Yes, that's correct. Neural networks only work on numerical data. If you find algorithms that also deal with nominal attributes, they're doing some transformation under the hood.


In general, I'd start by training your estimator on all attributes (of course removing obviously useless/id-like attributes first) and check the model's performance in terms of a certain score and computing resources. Then you can decide whether it is necessary to apply a feature selection first or not.


Please keep in mind that you need to validate your feature selection as well.
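As a toy sketch of the simplest pre-filtering step mentioned above, namely dropping obviously useless, near-constant attributes (the data and threshold here are invented for illustration):

```python
import statistics

# Two hypothetical attributes: one informative, one constant (id-like/useless).
data = {
    'useful':   [0.1, 0.9, 0.4, 0.8],
    'constant': [1.0, 1.0, 1.0, 1.0],
}
threshold = 1e-6  # arbitrary cut-off for "near-constant"
kept = [name for name, column in data.items()
        if statistics.pvariance(column) > threshold]
print(kept)  # ['useful']
```

A real feature selection (e.g. filter or wrapper methods run inside a cross-validation) should of course itself be validated, as noted above.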