Anomaly detection using the Deep Learning operator in RapidMiner
I have a 10,000-example set with a highly imbalanced class, i.e. Success_Flag = 0 (9800 records) and Success_Flag = 1 (200 records). I tried the One-Class SVM from the SVM (LibSVM) operator, with cross-validation and an optimisation grid, to identify the anomalies, and the results are OK. I was hoping to do the same with the Deep Learning operator, but unfortunately it does not support one-class classification. I then thought of "tricking" the Deep Learning operator as follows:
1. Separate the example set into two sets by class, i.e. 9800 class-0 records and 200 class-1 records.
2. Partition the 9800 class-0 records into 7840 observations (80%) and 1960 observations (20%).
3. From the 200 class-1 records, randomly sample one record (with replacement).
4. Append this one class-1 record to the 7840 (80%) class-0 observations, giving 7841 training observations (so there are now two classes).
5. Append the 1960 (20%) class-0 observations to the 200 class-1 records, giving 2160 test observations (again with two classes).
6. Train a Deep Learning classifier with 10-fold cross-validation on the 7841 training observations, inside an optimisation grid.
7. Evaluate the classifier with the optimised hyperparameters on the 2160-observation test set.
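To make the split concrete, the steps above can be sketched in Python with index bookkeeping only (the row counts and seed are illustrative; in my process this is done with RapidMiner's Filter Examples, Split Data, Sample, and Append operators):

```python
import numpy as np

rng = np.random.default_rng(42)  # illustrative fixed seed

# Stand-in for the 10,000-row example set: indices 0..9799 are class 0,
# indices 9800..9999 are class 1.
n0, n1 = 9800, 200
class0 = np.arange(n0)
class1 = np.arange(n0, n0 + n1)

# Step 2: 80/20 partition of the class-0 records.
perm = rng.permutation(class0)
train0, test0 = perm[:7840], perm[7840:]

# Step 3: sample a single class-1 record with replacement.
one_anomaly = rng.choice(class1, size=1, replace=True)

# Steps 4-5: build the two-class training and test sets.
train_idx = np.concatenate([train0, one_anomaly])  # 7841 rows, 2 classes
test_idx = np.concatenate([test0, class1])         # 2160 rows, 2 classes

print(len(train_idx), len(test_idx))  # 7841 2160
```

Note how extreme the training imbalance becomes: 1 positive out of 7841 rows, while the test set holds all 200 positives.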
The test results look much better than the One-Class SVM results. Hence my questions are:
A. Is my "trick" correct from a data mining best practice perspective?
B. Are there any potential problems with what I did?
C. If my "trick" is incorrect, what would you recommend for doing anomaly detection with deep learning, without resorting to integrating R or Python scripts?
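For reference, my One-Class SVM baseline amounts to the following scikit-learn sketch (the synthetic data and the nu/gamma values are illustrative assumptions, not my actual process, which uses the SVM (LibSVM) operator in one-class mode):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Illustrative stand-in for the data: "normal" points near the origin,
# anomalies shifted well away from the bulk.
X_normal = rng.normal(0.0, 1.0, size=(9800, 4))
X_anom = rng.normal(4.0, 1.0, size=(200, 4))

# One-class setting: fit on normal data only.
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.02).fit(X_normal)

# predict() returns +1 for inliers and -1 for outliers.
pred = ocsvm.predict(np.vstack([X_normal[:100], X_anom]))
detection_rate = np.mean(pred[100:] == -1)  # fraction of anomalies flagged
print(detection_rate)
```

The model never sees a labelled anomaly during training, which is the property my deep-learning "trick" only approximates by injecting a single positive record.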