How does Auto Model handle unequal misclassification costs? I don't see an option to specify the relation between FP and FN costs, or to enter values into a cost matrix. It's a standard feature in most data mining packages.
Did you take a look at the Results in Auto Model? Each model provides a performance screen with the results you are requesting. Please take a look at an example I uploaded recently (near the end of the video).
Sorry, but all I can see there is a regular confusion matrix. I need to be able to take uneven misclassification costs into account, as well as prior probabilities. So you need either something like:
If you go to the actual RapidMiner process for any given model that you generate using AutoModel, you can simply change the default performance operator to the Performance(Costs) operator, which lets you enter the cost matrix in exactly the way you are describing. You could also select one of the many other performance operators available. It's just not built into the AutoModel interface to do that (yet).
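For intuition, the kind of evaluation a cost-matrix performance operator does boils down to averaging the entries of a user-supplied matrix over the predictions. A minimal sketch in plain Python (illustrative only, not RapidMiner code; the function name and matrix layout are my own):

```python
# Hypothetical sketch of cost-sensitive evaluation with a cost matrix.
# cost_matrix[true_class][predicted_class]; the diagonal is usually 0.

def average_cost(y_true, y_pred, cost_matrix):
    """Mean misclassification cost; class labels are integer indices."""
    total = sum(cost_matrix[t][p] for t, p in zip(y_true, y_pred))
    return total / len(y_true)

# Example: a false negative (true=1, predicted=0) costs 5x a false positive.
costs = [[0, 1],   # true class 0: FP costs 1
         [5, 0]]   # true class 1: FN costs 5
y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 0, 1]
print(average_cost(y_true, y_pred, costs))  # (1 + 5) / 5 = 1.2
```

Minimizing this average cost instead of plain error rate is what makes the FP/FN trade-off explicit.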
Can I see some examples of processes involving the Performance (Costs) operator? Also, is there an operator for prior probabilities too?
Most basic operators (including Performance (Costs)) contain tutorial processes built into the help for that operator. See the screenshot below. You can read about the operator in detail and then see a sample process with the operator configured.
In most cases the selected machine learning algorithm will derive the prior probabilities automatically from your dataset based on class occurrences. If your sample has been stratified or otherwise artificially constructed and you want to adjust that then you can always use the Generate Weight operator to do so.
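The weight adjustment described above can be sketched in plain Python (this is a conceptual illustration, not the Generate Weight operator itself; the `desired_priors` mapping is an assumed input):

```python
from collections import Counter

def class_weights(labels, desired_priors):
    """Per-example weights so the weighted class distribution matches
    desired_priors instead of the sample's observed distribution."""
    counts = Counter(labels)
    n = len(labels)
    # observed prior of class c is counts[c] / n; scale each example so
    # the total weight of class c is proportional to desired_priors[c]
    return [desired_priors[c] * n / counts[c] for c in labels]

labels = ["pos"] * 2 + ["neg"] * 8          # observed priors: 0.2 / 0.8
w = class_weights(labels, {"pos": 0.5, "neg": 0.5})
# each "pos" example gets weight 2.5, each "neg" gets 0.625
```

A weight-aware learner trained with these weights behaves as if the classes had the desired priors.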
Thanks for the info. Can you tell me which operator I can use to oversample rare events, and when I should consider doing this in the first place? If my prior target class probability is below 20, 10, or 5%? Thank you.
SMOTE from the Operator Toolbox is one good operator for this.
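For intuition, here is a language-agnostic sketch of what SMOTE does, in plain Python rather than a RapidMiner process (simplified to 1 nearest neighbour; real SMOTE interpolates with a randomly chosen one of k neighbours):

```python
import random

def smote_1nn(minority, n_new, seed=0):
    """Toy SMOTE: create n_new synthetic minority points by interpolating
    between a random minority sample and its nearest minority neighbour."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # nearest neighbour of a among the other minority points
        b = min((p for p in minority if p is not a),
                key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)))
        gap = rng.random()  # random position on the segment a -> b
        synthetic.append(tuple(x + gap * (y - x) for x, y in zip(a, b)))
    return synthetic

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new = smote_1nn(pts, 2)
# each synthetic point lies on a segment between two original points
```

Because the new points are interpolations rather than copies, the classifier sees a smoother minority region instead of exact duplicates.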
How do I properly apply k-fold cross-validation with oversampling (SMOTE) in RapidMiner? Can you show me a sample process, please? Also, how do I adjust probabilities after oversampling?
So, as @mschmitz explained, the SMOTE operator is part of the Operator Toolbox extension. You will need to use "normal" operators rather than Auto Model to do this.
You can access tutorials on how to use the SMOTE operator by putting the operator in your Design view, going to the Help panel, and then clicking "Jump To Tutorial Process".
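On the last part of the question above (adjusting probabilities after oversampling): a common approach is the standard prior-correction formula, which rescales a predicted probability from the artificial training prior back to the true prior. A hedged sketch in plain Python (the function name is my own; depending on your model you may also need proper calibration):

```python
def adjust_probability(p, train_prior, true_prior):
    """Correct a predicted positive-class probability for the prior shift
    introduced by oversampling: rescale the odds by the ratio of priors."""
    num = p * true_prior / train_prior
    den = num + (1 - p) * (1 - true_prior) / (1 - train_prior)
    return num / den

# after SMOTE the training set was 50/50, but the real prior is 5%:
# a raw score of 0.8 drops to roughly 0.17 once corrected
adjust_probability(0.8, train_prior=0.5, true_prior=0.05)
```

Note that if the training prior equals the true prior, the formula returns the probability unchanged, as it should.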
Sorry, but I can't see such an operator in the toolbox. Where can I find it?
Hi @mattnikiel, if you download the Operator Toolbox from the Marketplace, you should be able to see the operator here:
OK, I got it. I'm wondering, is it possible to perform oversampling during cross-validation, just like described here:
I suppose it is possible, if I correctly understood what you want to achieve from looking at your link.
This way, only the training part of the data will be upsampled:
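The pattern above (oversample inside each fold, never the held-out part) can be sketched language-agnostically in plain Python; the stand-in `oversample` and `train_and_eval` callables below are placeholders for your own SMOTE and modelling steps, not RapidMiner operators:

```python
import random

def cv_with_oversampling(data, labels, k, oversample, train_and_eval):
    """k-fold CV where oversampling touches the training folds only, so
    the held-out fold keeps the true class distribution and the
    performance estimate stays honest."""
    idx = list(range(len(data)))
    random.Random(0).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for test in folds:
        train = [j for f in folds if f is not test for j in f]
        X_tr = [data[j] for j in train]
        y_tr = [labels[j] for j in train]
        X_tr, y_tr = oversample(X_tr, y_tr)   # e.g. SMOTE, inside the fold
        scores.append(train_and_eval(X_tr, y_tr,
                                     [data[j] for j in test],
                                     [labels[j] for j in test]))
    return scores

# toy stand-ins: duplicate the minority class, then "evaluate" by
# reporting the positive rate seen during training
def duplicate_minority(X, y):
    minority = min(set(y), key=y.count)
    extra = [(x, c) for x, c in zip(X, y) if c == minority]
    return X + [x for x, _ in extra], y + [c for _, c in extra]

data = list(range(10))
labels = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
scores = cv_with_oversampling(
    data, labels, k=5,
    oversample=duplicate_minority,
    train_and_eval=lambda Xt, yt, Xe, ye: sum(yt) / len(yt))
```

The key design point is that `oversample` runs after the train/test split, so no synthetic example can leak information about the held-out fold.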