
MetaCost

ilaria_gori Member Posts: 15 Maven
edited November 2018 in Help
Dear all,
I have some problem in understanding how the MetaCost works. Could you help me, please?

I will try to describe what is not clear:

Here is what I understood: I read that MetaCost is "bagging with costs". First step: N models are built with bagging and, together with the cost matrix, they are used to assign to each training instance a "prediction" that minimizes the expected cost. Second step: these predictions are used as labels to train another single classifier, which is the final model. A small sketch of my understanding follows below.
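To make sure I have the procedure right, here is a minimal sketch of the two steps as I understand them, written with scikit-learn purely for illustration (this is not the RapidMiner implementation, and the function names are my own):

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def metacost_relabel(X, y, cost_matrix, n_bags=10, random_state=0):
    """Step 1: bag N models, then relabel each training instance with the
    class that minimizes the expected cost under the bagged probabilities.
    Convention (my assumption): cost_matrix[i, j] = cost of predicting
    class i when the true class is j."""
    bagging = BaggingClassifier(DecisionTreeClassifier(),
                                n_estimators=n_bags,
                                random_state=random_state).fit(X, y)
    proba = bagging.predict_proba(X)        # P(j | x) for each training x
    expected_cost = proba @ cost_matrix.T   # expected cost of predicting each class i
    return np.argmin(expected_cost, axis=1) # relabel with the cheapest class

def metacost(X, y, cost_matrix):
    """Step 2: train a single final classifier on the relabelled data."""
    y_relabelled = metacost_relabel(X, y, cost_matrix)
    return DecisionTreeClassifier().fit(X, y_relabelled)
```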

Here is my experience: if I train a classifier with cost matrix [0 1; 1 0] or with [0 2; 1 0] on the same training set, I obtain two models that do not differ, i.e. when I apply them to the same set I get the same ROC curve and the same outputs for each example. The only thing that changes is the operating point on the ROC curve at which sensitivity and specificity are calculated.
This should be true for the first step of MetaCost (bagging and construction of the "predictions" minimizing the expected cost), but how can it be true after the second step? That is to say, how can it be true if a "final model" is learnt using as labels the predictions obtained in the first step? The small worked example below shows the expected-cost computation I am referring to.
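For concreteness, this is the expected-cost relabelling I mean, computed for a single instance with made-up probabilities; the convention cost[i, j] = cost of predicting class i when the true class is j is my assumption:

```python
import numpy as np

# Hypothetical bagged class probabilities for one training instance:
# P(class 0 | x) = 0.6, P(class 1 | x) = 0.4 (made-up numbers)
p = np.array([0.6, 0.4])

# cost[i, j] = cost of predicting class i when the true class is j
uniform_cost = np.array([[0, 1],
                         [1, 0]])
asymmetric_cost = np.array([[0, 2],
                            [1, 0]])

for name, C in [("[0 1; 1 0]", uniform_cost), ("[0 2; 1 0]", asymmetric_cost)]:
    expected = C @ p  # expected cost of predicting class 0 and class 1
    print(name, "expected costs:", expected, "-> relabel as class", expected.argmin())
```

With these numbers the first matrix relabels the instance as class 0 and the second as class 1, which is exactly why I would expect the two final models to differ, contrary to what I observe.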

I would be very grateful if you could explain to me what I have not understood about this procedure.

Thanks a lot

ilaria