
How would you evaluate a campaign sales response model?

Rapidito Member Posts: 13 Contributor II
Say, for example, a company wants to completely remake its campaign segmentation technique, basing it solely on a data mining (DM) model. Campaign history information is therefore available.

How would you evaluate results?

If you only consider the clients that were in past campaigns, isn't it possible for the model to ignore well-known traditional segmentation rules, such as, in a telco, not offering a credit upgrade to someone who is barely using his line?

I was considering filling the false class with all the rest of the clients, not just the ones that took part in the campaign. What do you think about this?
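
(For concreteness, here is a rough Python/pandas sketch of what that proposal could look like; the file and column names such as clients.csv, campaign_history.csv and responded are my own illustrative assumptions, not anything from the thread.)

```python
import pandas as pd

# Hypothetical inputs: the full client base and the campaign history with a
# response flag. All names here are placeholders.
clients = pd.read_csv("clients.csv")             # one row per client_id
campaigns = pd.read_csv("campaign_history.csv")  # client_id, responded

responders = set(campaigns.loc[campaigns["responded"] == 1, "client_id"])

# Variant A: classic response modelling, training only on contacted clients.
contacted = clients[clients["client_id"].isin(campaigns["client_id"])].copy()
contacted["label"] = contacted["client_id"].isin(responders).map({True: "true", False: "false"})

# Variant B: the proposal above, filling the "false" class with *all* remaining
# clients, not only the contacted non-responders.
everyone = clients.copy()
everyone["label"] = everyone["client_id"].isin(responders).map({True: "true", False: "false"})
```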

Answers

  • IngoRM Administrator, Moderator, Employee, RapidMiner Certified Analyst, RapidMiner Certified Expert, Community Manager, RMResearcher, Member, University Professor Posts: 1,751 RM Founder
    Hi,

    I was considering filling the false class with all the rest of the clients, not just the ones that took part in the campaign. What do you think about this?
    Hmm, this could actually lead to the problem that additional "good ones" might exist in this set of clients (which is fine for the data mining model, since it finds them as well...), but also that too many clients would get a high score even though they would have been ruled out by "outer" criteria. People then often argue: "Why did the data mining model not figure out that we do not want to consider these?" And the answer "Because neither the training examples nor the influence factors covered this" is not really helpful here.

    I would suggest checking whether attributes exist in your data which would make this "ruling out" possible at all. The second thing to check is whether your negative training examples also contained clients who would have been ruled out by those outer constraints. If the answer to both questions is "yes", you can probably go ahead and simply add all other clients to the "false" class. Otherwise, I would check whether you can optimize your data and/or your analysis processes.
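
    (A minimal sketch of how those two checks could look in Python/pandas; the placeholder rule and the column names monthly_usage and label are invented for illustration, not taken from the thread.)

    ```python
    import pandas as pd

    train = pd.read_csv("training_set.csv")  # hypothetical training table with a "label" column

    def would_be_ruled_out(df: pd.DataFrame) -> pd.Series:
        # Placeholder for whatever the real "outer" business constraint is,
        # e.g. "no credit upgrade for clients with very low line usage".
        return df["monthly_usage"] < 10

    # 1) Do attributes exist that make the ruling-out possible at all?
    has_rule_attributes = "monthly_usage" in train.columns

    # 2) Did the negative training examples contain clients who would have been
    #    ruled out by those outer constraints?
    negatives = train[train["label"] == "false"]
    negatives_cover_rule = has_rule_attributes and would_be_ruled_out(negatives).any()

    print(has_rule_attributes, negatives_cover_rule)
    ```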

    If the attributes needed for checking the outer constraints are available, but the modelling already happened on the wrong data basis, or the negative clients were selected differently, you can still go for a semi-automatic approach and manually de-select those customers before scoring, or even afterwards by setting their score to 0 (or whatever value is appropriate on your scale).
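
    (Sketch of that semi-automatic route, assuming a scored table and a made-up outer constraint; owns_car and the file name are placeholders.)

    ```python
    import pandas as pd

    # Score everyone with the model first, then overwrite the score for clients
    # hit by an outer business rule. All column names are illustrative.
    scored = pd.read_csv("scored_clients.csv")  # client_id, score, owns_car, ...

    rule_violated = scored["owns_car"] == 0     # placeholder outer constraint
    scored.loc[rule_violated, "score"] = 0.0    # or filter them out before scoring

    target_list = scored.sort_values("score", ascending=False).head(10_000)
    ```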

    Just a few thoughts.

    Cheers,
    Ingo
  • Rapidito Member Posts: 13 Contributor II
    Ingo Mierswa wrote:
    Hmm, this could actually lead to the problem that additional "good ones" might exist in this set of clients (which is fine for the data mining model, since it finds them as well...), but also that too many clients would get a high score even though they would have been ruled out by "outer" criteria. People then often argue: "Why did the data mining model not figure out that we do not want to consider these?" And the answer "Because neither the training examples nor the influence factors covered this" is not really helpful here.
    Actually, what I'm thinking is that preparing the data without the rest of the clients would give a model that potentially targets non-interesting cases. Consider a label with three values: true, false, badtrue. Consider a predictor X in [0-30], and then:

    [0-10) = false.
    [10-20) = true.
    [20-30] = badtrue.

    Not including the badtrue cases would give a model that says: if X < 10 then FALSE, else TRUE. Including them would add an extra rule to the model, which would then be capable of ruling out the badtrues.
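
    (A tiny synthetic check of exactly this effect, using a scikit-learn decision tree; the data is generated purely for illustration and is not from the thread.)

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 30, size=(3000, 1))
    y = np.where(X[:, 0] < 10, "false", np.where(X[:, 0] < 20, "true", "badtrue"))

    # Train once without the badtrue region (as if those clients never appeared
    # in the campaign file) and once with it.
    mask = y != "badtrue"
    tree_without = DecisionTreeClassifier(max_depth=2).fit(X[mask], y[mask])
    tree_with = DecisionTreeClassifier(max_depth=2).fit(X, y)

    print(export_text(tree_without, feature_names=["X"]))  # single split near X = 10
    print(export_text(tree_with, feature_names=["X"]))     # extra split near X = 20
    ```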

    I was not specifically talking about "badtrue"; I was referring to "unknownfalse", but the example above applies just as well. BUT, consider that a case can also be "unknowntrue", get it? I think we have quite a dilemma here... how do you build a model that excludes the unknown falses without also excluding the unknown trues? Although, being positive cases, the unknown trues should technically be closer to the known trues, right? Therefore the unknown trues should get higher scores than the ruled-out falses.

    To make myself clearer: by "well known traditional segmentation rules" I don't mean risk-based exclusions in insurance companies, but rather things like offering car insurance to someone who doesn't have a car (staying with the insurance example). If the model is built only on CAR = YES clients, then CAR isn't actually used, and if the model is applied to the whole database we will surely find clients with good predictor values plus CAR = NO. Of course, such an obvious problem would be spotted, but I'm trying to make a point: there might be other rules that aren't as obvious, as hard, or as regular as the car example. If the model contradicts a more flexible factor, it will probably go unnoticed.
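
    (The same effect in a few lines of scikit-learn, purely illustrative with made-up data: if CAR is constant in the training data, the model cannot learn a rule from it, and CAR = NO clients in the full base can still get high scores.)

    ```python
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Made-up data: every training client owns a car, so the "car" column is
    # constant and carries no learnable signal.
    train = pd.DataFrame({
        "car":   [1, 1, 1, 1, 1, 1],
        "usage": [1, 2, 3, 7, 8, 9],
        "label": [0, 0, 0, 1, 1, 1],
    })
    model = LogisticRegression().fit(train[["car", "usage"]], train["label"])

    # Clients from the full base with no car but otherwise "good" predictor
    # values still receive high scores, because no car rule was ever learned.
    full_base = pd.DataFrame({"car": [0, 0], "usage": [8, 9]})
    print(model.predict_proba(full_base[["car", "usage"]])[:, 1])
    ```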

    I think that a multiple evaluation setup, one run including non-campaign clients and one not including them, would probably be closer to the actual results, but I can't quite figure it out. We are talking, OF COURSE, about a case where the company wishes to use the model and the model only, not to apply it at the end of a segmentation chain.
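
    (One hedged way to set up such a double evaluation in Python, assuming two hold-out tables: one built from campaign clients only and one padded with never-contacted clients labelled "false". All names are placeholders.)

    ```python
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    def evaluate_on_both(model, holdout_campaign: pd.DataFrame, holdout_padded: pd.DataFrame) -> dict:
        """Compare the same fitted model on a campaign-only hold-out and on a
        hold-out padded with never-contacted clients labelled "false"."""
        results = {}
        for name, df in [("campaign_only", holdout_campaign), ("padded", holdout_padded)]:
            scores = model.predict_proba(df.drop(columns="label"))[:, 1]
            results[name] = roc_auc_score(df["label"] == "true", scores)
        return results
    ```

    A large gap between the two numbers would at least be a hint that the campaign-only evaluation is not representative of the full client base.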

    What do you think?

    Thanks :)

    Cheers.
  • IngoRM Administrator, Moderator, Employee, RapidMiner Certified Analyst, RapidMiner Certified Expert, Community Manager, RMResearcher, Member, University Professor Posts: 1,751 RM Founder
    Hi,

    What do you think?
    Without being engaged in a project with you, actually working on your data and having the chance to check exactly what the concrete problems are: nothing  :)

    You made your point and it sounds reasonable. But without knowing every necessary detail I cannot really give you reliable hints or share much of my experience here, sorry. Be assured, though, that even a multi-stage process is easily possible with RapidMiner, and if you are convinced that this is the right way and it really turns out to work for your particular problem: go for it! Or hire somebody from Rapid-I to analyze the data and the inherent analysis problems, and let's set up the processes and models together. It's up to you  ;)

    Cheers,
    Ingo
  • Rapidito Member Posts: 13 Contributor II
    Hello Ingo, thank you for your reply.

    I was actually trying to discuss what a good evaluation for a general sales response model would look like, and by evaluation I don't mean post-deployment evaluation; that would be actual results. A "general sales response model" has to deal with a very large client database and quite small campaign databases, probably segmented with specific business-logic rules and with small response rates. Strictly, my question is: if you only evaluate on the small campaign databases, how can you be sure that when you apply the model to the very large client database there won't be a considerable change in the environment? Shouldn't the evaluation contain non-campaign clients as well? That's all.
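
    (One way to quantify that worry before deployment is a simple drift check between the small campaign file and the full base, for example a Population Stability Index per predictor. The sketch below is my own illustration, not something from any book or from RapidMiner; the file names are placeholders.)

    ```python
    import numpy as np
    import pandas as pd

    def psi(expected: pd.Series, actual: pd.Series, bins: int = 10) -> float:
        """Population Stability Index between two numeric samples; values above
        roughly 0.25 are commonly read as a significant shift."""
        expected, actual = expected.dropna(), actual.dropna()
        edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
        edges[0], edges[-1] = -np.inf, np.inf
        e = np.histogram(expected, bins=edges)[0] / len(expected)
        a = np.histogram(actual, bins=edges)[0] / len(actual)
        e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
        return float(np.sum((a - e) * np.log(a / e)))

    # Compare every numeric predictor of the small campaign file against the
    # full client base before trusting an evaluation done on the campaign file alone.
    # campaign = pd.read_csv("campaign_clients.csv"); base = pd.read_csv("all_clients.csv")
    # for col in campaign.select_dtypes("number").columns:
    #     print(col, round(psi(campaign[col], base[col]), 3))
    ```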

    I'm not talking specifically about RapidMiner here, just about data mining in this particular and well-known case, the one seen in many books...

    ... but we all know books aren't very specific on the key points; nobody wants to just give away their little secrets.

    In this case I am trying to arrive at a little-secret conclusion among ourselves.

    Cheers!