Doctors' gut feelings more predictive than machine learning: what is missing?

DocMusher Member Posts: 329   Unicorn
edited December 2018 in Help

Dear RM friends,

I would love to see a discussion, feedback, views or comments on these findings. How could these "gut feelings" be captured and translated into ML?


We are interested as we noticed similar findings in the preparation of this chapter:
  • BalazsBarany Administrator, Moderator, Employee, RapidMiner Certified Analyst, RapidMiner Certified Expert Posts: 465   Unicorn



    This sounds interesting, but I'm not sure if the question they answer there is the right one.

    If the gut feelings just predict the number of tests done on the patients but not the outcome (becoming healthy again), is the system trained to do the right thing? Wouldn't it be more interesting to model the outcome with the gut feelings and the executed tests going into the model as attributes?
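
    Concretely, that reframing (outcome as the label, gut feeling and executed tests as regular attributes) could be sketched as below. This is a hypothetical illustration with invented field names and data, not the setup from the paper; a trivial frequency-based baseline stands in for a real learner:

```python
# Hypothetical patient records: the doctor's gut feeling and the number of
# tests actually run become input attributes; recovery is the label.
records = [
    {"gut_feeling": "bad",  "n_tests": 5, "recovered": False},
    {"gut_feeling": "bad",  "n_tests": 4, "recovered": True},
    {"gut_feeling": "good", "n_tests": 1, "recovered": True},
    {"gut_feeling": "good", "n_tests": 2, "recovered": True},
]

def majority_label(rows, gut_feeling):
    """Trivial baseline: predict the most common outcome among past
    patients who got the same gut-feeling assessment."""
    matching = [r["recovered"] for r in rows if r["gut_feeling"] == gut_feeling]
    return max(set(matching), key=matching.count)

print(majority_label(records, "good"))  # True
```

    In a real process the same table (outcome as label role) would feed any classifier; the point is only what plays the role of label versus attribute.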




  • Michael Administrator, Employee, RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 18  RM Data Scientist

    Wouldn't it be more interesting to model the outcome with the gut feelings and the executed tests going into the model as attributes?

    Alternatively, one could try to break down the number of tests ordered into something more meaningful. Maybe it is possible to extract the number of additional tests ordered with results that lead to a change in treatment. Or the number of additional tests that cause any harm (side-effects).

  • SGolbert RapidMiner Certified Analyst, Member Posts: 344   Unicorn



    I agree with Balazs. While it is surely possible to predict the number of extra tests based on past information (perhaps previous illnesses, hospital stays, data about the family, etc.), we are assuming that the "gut feeling" is correct.


    I have seen enough doctors to categorically doubt that (off topic: when you are ill, read up on diagnostics and treatments yourself; doctors are often wrong, forget things, or are lazy). A model that, based on that prior information, predicts the possible culprit of the symptoms, which must then be tested, would make more sense in my opinion.




  • DocMusher Member Posts: 329   Unicorn
    From my dual perspective (anesthesiologist by day, data scientist by night), gut feeling compensates for the early lack of data, be it signs or symptoms. In an emergency department or critical-care setting, decisions are made in a split second; if not, the patient is dead. Did this patient deliver me sufficient data for my action? No. That is why we worked on rapidly finding the most similar cases to support the decision-making process (we named the project Resemble; you can find more on my ResearchGate page). In fact, at that stage it is still experience that gives us more power than data. Another ongoing problem is the lack of good ontologies. But I have started a workflow for dealing with the data being generated today, not in an ideal world, followed by using it for ML.
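
    The "find the most similar past cases" idea is essentially nearest-neighbour retrieval over a case base. A minimal stdlib sketch, with invented, already-normalised vital-sign features (this is not the Resemble implementation, just the underlying idea):

```python
import math

# Hypothetical case base: each past case is a vector of normalised
# vital signs (e.g. heart rate, blood pressure, SpO2) plus the
# treatment that was chosen for that patient.
case_base = [
    ([0.9, 0.2, 0.7], "treatment A"),
    ([0.1, 0.8, 0.3], "treatment B"),
    ([0.8, 0.3, 0.6], "treatment A"),
]

def most_similar(case_base, query, k=2):
    """Return the k past cases closest to the query by Euclidean distance."""
    ranked = sorted(case_base, key=lambda case: math.dist(case[0], query))
    return ranked[:k]

# A new patient close to the first case retrieves the "treatment A" cases:
print(most_similar(case_base, [0.88, 0.22, 0.68], k=1)[0][1])  # treatment A
```

    Real systems add feature weighting and a proper similarity measure per attribute type, but the retrieval step stays this shape.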
  • DocMusher Member Posts: 329   Unicorn
    This is a fundamental discussion that needs to take place before both communities can really make a difference! Please continue to discuss!
  • DocMusher Member Posts: 329   Unicorn

    Just to add a bit of context on the number of tests requested. Imagine each patient could walk through an MRI scanner every day and be given every available test. This would be a good starting point in the quest for a diagnosis. However, the cost has to be balanced against the added value of each investigation. On the other hand, if a test is available in your setting and you decide not to use it, and that leads to a bad outcome, legal questions can follow as to why the available diagnostic tools were not used. In other words, medicine is a "serious game" of finding a safe path with cost, outcome and legal impact in mind.
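
    That trade-off can be framed as a toy expected-value rule: order a test when the expected benefit of the information, plus the legal risk avoided by ordering it, outweighs its cost. All numbers and the function itself are invented for illustration, not a clinical decision rule:

```python
def should_order(test_cost, p_changes_treatment, benefit_if_changed,
                 legal_risk_if_skipped):
    """Toy rule: order the test when expected benefit (chance the result
    changes treatment, times the benefit of that change, plus the legal
    exposure avoided) exceeds the test's cost."""
    expected_benefit = p_changes_treatment * benefit_if_changed + legal_risk_if_skipped
    return expected_benefit > test_cost

# A cheap test with a fair chance of changing treatment is worth ordering:
print(should_order(test_cost=100, p_changes_treatment=0.3,
                   benefit_if_changed=1000, legal_risk_if_skipped=50))  # True
```

    The hard part in practice is, of course, estimating those probabilities and costs per patient, which is exactly where the data and the ontologies are missing.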


