Forward testing is your only real option, and that applies even to models that generalize well. With regression problems you will often find that the learner settles on the direction of the last value as the best predictor of the next value. In that case the value of the prediction is questionable and is most likely caused by overfitting. You also have to understand your data: if it is noisy, shows little serial correlation, and closely resembles a random walk, yet your testing gives unusually good results, it is safe to assume you have a problem. "Too good to be true" also applies to machine learning.
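A quick way to sanity-check that last point (a minimal sketch, assuming pandas/NumPy; the simulated series is just a placeholder for your own target values):

```python
import numpy as np
import pandas as pd

# Placeholder series; replace with your own target values (e.g., prices).
y = pd.Series(np.cumsum(np.random.randn(500)))  # a simulated random walk

# Lag-1 autocorrelation of the differenced series: a value near zero suggests
# the changes are essentially unpredictable noise (random-walk-like behaviour),
# so unusually good test results deserve extra suspicion.
diff_autocorr = y.diff().dropna().autocorr(lag=1)
print(f"Lag-1 autocorrelation of differences: {diff_autocorr:.3f}")
```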
Nevertheless, this is a good question that everyone has to deal with at some point. The best procedure is to establish a baseline: 1) build your forecast using a default model, 2) determine which learner is most suitable for the data, and 3) forward test on unseen data, as sketched below. Not spending enough time on steps 1 and 3 is where most people go wrong.
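A minimal sketch of steps 1 and 3 (assuming scikit-learn and a lag-1 feature setup; the naive "last value" forecast plays the role of the default model, and the data is a placeholder):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Placeholder univariate series; replace with your own data.
y = np.cumsum(np.random.randn(500))

# Step 1: default/naive model -- predict that the next value equals the last value.
# Step 3: forward test -- fit on the first 80%, evaluate on the untouched tail.
split = int(len(y) * 0.8)
X = y[:-1].reshape(-1, 1)        # lag-1 feature
target = y[1:]
X_train, X_test = X[:split], X[split:]
y_train, y_test = target[:split], target[split:]

naive_mae = mean_absolute_error(y_test, X_test.ravel())   # last value as forecast
model = LinearRegression().fit(X_train, y_train)
model_mae = mean_absolute_error(y_test, model.predict(X_test))

print(f"Naive last-value MAE: {naive_mae:.3f}  |  Model MAE: {model_mae:.3f}")
# If the learner cannot beat the naive baseline on the forward test, its
# apparent in-sample skill is most likely overfitting.
```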
MartinLiebig (Administrator, Moderator, RapidMiner Certified Analyst, RapidMiner Certified Expert, University Professor)
Hi,
My main question is: do I care? Overfitting means that my training performance is better than my testing performance. If my correctly validated test performance is good, I am usually fine.
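As an illustration of that gap (a minimal sketch in scikit-learn rather than RapidMiner; the dataset is just a placeholder):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder dataset; substitute your own data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Some gap between training and test performance is normal; what matters is
# whether the correctly validated test performance is good enough for the task.
print(f"Training accuracy: {model.score(X_train, y_train):.3f}")
print(f"Test accuracy:     {model.score(X_test, y_test):.3f}")
```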
BR,
Martin
- Sr. Director Data Solutions, Altair RapidMiner - Dortmund, Germany
I think cross-validation is the standard baseline approach to measuring your performance. As @mschmitz says, a bit of overfitting is almost inevitable with any ML algorithm. The question isn't really "is this model overfit?" as much as it is "how will this model perform on unseen data?" Cross-validation is the best way to answer that question while still making use of as much information as possible for both training and testing your model.
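A minimal sketch of that idea (assuming scikit-learn; the dataset and learner are placeholders):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data and learner; substitute your own.
X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier(random_state=0)

# 10-fold cross-validation: every example is used for both training and testing
# (in different folds), and the mean score estimates performance on unseen data.
scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```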
Brian T.
Lindon Ventures
Data Science Consulting from Certified RapidMiner Experts
To add to the previous answers: use common sense.
If you get an error of 0.001 or an AUC of 99.95% on a test set, then something is almost certainly wrong: any "too good to be true" result generally indicates overfitting. Also, use a correlation matrix to see whether some attributes correlate too strongly with the label.
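One way to run that check (a minimal sketch, assuming a pandas DataFrame with a numeric target column; the file name and the column name "label" are hypothetical):

```python
import pandas as pd

# Hypothetical input; "label" stands in for your target column.
df = pd.read_csv("your_data.csv")

# Correlation of every numeric attribute with the label, strongest first.
# An attribute that correlates almost perfectly with the label often signals
# leakage and explains "too good to be true" test results.
corr_with_label = (
    df.corr(numeric_only=True)["label"]
      .drop("label")
      .abs()
      .sort_values(ascending=False)
)
print(corr_with_label.head(10))
```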
Vladimir
http://whatthefraud.wtf