Model validation performance

Muhammed_Fatih_ Member Posts: 93 Maven
Hello everyone, 

which validation method is quicker (with regard to the learning and testing phase) for classification models: cross-validation or the classical split validation (with a 70:30 split)? 

Thank you in advance for your help! 

Best regards, 


Best Answers

  • 
    varunm1 Moderator, Member Posts: 1,207 Unicorn
    edited November 2019 Solution Accepted
    Split validation is quicker: it builds the model only once and then tests it on the hold-out set. In the case of cross-validation, the model is built k + 1 times (k is the number of folds; the extra build is the final model trained on the full data set).

    I haven't encountered any special case where cross-validation performed faster than split validation, and I don't think it happens if all other settings (feature selection, hyperparameters, etc.) are the same.

    Maybe if you use a processor with multiple cores and the cross-validation folds run in parallel, there might be a chance, depending on the fold sizes. But in general, the above holds.

    Hope this helps.

    Be Safe. Follow precautions and Maintain Social Distancing
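To make the build-count argument concrete, here is a minimal sketch with a toy threshold "classifier" (all names hypothetical, not RapidMiner code): split validation fits the model once, while k-fold cross-validation fits it k times plus one final model on the full data set.

```python
import random

random.seed(0)

# Toy data set: (feature, label) pairs from two well-separated classes.
data = [(random.gauss(mu, 1.0), mu > 0) for mu in
        [random.choice([-2.0, 2.0]) for _ in range(100)]]

fit_count = 0  # counts how often a model is built

def fit(train):
    """Build a trivial threshold model; count each build."""
    global fit_count
    fit_count += 1
    pos = [x for x, y in train if y]
    neg = [x for x, y in train if not y]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2.0

def accuracy(model, test):
    return sum((x > model) == y for x, y in test) / len(test)

# Classical split validation (70:30): the model is built exactly once.
split = int(0.7 * len(data))
fit_count = 0
acc_split = accuracy(fit(data[:split]), data[split:])
builds_split = fit_count            # 1 build

# 10-fold cross-validation: one build per fold, plus a final model
# trained on the full data set.
k = 10
fit_count = 0
fold = len(data) // k
accs = []
for i in range(k):
    test = data[i * fold:(i + 1) * fold]
    train = data[:i * fold] + data[(i + 1) * fold:]
    accs.append(accuracy(fit(train), test))
fit(data)                           # final model on all rows
builds_cv = fit_count               # k + 1 builds
```

With a learner whose training time dominates, the runtime ratio is therefore roughly (k + 1) : 1 in favor of split validation, before any parallelization.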

  • 
    MartinLiebig Administrator, Moderator, Employee, RapidMiner Certified Analyst, RapidMiner Certified Expert, University Professor Posts: 3,517 RM Data Scientist
    Solution Accepted
    Cross-validation is the more accurate estimator of the true model performance.

    - Sr. Director Data Solutions, Altair RapidMiner -
    Dortmund, Germany
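One intuition for why cross-validation estimates the true performance more accurately: every row is used for testing exactly once, whereas a single 70:30 split scores the model on only 30% of the rows. A minimal sketch (index bookkeeping only, no real learner):

```python
n = 100                      # toy data set size
indices = list(range(n))

# Classical 70:30 split: only the last 30 rows ever appear in a test set.
split_test = set(indices[int(0.7 * n):])

# 10-fold cross-validation: each row lands in exactly one test fold.
k = 10
fold = n // k
cv_test = [set(indices[i * fold:(i + 1) * fold]) for i in range(k)]
tested_once = set().union(*cv_test)

print(len(split_test))       # 30 rows scored by split validation
print(len(tested_once))      # 100 rows scored by cross-validation
```

Because the cross-validation estimate averages k fold performances over all rows, it depends far less on which rows happen to fall into a single test partition.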


  • 
    Muhammed_Fatih_ Member Posts: 93 Maven
    Hi varunm, 

    thank you for your answer! An additional question: is it possible to say that one of the two validation processes (split validation vs. cross-validation) performs better in general with regard to learning and testing? 

    Best regards! 