dataset for parameter optimization

makakmakak Member Posts: 13 Contributor II
edited November 2018 in Help
Hi all,

the ideal situation is to have three separate sets: one for training, one for testing (parameter optimization), and one for validation. What if I train on 70%, optimize parameters on the remaining 30%, and finally evaluate performance on the whole dataset (100%) with 10-fold cross-validation? Is this correct, or am I risking some overfitting this way?
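To make the setup above concrete, here is a minimal sketch of the index bookkeeping it implies: a 70/30 split for training vs. parameter tuning, and 10-fold cross-validation splits over the full dataset. The function names and the pure-Python style are illustrative, not any particular tool's API.

```python
import random

def split_indices(n, train_frac=0.7, seed=42):
    """Shuffle row indices and split them into a training part
    and a hold-out part used for parameter optimization."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cut = int(n * train_frac)
    return idx[:cut], idx[cut:]

def kfold_indices(n, k=10, seed=42):
    """Yield (train, test) index lists for k-fold cross-validation
    over all n rows; every row lands in exactly one test fold."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test
```

Note that in this scheme the 10-fold evaluation reuses the same 30% that was used to pick the parameters, which is exactly where the overfitting risk in the question comes from.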
And one more small question, maybe a little off topic: I always get exactly the same micro and macro average from cross-validation. Is this OK, or does it seem suspicious?
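For reference, micro averaging pools the per-class counts before computing the metric, while macro averaging computes the metric per class and then takes the unweighted mean, so on imbalanced data they generally differ. A small self-contained sketch (helper names are my own) showing this for precision:

```python
def class_counts(y_true, y_pred):
    """Per-class (true positive, false positive) counts."""
    classes = sorted(set(y_true) | set(y_pred))
    counts = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        counts[c] = (tp, fp)
    return counts

def micro_precision(counts):
    # Pool TP and FP over all classes, then divide once.
    tp = sum(tp for tp, fp in counts.values())
    fp = sum(fp for tp, fp in counts.values())
    return tp / (tp + fp)

def macro_precision(counts):
    # Per-class precision first, then an unweighted mean.
    precs = [tp / (tp + fp) if tp + fp else 0.0
             for tp, fp in counts.values()]
    return sum(precs) / len(precs)

# Imbalanced toy labels: the two averages disagree.
y_true = [0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 1, 1, 0]
c = class_counts(y_true, y_pred)
print(micro_precision(c))  # 0.75
print(macro_precision(c))  # 0.666...
```

So identical micro and macro averages across every fold is worth a second look; it can legitimately happen with balanced classes and symmetric errors, but it can also mean the tool is reporting one averaging mode under both labels.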

Thank you.