"Maximum theoretical accuracy"
I've been wondering: is there a theory of the maximum accuracy one can achieve on a given example set without over-fitting? I'm looking for something that could tell me, "whatever method you use for regression/classification on this dataset, you'll never achieve more than x% accuracy", or something along those lines.
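To make the idea concrete, here is a toy sketch (entirely synthetic data, with a label-noise rate I picked myself) of the kind of bound I mean: if each label is flipped with probability p = 0.2, then even a model that perfectly recovers the true rule can only average about 80% accuracy, no matter what learner is used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example set: the true label is a deterministic function of x,
# but each observed label is flipped with probability p (label noise).
p = 0.2                       # hypothetical noise rate, chosen for illustration
n = 100_000
x = rng.uniform(-1, 1, size=n)
true_label = (x > 0).astype(int)
flip = rng.random(n) < p
observed = np.where(flip, 1 - true_label, true_label)

# Even an ideal classifier that recovers the true rule "x > 0" can only
# agree with the observed labels where they were not flipped, so its
# accuracy is capped near 1 - p regardless of the modeling method.
ideal_acc = np.mean((x > 0).astype(int) == observed)
print(f"accuracy of the ideal classifier: {ideal_acc:.3f}")  # ~ 1 - p = 0.8
```

In this toy setup the cap is known because I built the noise in by hand; my question is whether anything lets you estimate such a cap for a real dataset.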
I tried a quick Google search but didn't find anything along those lines, so I'm asking the experts now.
Thanks for any leads on the subject!