How are you checking the quality of the model the second time? (What are you comparing against the performance output from the cross-validation?)
If you apply the model to its own training data, those results are simply irrelevant and should not be used. In that scenario you could use k-NN with k = 1 and get a model that is 100 % correct according to this flawed method of validation.
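To make the k = 1 point concrete, here is a minimal, self-contained sketch (a hand-rolled 1-NN on a hypothetical toy dataset, not RapidMiner's implementation): a 1-nearest-neighbour classifier memorises the training set, so when it is "validated" on that same data, each point's nearest neighbour is itself and the accuracy is a meaningless 100 %.

```python
def one_nn_predict(train_X, train_y, x):
    """Predict the label of x as the label of its nearest training point."""
    dists = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in train_X]
    return train_y[dists.index(min(dists))]

# Tiny hypothetical dataset, just for illustration.
X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0), (6.0, 5.0)]
y = ["A", "A", "A", "B", "B"]

# "Validate" on the training data itself: every point finds itself
# at distance 0, so every prediction is trivially correct.
preds = [one_nn_predict(X, y, x) for x in X]
accuracy = sum(p == t for p, t in zip(preds, y)) / len(y)
print(accuracy)  # 1.0 — a perfect score that says nothing about new data
```

This is exactly why the performance must be measured on data the model has never seen, which is what cross-validation's test folds (or a separate hold-out set) provide.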
MetaCost uses the same order for the attribute values as you see in the confusion matrix. You can use the Remap Binominals operator to create a fixed "order" of your binominal attribute values.
I held out a control group from my data. I then apply the model to this control group, and that is where the difference between the two results appears. What I did was save the model that the cross-validation operator returns and then apply only this model to my new data; the result is the desired one, and at an enviable speed. This program is great.
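The hold-out workflow described above can be sketched in a few lines. This is a hedged illustration only: the data, the split, and the trivial majority-class "model" are hypothetical placeholders standing in for the cross-validation operator's trained model, but the shape of the procedure is the same — build the model on the training rows only, then apply the saved model once to the untouched control group for an unbiased estimate.

```python
from collections import Counter

def train_majority(labels):
    """Return a minimal 'model': the most common label in the training data."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical labelled data: the first 8 rows are used for training,
# the last 4 form the held-out control group, never seen during training.
labels = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no",  # training
          "yes", "no", "yes", "yes"]                            # control group
train_y, control_y = labels[:8], labels[8:]

model = train_majority(train_y)         # built on training data only
preds = [model for _ in control_y]      # apply the saved model to new rows
accuracy = sum(p == t for p, t in zip(preds, control_y)) / len(control_y)
print(model, accuracy)  # prints: yes 0.75
```

The key design point is that the control group contributes nothing to model building, so the accuracy measured on it is an honest estimate of performance on genuinely new data.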