Compare predicted results from deep learning to actual values in the validation set
I am a beginner, so I apologize in advance if this is obvious, but the folks in the online chat suggested I post here!
I am trying to train a deep neural network to make a binary prediction ("hard" vs. "easy") from a set of real-valued parameters plus a couple of nominal parameters. I read the labeled training set in from Excel and used a Set Role block to mark the "answer" column, called "class", as the label. Then I passed the data to the Deep Learning block. I took the trained model, fed it into an Apply Model block along with an unlabeled validation set as the other input, and wired both outputs to the results on the far right.

What I get is the assigned predictions in a new column ("Prediction(class)", where "class" was the label). What I need to do now is see how well the model did by comparing the actual values to the predictions. Because the validation set is unlabeled, the actual values aren't present in that Excel file. I do have them in the original data, of course, but I had removed them to make the validation set unlabeled. So basically, I want to evaluate the performance of the predictions.
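In case it clarifies what I'm after, here is a rough Python sketch of the comparison I want to do. The lists `y_true` and `y_pred` are just placeholders I made up: `y_true` stands for the "class" labels I removed from the validation rows, and `y_pred` stands for the "Prediction(class)" column the model produced.

```python
from collections import Counter

# Placeholder data (not my real values):
# y_true = the "class" labels I removed from the validation rows
# y_pred = the "Prediction(class)" column produced by the model
y_true = ["hard", "easy", "hard", "easy", "hard"]
y_pred = ["hard", "easy", "easy", "easy", "hard"]

# Accuracy: fraction of rows where the prediction matches the actual label
correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)

# Confusion counts for the binary "hard"/"easy" problem
confusion = Counter(zip(y_true, y_pred))

print(f"accuracy = {accuracy:.2f}")  # 4 of 5 match in this toy example, so 0.80
print(confusion[("hard", "hard")], "hard rows predicted hard")
print(confusion[("hard", "easy")], "hard rows predicted easy")
```

That's the kind of accuracy / confusion-matrix summary I'm hoping the tool can give me once the predictions are lined up against the labels I held out.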
My wiring and output data are appended.
Thanks so much!