To answer your first question: yes, that's how the two sets are split up. It's perfectly reasonable to override train_test_split with your own method in a subclass and do whatever you want. I'm not sure I understand the reasoning behind training on all folds, though; so you don't want to use a validation set at all? I guess that's something we could support ourselves. Maybe you can try and see what changes to the code are required to make this happen (say, when eval_size=None). (Let's open a more specific issue or pull request for that.)
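Something along these lines is what I have in mind; a minimal sketch, assuming NeuralNet calls self.train_test_split(X, y, self.eval_size) to build its train and validation sets (the method name, signature and return order may differ in your version, so check the source first):

```python
# Sketch only: assumes NeuralNet calls self.train_test_split(X, y, self.eval_size)
# and expects (X_train, X_valid, y_train, y_valid) back.  Verify both against the
# NeuralNet source for the version you are running.
from nolearn.lasagne import NeuralNet


class NoValidationNet(NeuralNet):
    def train_test_split(self, X, y, eval_size):
        if not eval_size:
            # eval_size=None (or 0): train on everything, keep the validation set empty.
            return X, X[:0], y, y[:0]
        # Otherwise keep the default splitting behaviour.
        return super(NoValidationNet, self).train_test_split(X, y, eval_size)
```

Whether the rest of the training loop is happy with an empty validation set is exactly the part that would need checking.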
Regarding the second question: just take a look at the train_history_ attribute; it's all in there.
Ok - Daniel. When you use KFold, every iteration of the KFold gives you a different set of train indices and validation indices. I'm sure you're with me, but just to emphasize: if I'm doing 3-fold CV on a 9-element array, the train indices from the first iteration may be 1,2,3,4,5,6 and the validation indices 7,8,9. In the second iteration the train indices may be 2,3,4,5,6,7 and 1,8,9 will be the validation indices, and so on. If I take only the indices from the first iteration, I will never train on some of the data.

So the generic approach is to train and validate on all the KFold iterations for each model (set of parameters), store the validation results for all the iterations, and use statistical measures over the folds to compare the validation errors of the different models. Say you only use one iteration of the KFold: you might get a validation error of 7% on one model and 6% on another, and you would choose the second model. But that only compares validation on one particular subset of the training instances, so it's not quite a fair comparison. Better is something like 10-fold CV: collect the 10 validation errors from the 10 iterations, and then compare the distribution of those errors across the different models.
I will check the train_history_ attribute, but I guess it will not have the kind of history I just mentioned.
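For concreteness, the comparison I mean looks roughly like this; a sketch using scikit-learn's KFold, where build_net() is a hypothetical factory that returns a freshly initialised model for a given set of hyperparameters, and plain MSE stands in for whatever validation metric you care about:

```python
import numpy as np
from sklearn.model_selection import KFold  # sklearn.cross_validation.KFold in older releases


def cv_errors(build_net, X, y, n_splits=10):
    """Fit a fresh model per fold and return the per-fold validation errors."""
    errors = []
    for train_idx, val_idx in KFold(n_splits=n_splits).split(X):
        net = build_net()                      # hypothetical factory: one new model per fold
        net.fit(X[train_idx], y[train_idx])
        pred = net.predict(X[val_idx])
        errors.append(np.mean((pred - y[val_idx]) ** 2))  # plain MSE, just for illustration
    return np.array(errors)


# Compare two hyperparameter settings by the distribution of their fold errors,
# not by a single validation number:
# err_a = cv_errors(build_net_a, X, y)
# err_b = cv_errors(build_net_b, X, y)
# print(err_a.mean(), err_a.std(), err_b.mean(), err_b.std())
```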
I see what you mean. So for a proper cross-validation I think you want to use sklearn's utilities for that. They will give you a test set that the network never sees, and a train and validation set that the network uses for training and maybe early stopping. The right thing to do is to evaluate the network on a held-out test set that was used neither for training nor for validation. That way you can train however many networks you want using cross-validation.
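For example, something like the following; a sketch where LogisticRegression is only a stand-in for any scikit-learn-compatible estimator (a configured NeuralNet would slot into the same place), and train_test_split here is sklearn's function, not NeuralNet's method of the same name:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=200, random_state=0)

# Carve off a test set that is never touched during training or validation.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

net = LogisticRegression()
fold_scores = cross_val_score(net, X_rest, y_rest, cv=5)  # one score per fold
net.fit(X_rest, y_rest)
test_score = net.score(X_test, y_test)                    # evaluated once, on held-out data
```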
Regarding "validation losses in an epoch (across all the folds)", this isn't something that the network can do currently. NeuralNet trains one set of parameters; if you want to train multiple networks, say to do cross-vadliation, the right thing to do is to train multiple NeuralNets (again, check scikit-learn utilities for that). The train_history_
attribute only has validation losses for the single validation set that the single net uses.
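To pull those per-epoch validation losses out of a single fitted net, something like this should work; in the nolearn versions I've looked at, each entry of train_history_ is a dict with keys such as 'epoch', 'train_loss' and 'valid_loss', but check net.train_history_[0].keys() if yours differs:

```python
def validation_curve(net):
    """Return the per-epoch validation losses of a fitted net and its best epoch."""
    valid_losses = [h['valid_loss'] for h in net.train_history_]
    best = min(net.train_history_, key=lambda h: h['valid_loss'])
    return valid_losses, best['epoch']
```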
Hi Daniel
If I understand correctly, the train/validation split is done once for the whole training, using only the indices from the first fold. Once the indices are stored, for each epoch all the batches within the train set are used to train the net, and then the net is validated against the validation set. Right? Is it possible to train on all the folds and get the average validation loss across all the folds?
Also, continuing: is it possible to have access to all the validation losses in an epoch (across all the folds), so that I can check both the std and the mean rather than only the minimum when picking the best epoch?
Thanks