Since we are already checkpointing the model after every epoch, I suggest we consider evaluating after training finishes. @satra, thoughts?
Aren't you passing `dataset.dataset` to Keras/TensorFlow?
Also, aren't you giving two different datasets for training and evaluation? Currently the last n samples are used for eval, but randomizing is a matter of adjusting the order of files given to `from_files` when creating the records (see the sketch below).
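For context, randomizing the split could be as simple as shuffling the file list before it is handed to `from_files`. A rough sketch, where the variable names and hold-out size are illustrative and the `from_files` call is left schematic since its exact signature depends on the installed nobrainer version:

```python
import random

# filepaths: list of (volume_path, label_path) pairs gathered for record creation
random.seed(42)            # fix the seed so the split is reproducible
random.shuffle(filepaths)  # randomize order before splitting

n_eval = 10                               # illustrative hold-out size
train_paths = filepaths[:-n_eval]
eval_paths = filepaths[-n_eval:]

# Pass each split to from_files when creating the records, e.g.:
# dataset_train = Dataset.from_files(train_paths, ...)
# dataset_eval = Dataset.from_files(eval_paths, ...)
```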
Addressed saving predictions on test images as PNG files here: https://github.com/neuronets/nobrainer_training_scripts/commit/b9c18dadffaedc2c9bd2f12ccc4ab9485a9d1bb6
For example, suppose I want to evaluate random samples at the end of each epoch. I envision using a custom callback (see the sketch after this paragraph). However, the problem with this approach is that the nobrainer dataset object cannot be indexed, as it does not have a `__len__` method, so we cannot randomly sample the test examples. If we pass the list of test files instead, it entails calling `nib.load`, one-hot encoding the labels, and then creating the dataset object, which can then be passed to `model.evaluate`.
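A minimal sketch of what I have in mind: the callback name, the `eval_dataset` argument, and the `steps` parameter are illustrative, assuming a batched `tf.data.Dataset` prepared ahead of time rather than the nobrainer dataset wrapper itself.

```python
import tensorflow as tf

class EvalOnEpochEnd(tf.keras.callbacks.Callback):
    """Run model.evaluate on a held-out dataset after every epoch.

    `eval_dataset` is assumed to be a batched tf.data.Dataset built from the
    test records; the names here are illustrative, not part of nobrainer.
    """

    def __init__(self, eval_dataset, steps=None):
        super().__init__()
        self.eval_dataset = eval_dataset
        self.steps = steps

    def on_epoch_end(self, epoch, logs=None):
        # self.model is set by Keras once the callback is attached to fit()
        results = self.model.evaluate(
            self.eval_dataset, steps=self.steps, verbose=0, return_dict=True
        )
        print(f"epoch {epoch + 1} eval: {results}")

# Usage (illustrative):
# model.fit(dataset_train.dataset, epochs=n_epochs,
#           callbacks=[EvalOnEpochEnd(dataset_eval.dataset, steps=eval_steps)])
```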