Hey! I was testing some of the checkpoints I obtained during training with an offline script, and the validation accuracies seem to be consistently lower than the values reported during training and saved in the checkpoint filenames. For example:
cfp_fp_epoch_10_batch_20000_0.993286.h5
I have this checkpoint, which represents the best validation checkpoint for CFP-FP, so I expected the accuracy to be 0.993286 as given in the filename. However, when I run:

```python
import evals
from tensorflow.keras.models import load_model

full_h5 = r'..\checkpoints\cfp_fp_epoch_10_batch_20000_0.993286.h5'
bb = load_model(full_h5)
# Evaluate on the CFP-FP bin, reversing the channel axis (RGB <-> BGR) before inference
eea = evals.eval_callback(lambda imms: bb(imms[:, :, :, ::-1]), r'C:\development\VBMatching\RecTool_FinalFix\cfp_fp.bin', batch_size=32)
eea.on_epoch_end()
```

the output accuracy is 0.990000. This behaviour is consistent across every checkpoint I've tested so far. Could there be some discrepancy between how the evaluation is done during training and how I'm trying to replicate it offline?
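As an aside, the trailing float in the checkpoint name is the validation accuracy recorded at save time; a quick illustrative sketch of pulling it out for comparison (the regex here is my own, not something from the repo):

```python
import os
import re

full_h5 = r'..\checkpoints\cfp_fp_epoch_10_batch_20000_0.993286.h5'
# The float just before ".h5" is the validation accuracy recorded when the checkpoint was saved
expected_acc = float(re.search(r'(\d+\.\d+)\.h5$', os.path.basename(full_h5)).group(1))
print(expected_acc)  # 0.993286
```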
Technically, and in most of my practice, they should be the same...
I'm not sure why you are using `lambda imms: bb(imms[:, :, :, ::-1])` here. How does the accuracy look using simply `evals.eval_callback(bb, "xxx/bin")`?
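A minimal sketch of that comparison, reusing the checkpoint and `cfp_fp.bin` paths from the snippet above (this assumes `evals.eval_callback` accepts the model or a callable interchangeably, as the original code suggests):

```python
import evals
from tensorflow.keras.models import load_model

bb = load_model(r'..\checkpoints\cfp_fp_epoch_10_batch_20000_0.993286.h5')
bin_path = r'C:\development\VBMatching\RecTool_FinalFix\cfp_fp.bin'

# Plain evaluation: feed images to the model as-is
evals.eval_callback(bb, bin_path, batch_size=32).on_epoch_end()

# Original variant: reverse the channel axis (RGB <-> BGR) before inference
evals.eval_callback(lambda imms: bb(imms[:, :, :, ::-1]), bin_path, batch_size=32).on_epoch_end()
```

If the two numbers differ noticeably, the channel flip is a likely source of the gap, since the training-time evaluation would have fed images in whichever channel order the data pipeline produced.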