leondgarse / Keras_insightface

Insightface Keras implementation
MIT License

Validation accuracy discrepancy #97

Open jcsm89 opened 2 years ago

jcsm89 commented 2 years ago

Hey! I was testing some of the checkpoints I obtained during training with an offline script, and the validation accuracies seem to be consistently lower than the values reported throughout training and saved in the checkpoint filenames. For example:

cfp_fp_epoch_10_batch_20000_0.993286.h5

I have this checkpoint, which represents the best validation checkpoint for CFP-FP, so I expected the accuracy to be 0.993286 as stated in the filename. However, when I run:

```python
full_h5 = r'..\checkpoints\cfp_fp_epoch_10_batch_20000_0.993286.h5'
bb = load_model(full_h5)
eea = evals.eval_callback(lambda imms: bb(imms[:, :, :, ::-1]), r'C:\development\VBMatching\RecTool_FinalFix\cfp_fp.bin', batch_size=32)
eea.on_epoch_end()
```

The output accuracy is 0.990000. This behaviour is consistent across every checkpoint I've tested so far. Could there be some discrepancy between how this is done during training and how I'm trying to replicate it offline?

leondgarse commented 2 years ago

Technically, and most of the time in my practice, they should be the same... I'm not sure why you are using lambda imms: bb(imms[:, :, :, ::-1]) here. What accuracy do you get using simply evals.eval_callback(bb, "xxx/bin")?
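For anyone following along: the slice in the lambda reverses the last (channel) axis, i.e. it swaps RGB and BGR ordering. If the training-time evaluation fed images to the model without this flip, adding it offline changes the inputs and could plausibly account for a small accuracy gap. A minimal NumPy sketch of what the slice does (the batch here is synthetic, NHWC layout, just for illustration):

```python
import numpy as np

# Hypothetical batch of 2 images, 4x4 pixels, 3 channels (NHWC layout).
batch = np.arange(2 * 4 * 4 * 3).reshape(2, 4, 4, 3)

# imms[:, :, :, ::-1] reverses only the channel axis, swapping RGB <-> BGR.
flipped = batch[:, :, :, ::-1]

# The channels of each pixel come back in reverse order.
assert (flipped[0, 0, 0] == batch[0, 0, 0][::-1]).all()

# Applying the flip twice restores the original array.
assert (flipped[:, :, :, ::-1] == batch).all()
```

So a quick sanity check is to evaluate the same checkpoint once with the flip and once without, and see which number matches the one in the filename.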