Zongwei97 opened this issue 3 years ago
Thank you for the brilliant work.
I am reproducing the results on GOT-10k. With the checkpoint from epoch 19, I cannot get the same result as presented in the paper (there is a huge gap).
Should I test with each saved checkpoint from 10 to 19?
Thanks for your help.
Hi, all the results in our paper were tested with the checkpoint from epoch 20.
BTW, our new model achieves much higher precision; you are welcome to try it: https://github.com/ohhhyeahhh/SiamGAT
@ohhhyeahhh Hello, I have been running SiamCAR recently. I trained for 20 epochs, and the log file shows that 20 epochs were trained, but in the end only 19 .pth files (1-19) were saved. To save the 20th .pth file, I added the following under `if epoch == cfg.TRAIN.EPOCH:`:
torch.save({'epoch': epoch, 'state_dict': model.module.state_dict(), 'optimizer': optimizer.state_dict()}, cfg.TRAIN.SNAPSHOT_DIR + '/checkpoint_e%d.pth' % (epoch))
but still only 19 .pth files were saved. What could be the reason? I hope the author can help explain.
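One common cause of a missing final checkpoint is an off-by-one between the loop's epoch index and `cfg.TRAIN.EPOCH`: if the training loop is zero-indexed (`for epoch in range(cfg.TRAIN.EPOCH)`), the condition `epoch == cfg.TRAIN.EPOCH` never fires. This is only a hypothesis about the cause, since the actual loop in the repo is not shown here; the sketch below uses a stand-in `TRAIN_EPOCH` constant and a `saved_epochs` helper that are not part of the SiamCAR code.

```python
# Hypothetical sketch of the off-by-one that could prevent the final
# checkpoint from being saved. TRAIN_EPOCH stands in for cfg.TRAIN.EPOCH.
TRAIN_EPOCH = 20

def saved_epochs(zero_indexed):
    """Return the epochs for which the final-checkpoint condition fires."""
    if zero_indexed:
        epochs = range(TRAIN_EPOCH)          # 0 .. 19
    else:
        epochs = range(1, TRAIN_EPOCH + 1)   # 1 .. 20
    saved = []
    for epoch in epochs:
        # The condition from the snippet above:
        if epoch == TRAIN_EPOCH:
            saved.append(epoch)  # a real loop would torch.save(...) here
    return saved

print(saved_epochs(zero_indexed=True))   # [] -- condition never fires
print(saved_epochs(zero_indexed=False))  # [20]
```

If the loop is indeed zero-indexed, checking `epoch == cfg.TRAIN.EPOCH - 1` (or saving after the loop ends) would capture the last epoch.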
@WangJun-ZJUT