Closed · fengliqiu closed this issue 2 years ago
Hi @fengliqiu,
Thanks for taking an interest in our work.
If I remember this correctly, I conducted cross-validation specifically for the number of epochs and found that the model overfits quite badly beyond 20 epochs. The mAP actually drops as training proceeds further. So 20 epochs is sort of a naïve early stopping strategy. But feel free to attempt longer training schedules yourself!
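The fixed 20-epoch schedule described above can also be expressed as a patience-based early-stopping loop, which is one way to probe longer schedules without overfitting. A minimal sketch, assuming hypothetical `train_one_epoch` and `evaluate_mAP` callables (these placeholders are not part of the repository's actual training code):

```python
def train_with_early_stopping(train_one_epoch, evaluate_mAP,
                              max_epochs=100, patience=3):
    """Stop once validation mAP fails to improve for `patience` epochs.

    `train_one_epoch` and `evaluate_mAP` are placeholder callables:
    the first runs one epoch of training, the second returns the
    current validation mAP as a float.
    """
    best_map, best_epoch, stalls = 0.0, 0, 0
    for epoch in range(1, max_epochs + 1):
        train_one_epoch()
        val_map = evaluate_mAP()
        if val_map > best_map:
            # New best checkpoint: reset the stall counter.
            best_map, best_epoch, stalls = val_map, epoch, 0
        else:
            stalls += 1
            if stalls >= patience:
                break  # mAP has plateaued or started dropping
    return best_epoch, best_map
```

With a mAP curve that peaks early and then decays (as reported above), this loop stops a few epochs past the peak and keeps the best checkpoint's epoch and score.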
Cheers, Fred.
Thanks for your reply @fredzzhang. Wonderful work! I understand it now.
Thank you. Cheers!
Hi,
Thanks for your great work; it inspired me a lot. I noticed that in your paper the model converges to a good result within 20 epochs. I wonder if you have tried training the models for more epochs (e.g. 100 epochs or more) to get better results? To be honest, I really want to know the limit (or the best achievable results) of the model. For example, with more epochs or a more carefully designed training strategy, could the model reach a higher mAP?