Closed: echo5380 closed this issue 11 months ago
I chose the ckpt after the last epoch.
Thanks!
Hello, I want to confirm: is this result obtained by performing validation on the val set?
The checkpoint used is the final one (the 12th), not the best checkpoint on the validation set. Sorry for any confusion: the default configs train on the trainval set and validate on the val set. The validation set is only there to catch unexpected behaviour during training, such as gradients becoming NaN. Hence, you can simply use the last checkpoint for testing.
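For reference, here is a minimal sketch of what that default setup looks like in an mmrotate 0.x-style config; the dataset paths and intervals below are placeholders, not the exact values from the provided lsk_s_ema_fpn_1x_dota_le90.py:

```python
# Sketch only: train on trainval, validate on val, keep the last checkpoint
# for testing. Paths are hypothetical placeholders.
data_root = 'data/split_ss_dota/'  # assumed location of the split DOTA data

data = dict(
    # training uses the trainval split
    train=dict(
        type='DOTADataset',
        ann_file=data_root + 'trainval/annfiles/',
        img_prefix=data_root + 'trainval/images/'),
    # val is only monitored to catch problems (e.g. NaN gradients) during training
    val=dict(
        type='DOTADataset',
        ann_file=data_root + 'val/annfiles/',
        img_prefix=data_root + 'val/images/'),
    # the DOTA test split has no public labels; predictions are submitted to the
    # official evaluation server
    test=dict(
        type='DOTADataset',
        ann_file=data_root + 'test/images/',
        img_prefix=data_root + 'test/images/'))

runner = dict(type='EpochBasedRunner', max_epochs=12)  # 1x schedule = 12 epochs
evaluation = dict(interval=1, metric='mAP')            # periodic sanity check on val
checkpoint_config = dict(interval=1)                   # epoch_12.pth is used for testing
```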
Based on the above results, I am more confused...
Is this the result of single-scale training and single-scale testing?
yes.
The single-scale test mAP for lsk-s should be around 78. Please check whether your training settings are aligned with the provided configs.
I used 'lsk_s_ema_fpn_1x_dota_le90.py' and set lr=0.0001 on 4 GPUs; no other parameters were changed. I checked the config file and found no problems. The mAP on the val set is normal, but the mAP on the test set is not. I am now testing your checkpoint on the DOTA-v1.0 test set to check the results.
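Concretely, the only change is roughly this override (a sketch of my derived config; it relies on standard mmcv config inheritance, so everything other than the learning rate is taken from the provided file):

```python
# Sketch of the derived config used for 4-GPU training: inherit everything from
# the provided config and override only the learning rate. With mmcv config
# inheritance, optimizer fields not listed here (type, weight decay, etc.)
# are kept from the base config.
_base_ = ['./lsk_s_ema_fpn_1x_dota_le90.py']

optimizer = dict(lr=0.0001)  # only the lr is changed for the 4-GPU run
```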
Branch
master branch https://mmrotate.readthedocs.io/en/latest/
📚 The doc issue
Hello author, may I ask how the checkpoints of the model you provided were selected? Do you train on trainval, perform validation on val every 3 epochs, and take the epoch with the highest validation mAP? Or do you take the epoch with the lowest training loss?
Suggest a potential alternative/fix
No response