Closed shwoo93 closed 2 years ago
Hi, please use the input resolution (1080, 1080) for testing to reproduce the reported number. You can also refer to PR #40.
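For reference, a minimal sketch of what the test-time resolution change could look like in an mmdetection-style config; the exact transform names and normalization values here are assumptions, not the repo's actual config:

```python
# Hypothetical test pipeline assuming mmdetection-style configs:
# images are resized (aspect ratio kept) to fit within (1080, 1080).
test_pipeline = [
    dict(type="LoadImageFromFile"),
    dict(
        type="MultiScaleFlipAug",
        img_scale=(1080, 1080),  # evaluation resolution suggested above
        flip=False,
        transforms=[
            dict(type="Resize", keep_ratio=True),
            dict(type="RandomFlip"),
            dict(type="Normalize", mean=[123.675, 116.28, 103.53],
                 std=[58.395, 57.12, 57.375], to_rgb=True),
            dict(type="Pad", size_divisor=32),
            dict(type="ImageToTensor", keys=["img"]),
            dict(type="Collect", keys=["img"]),
        ],
    ),
]
```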
As suggested, I tried the input resolution (1080, 1080). However, mAP0.5 is still lower than the original: mAP0.5 slightly decreases (13.8 -> 13.6) while mAP0.75 increases (5.5 -> 5.7).
Are you using the weights we uploaded? They should give 16.1 when tested at 1080.
I am not using the provided weights. To reproduce the original scores from scratch, I trained the model with the following config files:

- qdtrack_frcnn_r50_fpn_24e_lvis.py (pre-train)
- ft_qdtrack_frcnn_r50_fpn_24e_tao.py (fine-tune)
While the model trained from scratch gets lower validation scores (mAP0.5 13.8), it provides reasonable test scores (mAP0.5 12.6).
mAP0.5 | mAP0.75 | mAP[0.5:0.95]
---|---|---
12.6 | 4.5 | 5.6
12.4 | 4.5 | 5.2
Thanks for the feedback! The performance usually depends heavily on the detector. What data did you use to train the detector on LVIS?
I used the JSON file provided by the original TAO paper. It uses a combination of LVIS and COCO.
If the detector is the issue, could you provide the pretrained weights of the LVIS detector? Then I could start from those.
We uploaded the final weights. You could try using the weights of the detector part to initialize your model.
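One way to do that, sketched below under the assumption that the tracker stores its detector weights under a `detector.` key prefix in the checkpoint (the prefix and checkpoint layout are assumptions, not confirmed by the repo):

```python
# Hypothetical sketch: pull the detector sub-weights out of the released
# QDTrack checkpoint so they can initialize a standalone detector.
def extract_detector_weights(state_dict, prefix="detector."):
    """Keep only keys under `prefix`, stripping the prefix so the result
    can be loaded into a plain detector with load_state_dict(strict=False)."""
    return {k[len(prefix):]: v for k, v in state_dict.items()
            if k.startswith(prefix)}

# Usage (assuming a PyTorch checkpoint with a top-level "state_dict"):
#   ckpt = torch.load("qdtrack_tao.pth", map_location="cpu")
#   detector_sd = extract_detector_weights(ckpt["state_dict"])
#   detector.load_state_dict(detector_sd, strict=False)
```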
Hello,
When I train the model with your code for TAO (i.e., pretrain on LVIS and finetune on TAO-train), I get the following final results on TAO-val, which are lower than the scores reported in the original paper.
Are there any issues that I have to consider for getting the original score?
Thanks,