xingyizhou / GTR

Global Tracking Transformers, CVPR 2022

Reproducing Transformer Fine Tuning - TAO #27

Open abhik-nd opened 2 years ago

abhik-nd commented 2 years ago

I'm following the instructions to reproduce the transformer head fine-tuning on TAO here: https://github.com/xingyizhou/GTR/blob/master/docs/MODEL_ZOO.md#tao, but I can't get the results reported in the MODEL_ZOO or the paper.

Here are the steps I'm following:

  1. Download and set up the datasets as described here: https://github.com/xingyizhou/GTR/tree/master/datasets
  2. Download the trained detection model `C2_LVISCOCO_DR2101_4x.pth` from the link in the third bullet point of the note section under TAO, and place it in a `models/` directory. The config link in that bullet point is broken, so I'm using the `C2_LVISCOCO_DR2101_4x.yaml` in the `configs/` folder.
  3. Run `python train_net.py --num-gpus 8 --config-file configs/C2_LVISCOCO_DR2101_4x.yaml MODEL.WEIGHTS models/C2_LVISCOCO_DR2101_4x.pth`. This took about 6 days on 8 Titan X GPUs.

The reason I believe it didn't train properly: when I run TAO validation on the output model of the training with `python train_net.py --config-file configs/GTR_TAO_DR2101.yaml --eval-only MODEL.WEIGHTS output/GTR_TAO_first_train/C2_LVISCOCO_DR2101_4x/model_final.pth`, the mAP is 10.6. But when I run TAO validation on the pretrained model `GTR_TAO_DR2101.pth` downloaded from the MODEL_ZOO, with `python train_net.py --config-file configs/GTR_TAO_DR2101.yaml --eval-only MODEL.WEIGHTS models/GTR_TAO_DR2101.pth`, the output is the correct 22.5 mAP, as reported.

Any ideas why the model training isn't working correctly? Am I using the wrong configuration, or something else?

xingyizhou commented 2 years ago

Hi,

Thank you for your interest. In your step 3, are you using `configs/C2_LVISCOCO_DR2101_4x.yaml` or `configs/GTR_TAO_DR2101.yaml`? `configs/C2_LVISCOCO_DR2101_4x.yaml` does not train the tracking part and is used for pretraining only. Can you try `GTR_TAO_DR2101.yaml` instead?
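If I understand the suggestion correctly, the fine-tuning step would presumably look like the following, swapping in the TAO config while keeping the pretrained detection weights (a sketch based on the commands in this thread, not a verified invocation):

```shell
# Fine-tune the tracking head on TAO, initializing from the
# LVIS+COCO-pretrained detector. Paths assume the models/ layout
# described in the issue above.
python train_net.py --num-gpus 8 \
    --config-file configs/GTR_TAO_DR2101.yaml \
    MODEL.WEIGHTS models/C2_LVISCOCO_DR2101_4x.pth
```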

Best, Xingyi