abhik-nd opened this issue 2 years ago
Hi,
Thank you for your interest. In your step 3, are you using configs/C2_LVISCOCO_DR2101_4x.yaml or configs/GTR_TAO_DR2101.yaml? configs/C2_LVISCOCO_DR2101_4x.yaml does not train the tracking part and is used for pretraining only. Can you try GTR_TAO_DR2101?
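For reference, the suggested run might look like the sketch below. This is not a verified command: the config file is the one named in this reply, and the weights path is an assumption based on the pretraining checkpoint mentioned elsewhere in this thread.

```shell
# Sketch (unverified): fine-tune the tracking head with the TAO config,
# initializing from the LVIS+COCO pretraining checkpoint.
# The MODEL.WEIGHTS path is an assumption taken from this thread.
python train_net.py --num-gpus 8 \
    --config-file configs/GTR_TAO_DR2101.yaml \
    MODEL.WEIGHTS models/C2_LVISCOCO_DR2101_4x.pth
```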
Best, Xingyi
I'm following the instructions to reproduce the transformer-head fine-tuning on TAO here: https://github.com/xingyizhou/GTR/blob/master/docs/MODEL_ZOO.md#tao, and I can't seem to get the results reported in the MODEL_ZOO or the paper.
Here are the steps I'm following:
python train_net.py --num-gpus 8 --config-file configs/C2_LVISCOCO_DR2101_4x.yaml MODEL.WEIGHTS models/C2_LVISCOCO_DR2101_4x.pth
This took about 6 days on 8 Titan X GPUs. The reason I believe it didn't train properly is that when I run TAO validation on the output model of the training using:
python train_net.py --config-file configs/GTR_TAO_DR2101.yaml --eval-only MODEL.WEIGHTS output/GTR_TAO_first_train/C2_LVISCOCO_DR2101_4x/model_final.pth
the mAP is 10.6, but when I run TAO validation on the pretrained model GTR_TAO_DR2101.pth downloaded from the MODEL_ZOO:
python train_net.py --config-file configs/GTR_TAO_DR2101.yaml --eval-only MODEL.WEIGHTS models/GTR_TAO_DR2101.pth
the output is the reported 22.5 mAP. Any ideas why the model training isn't working correctly? Am I using the wrong configuration or something?