Sharpiless closed this issue 1 year ago
Hey, did you use relu as the activation function?
```shell
# for dn_deformable_detr: 49.5 AP
python main.py -m dn_deformable_detr \
  --output_dir logs/dab_deformable_detr/R50 \
  --batch_size 1 \
  --coco_path /path/to/your/COCODIR \  # replace with your COCO path
  --resume /path/to/our/checkpoint \  # replace with your checkpoint path
  --transformer_activation relu \
  --use_dn \
  --eval
```
This is the command I used to test:
```shell
python -m torch.distributed.launch --nproc_per_node=2 --master_port=2071 \
  main.py -m dn_dab_deformable_detr \
  --output_dir logs/dn_dab_deformable_detr-exp13/R50 \
  --batch_size 1 \
  --lr 2e-4 \
  --lr_backbone 2e-5 \
  --epochs 12 \
  --target_task large \
  --lr_drop 11 \
  --transformer_activation relu \
  --coco_path datasets/coco \
  --resume checkpoint0049.pth \
  --use_dn --eval
```
It seems you have modified the code and added many other parameters. You can clone our clean code and evaluate with the provided command to reproduce our results.
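For reference, the difference between the two commands can be checked programmatically. This is a minimal sketch: the flag/value pairs are transcribed by hand from the commands in this thread (only the core value-taking flags, omitting paths and the bare `--use_dn`/`--eval` switches).

```python
# Flags from the recommended evaluation command (transcribed from this thread).
recommended = {
    "-m": "dn_deformable_detr",
    "--batch_size": "1",
    "--transformer_activation": "relu",
}
# Flags from the command actually used for testing (transcribed from this thread).
used = {
    "-m": "dn_dab_deformable_detr",
    "--batch_size": "1",
    "--transformer_activation": "relu",
    "--lr": "2e-4",
    "--lr_backbone": "2e-5",
    "--epochs": "12",
    "--target_task": "large",
    "--lr_drop": "11",
}

# Flags present only in the used command (training-time flags, plus the
# custom --target_task, which is not in the recommended command).
extra = {k: v for k, v in used.items() if k not in recommended}
# Flags present in both but with different values (here: the model name).
changed = {k: (recommended[k], used[k]) for k in recommended
           if k in used and used[k] != recommended[k]}

print("extra flags:", sorted(extra))
print("changed flags:", changed)
```

Note that besides the extra flags, the model name itself differs (`dn_deformable_detr` vs `dn_dab_deformable_detr`), so the two commands are not evaluating the same configuration.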
I downloaded checkpoint0049.pth from this url. Judging by the name, it seems to be the model trained for 50 epochs. But I got the results below when testing:
Is this normal?
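On the epoch count: DETR-style training scripts typically name checkpoints `checkpoint{epoch:04}.pth` with a zero-indexed epoch (this naming convention is an assumption here, not confirmed in the thread), so `checkpoint0049.pth` would be the model saved after the 50th epoch finishes:

```python
import re

# Assumed convention: checkpoint{epoch:04}.pth, epoch zero-indexed.
name = "checkpoint0049.pth"
epoch = int(re.match(r"checkpoint(\d+)\.pth", name).group(1))
print("zero-indexed epoch:", epoch)    # → 49
print("epochs completed:", epoch + 1)  # → 50
```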