microsoft / FocalNet

[NeurIPS 2022] Official code for "Focal Modulation Networks"
MIT License

Issue reproducing evaluation metric for FocalNet+DINO+O365pretrain #29

Closed: RockeyCoss closed this issue 1 year ago

RockeyCoss commented 1 year ago

Thank you for your great work! However, I am having difficulty reproducing the evaluation metric for the model open-sourced in link. Specifically, my evaluation result is 0.3 AP lower than the one reported in the README. [image] The command I used to run the evaluation is:

python -m torch.distributed.launch --nproc_per_node=4 main.py \
  --output_dir output/path \
  -c config/DINO/DINO_5scale_focalnet_large_fl4.py \
  --coco_path coco/path \
  --eval --resume checkpoint/path

Could you please help me with this issue? I would be grateful if you could provide some guidance on what I might be doing wrong, or if you could share any additional details about the exact process that you used to compute the evaluation metric. Thank you very much!

jwyang commented 1 year ago

Hi, @RockeyCoss , thanks for your interest!

I think you are evaluating at the default 800x1333 image resolution. Can you change the base config in DINO_5scale_focalnet_large_fl4.py to https://github.com/FocalNet/FocalNet-DINO/blob/main/config/DINO/coco_transformer_hres.py?
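
For reference, a minimal sketch of that change, assuming the config follows the DINO-style convention of pulling in its data-transform settings through a `_base_` list (the actual variable name and entries in DINO_5scale_focalnet_large_fl4.py may differ):

# config/DINO/DINO_5scale_focalnet_large_fl4.py (hypothetical sketch)
# Swap the default 800x1333 transform base for the high-resolution one
# so evaluation matches the numbers reported in the README.
_base_ = [
    'coco_transformer_hres.py',  # was 'coco_transformer.py' (800x1333 default)
    # ... other base configs for the 5-scale FocalNet-L model stay unchanged
]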

songyuc commented 1 year ago

Hi, @jwyang, would you consider merging FocalNet/FocalNet-DINO into this repo?