facebookresearch / adaptive_teacher

This repo provides the source code for "Cross-Domain Adaptive Teacher for Object Detection".

Why does your DA-FRCNN implementation use the multi-scale training trick? #23

Open · tmp12316 opened this issue 2 years ago

tmp12316 commented 2 years ago

Thanks for your work! I recently ran into another question, this time about the input image scale.

As far as I know, the minimum input scale should be 600 for FRCNN-based DAOD frameworks, as shown in https://github.com/krumo/Domain-Adaptive-Faster-RCNN-PyTorch/blob/df0488405a7679552bc2504b973e29178c141b26/configs/da_faster_rcnn/e2e_da_faster_rcnn_R_50_C4_cityscapes_to_foggy_cityscapes.yaml#L24
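
For reference, pinning a single training scale in that style of config is just a one-element `MIN_SIZE_TRAIN`; a minimal sketch of the convention (surrounding keys abbreviated, exact values per the linked file):

```yaml
INPUT:
  MIN_SIZE_TRAIN: (600,)  # one fixed short-side scale, the usual FRCNN-based DAOD setting
  MIN_SIZE_TEST: 600
```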

But it seems that AT uses multi-scale training in all configs? https://github.com/facebookresearch/adaptive_teacher/blob/cba3c59cadfc9f1a3a676a82bf63d76579ab552b/configs/Base-RCNN-C4.yaml#L17
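
For comparison, the line in question looks like detectron2's stock Base-RCNN-C4 setting, where the short side is sampled from several sizes each training iteration, i.e. multi-scale training:

```yaml
INPUT:
  # detectron2 default: randomly pick one short-side size per training iteration
  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
```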

yujheli commented 2 years ago

Sorry, I did not copy the correct Base-RCNN-C4 config that I used internally; I copied the one from detectron2 instead. Will update.
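
In the meantime, a workaround sketch: detectron2-style configs inherit through `_BASE_`, so an experiment config can override the inherited multi-scale default locally (the file path and surrounding keys here are an assumption about this repo's layout):

```yaml
_BASE_: "./Base-RCNN-C4.yaml"
INPUT:
  MIN_SIZE_TRAIN: (600,)  # hypothetical override: pin a single 600px training scale
```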

yujheli commented 2 years ago

Hi @tmp12316, I tested with the correct config file (without the multi-scale trick) and got 45.6 AP@50 on Clipart1k using batch size 4. I will update the experiment with a larger batch size once I have enough local training resources.

tmp12316 commented 2 years ago

@yujheli

Hi, this result seems much more reasonable. Thanks for your kind reply!