open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

GN to BN in auto_assign head, mAP for coco drops 3%. #6991

Open zhangh-dev opened 2 years ago

zhangh-dev commented 2 years ago

After switching from GN to BN in the AutoAssign head, mAP on COCO drops by 3%. Has anyone encountered this problem?
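For context, a minimal sketch (not a full config) of the kind of change being described, assuming the standard AutoAssign config in mmdetection; only the head's `norm_cfg` is swapped from GN to BN:

```python
# Sketch of the GN -> BN swap in the AutoAssign head (standard mmdetection
# config keys assumed; other fields omitted for brevity).
model = dict(
    bbox_head=dict(
        type='AutoAssignHead',
        num_classes=80,
        in_channels=256,
        stacked_convs=4,
        feat_channels=256,
        # default: norm_cfg=dict(type='GN', num_groups=32, requires_grad=True)
        norm_cfg=dict(type='BN', requires_grad=True),  # the swap that causes the mAP drop
    ))
```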

Johnson-Wang commented 2 years ago

A naive BN implementation in a retina-like head structure suffers from a severe statistics mismatch among the different FPN layers (see the explanation of Fig. 7 in https://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_Scale-Equalizing_Pyramid_Convolution_for_Object_Detection_CVPR_2020_paper.pdf).
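A toy illustration of the statistics-mismatch argument (plain PyTorch, not mmdetection code): the head and its BN layers are shared across all FPN levels, so a single BN layer's running statistics end up averaging over feature maps whose per-level distributions differ a lot.

```python
# Toy sketch: one shared BN layer fed by "FPN levels" with very different
# statistics; its running variance becomes a mixture of all levels.
import torch
import torch.nn as nn

shared_bn = nn.BatchNorm2d(256)  # training mode: running stats get updated

# Fake FPN features: same channel count, deliberately different scales per level.
fpn_feats = [torch.randn(2, 256, s, s) * scale
             for s, scale in zip([100, 50, 25, 13, 7], [1.0, 2.0, 4.0, 8.0, 16.0])]

for lvl, feat in enumerate(fpn_feats):
    shared_bn(feat)  # running_mean/running_var accumulate a mixture of all levels
    print(f'level {lvl}: input std {feat.std():.2f}, '
          f'BN running var (mean over channels) {shared_bn.running_var.mean():.2f}')
```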

zhangh-dev commented 2 years ago

> A naive BN implementation in a retina-like head structure suffers from a severe statistics mismatch among the different FPN layers (see the explanation of Fig. 7 in https://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_Scale-Equalizing_Pyramid_Convolution_for_Object_Detection_CVPR_2020_paper.pdf).

Isn't SyncBN currently supported for the head? SyncBN seems to work. Is there any other advice for using BN without the mAP dropping?

Johnson-Wang commented 2 years ago

Yes. SyncBN also does the job.
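For reference, the change is again just the head's `norm_cfg`; a minimal sketch, assuming the same AutoAssign config as above. SyncBN needs distributed training so that statistics are computed over the whole multi-GPU batch rather than per GPU.

```python
# Sketch: request SyncBN for the head convs instead of plain BN
# (other bbox_head fields left at their defaults).
model = dict(
    bbox_head=dict(
        norm_cfg=dict(type='SyncBN', requires_grad=True)))
```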

Johnson-Wang commented 2 years ago

You may also try SepBN (https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/dense_heads/retina_sepbn_head.py) and increase the batch size.
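A toy sketch of the SepBN idea referenced above (illustrative only, not the actual `retina_sepbn_head.py` implementation): convolution weights are shared across FPN levels, but each level keeps its own BatchNorm, so per-level statistics are not mixed.

```python
# Toy sketch of SepBN: shared conv weights, separate BN per FPN level.
import torch
import torch.nn as nn

class SepBNConv(nn.Module):
    def __init__(self, channels=256, num_levels=5):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)  # shared across levels
        self.bns = nn.ModuleList([nn.BatchNorm2d(channels) for _ in range(num_levels)])

    def forward(self, feats):
        # feats: list of per-level FPN feature maps
        return [torch.relu(self.bns[i](self.conv(x))) for i, x in enumerate(feats)]

feats = [torch.randn(2, 256, s, s) for s in (100, 50, 25, 13, 7)]
outs = SepBNConv()(feats)
print([o.shape for o in outs])
```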

zhangh-dev commented 2 years ago

> You may also try SepBN (https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/dense_heads/retina_sepbn_head.py) and increase the batch size.

Looking at your experiments in v2.2.0: for the FCOS head, mAP drops only 0.4% without GN, but for AutoAssign, in my experiments, mAP drops 4% without GN. I don't know why.