zhangh-dev opened this issue 2 years ago
A naive BN implementation in a retina-like head structure faces a severe statistics mismatch among the different FPN layers (see the explanation in Fig. 7 of https://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_Scale-Equalizing_Pyramid_Convolution_for_Object_Detection_CVPR_2020_paper.pdf).
Isn't SyncBN currently supported for the head? SyncBN seems to work. Any other advice for using BN without dropping mAP?
Yes. SyncBN also does the job.
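For reference, in an mmdetection config this amounts to setting `norm_cfg` on the head. A minimal sketch (the surrounding keys are hypothetical and depend on your base config; SyncBN only synchronizes statistics when training on multiple GPUs):

```python
# Hypothetical config fragment: enable SyncBN in the conv stacks of a
# retina-like head. RetinaHead passes norm_cfg down to its ConvModule layers.
model = dict(
    bbox_head=dict(
        type='RetinaHead',
        norm_cfg=dict(type='SyncBN', requires_grad=True),
    )
)
```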
You may also try SepBN (https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/dense_heads/retina_sepbn_head.py) and increase the batch size.
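A sketch of what swapping in that head might look like in a config (the exact values are assumptions; `num_ins` must match the number of FPN outputs, e.g. 5 for the default RetinaNet neck):

```python
# Hypothetical config fragment: RetinaSepBNHead shares conv weights across
# FPN levels but keeps a separate set of BN statistics per level,
# avoiding the cross-level statistics mismatch of naive BN.
model = dict(
    bbox_head=dict(
        type='RetinaSepBNHead',
        num_classes=80,       # COCO classes, assumed
        num_ins=5,            # number of FPN levels feeding the head
        in_channels=256,
        norm_cfg=dict(type='BN', requires_grad=True),
    )
)
```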
In your v2.2.0 experiments, for the FCOS head, removing GN drops mAP by only 0.4%. But for AutoAssign, in my experiment, removing GN drops mAP by 4%. Does anyone know why?
Switching GN to BN in the AutoAssign head drops COCO mAP by 3%. Did you encounter this problem?