biubug6 / Pytorch_Retinaface

Retinaface gets 80.99% on the WiderFace hard val set using mobilenet0.25.
MIT License

What is the difference between Pytorch (same parameter with Mxnet) and Pytorch (original image scale) #14

Open wangergou135 opened 5 years ago

wangergou135 commented 5 years ago

Hi, can anyone tell me what the difference is between Pytorch (same parameter with Mxnet) and Pytorch (original image scale)? How do I train the Pytorch (same parameter with Mxnet) model, i.e., how do I get 79.69% on the WiderFace hard set using Mobilenet0.25?
Thanks!

biubug6 commented 5 years ago

same parameter with Mxnet: the image is rescaled to a certain size range at test time. original image scale: scale = 1 at test time (the image is fed in as-is). If you want to train the model, you can follow readme.md (training part).
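
For reference, a minimal sketch of the two test-time modes. The function name, the 1600/2150 thresholds, and test.jpg are illustrative assumptions, not necessarily the repository's exact code:

import numpy as np
import cv2

def compute_test_scale(img, target_size=1600, max_size=2150, origin_size=False):
    # "original image scale": feed the image in as-is (scale = 1).
    if origin_size:
        return 1.0
    # "same parameter with Mxnet": rescale so the short side reaches
    # target_size, capped so the long side stays within max_size.
    im_size_min = np.min(img.shape[0:2])
    im_size_max = np.max(img.shape[0:2])
    resize = float(target_size) / float(im_size_min)
    if np.round(resize * im_size_max) > max_size:
        resize = float(max_size) / float(im_size_max)
    return resize

img = cv2.imread("test.jpg")  # assumed sample image
resize = compute_test_scale(img)
if resize != 1.0:
    img = cv2.resize(img, None, fx=resize, fy=resize, interpolation=cv2.INTER_LINEAR)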

wangergou135 commented 5 years ago

Thanks, I get it. The official Mxnet code simply resizes the image with the min/max sizes set to [1024, 1920] to get a much better result on the WiderFace hard set.
Have you ever tried adding anchor_size=8 and other tricks, or using another backbone, to improve the result at scale = 1? I will run a test if you haven't.

biubug6 commented 5 years ago

If you only care about faces of a certain scale, you can try controlling the sampling scale or the image size during training. You can also try another backbone.
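
As a concrete (hypothetical) illustration of controlling the sampling scale: the random square-crop augmentation draws its crop size from a list of scale fractions, so restricting that list biases training toward a chosen face scale. The names and values below are assumptions for the sketch, not the repository's exact augmentation code:

import random

PRE_SCALES = [0.3, 0.45, 0.6, 0.8, 1.0]  # default-style crop fractions
LARGE_FACE_SCALES = [0.3, 0.45]          # tighter crops make faces appear larger

def pick_crop_side(short_side, scales=LARGE_FACE_SCALES):
    # Choose the side length of a random square crop as a fraction
    # of the image's shorter side.
    return int(random.choice(scales) * short_side)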

SnowRipple commented 5 years ago

@biubug6 any reason why you don't normalize the input data to the [0, 1] range? There are batch normalization layers in the models, and I've heard that training with BN works better when the input is normalized?
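
For context, the preprocessing in this repository's detection scripts subtracts a per-channel BGR mean rather than dividing by 255, roughly as sketched below (the (104, 117, 123) mean follows common Caffe-style pipelines; treat the exact values as an assumption):

import numpy as np
import cv2
import torch

img = np.float32(cv2.imread("test.jpg"))  # BGR image, values in [0, 255]
img -= (104, 117, 123)                    # zero-center each channel; no /255
img = img.transpose(2, 0, 1)              # HWC -> CHW
img = torch.from_numpy(img).unsqueeze(0)  # add a batch dimension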

wangergou135 commented 5 years ago

> If you only care about faces of a certain scale, you can try controlling the sampling scale or the image size during training. You can also try another backbone.

I will test it by setting the sampling scales to [4, 8, 16]; see the sketch below.
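
One hypothetical way to express that test is to shrink the per-level anchor sizes in the configuration. The default values below follow the repository's mobilenet config as I understand it; the [4, 8] entry is the proposed change:

# Default anchors per FPN level (config.py, mobilenet0.25):
#   'min_sizes': [[16, 32], [64, 128], [256, 512]]
# Hypothetical variant with smaller anchors for tiny faces:
cfg_mnet_small_faces = {
    'min_sizes': [[4, 8], [16, 32], [64, 128]],
    'steps': [8, 16, 32],  # FPN strides, unchanged
}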

lfpan commented 4 years ago

The test_widerface.py in the original (Mxnet) code uses the multi-scale testing shown below. Does "same parameter" here mean that multi-scale testing was also used? If I want to run multi-scale testing in this project, how should the code be modified? Or could you share the code you used for your multi-scale tests? Thanks a lot.

import numpy as np

# im: the input image, loaded elsewhere (e.g. via cv2.imread)

# TEST_SCALES = [500, 800, 1200, 1600]
TEST_SCALES = [500, 800, 1100, 1400, 1700]
target_size = 800
max_size = 1200
im_shape = im.shape
im_size_min = np.min(im_shape[0:2])  # shorter side
im_size_max = np.max(im_shape[0:2])  # longer side
# scale so that the shorter side matches target_size ...
im_scale = float(target_size) / float(im_size_min)
# ... but prevent the bigger axis from exceeding max_size:
if np.round(im_scale * im_size_max) > max_size:
    im_scale = float(max_size) / float(im_size_max)
# pyramid of scale factors relative to the original image
scales = [float(scale) / target_size * im_scale for scale in TEST_SCALES]
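
A minimal sketch of how such a multi-scale test loop could look in this project. detect() and nms() are placeholders for whatever single-scale detector and NMS routine you already have, so treat the whole block as an assumption rather than the author's code:

import numpy as np

def multi_scale_detect(im, detect, nms, scales, nms_thresh=0.4):
    # Run the single-scale detector at every pyramid scale. detect() is
    # assumed to return an (N, 5) array of [x1, y1, x2, y2, score] boxes
    # already mapped back to original-image coordinates.
    dets = np.vstack([detect(im, scale) for scale in scales])
    # Merge overlapping detections from different scales with NMS.
    keep = nms(dets, nms_thresh)
    return dets[keep]

# usage, with the `scales` list computed in the snippet above:
# final_dets = multi_scale_detect(im, detect, nms, scales)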