weiliu89 / caffe

Caffe: a fast open framework for deep learning.
http://caffe.berkeleyvision.org/

Any reason to delete 'sigma' in smooth_L1_loss_layer? #639

Open jianan86 opened 7 years ago

jianan86 commented 7 years ago

Hi, I find that the smooth L1 loss layer in SSD is slightly different from the one in Faster R-CNN. What motivated you to remove 'sigma' from the smooth L1 loss layer? Is the parameter 'sigma' only suitable for the RPN? I would really appreciate it if anyone could give me some references or an explanation.
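For reference, the loss being discussed can be sketched as below. This is a minimal illustration, not the Caffe layer itself: the piecewise definition with `sigma` follows the Faster R-CNN formulation (the py-faster-rcnn RPN config uses `sigma = 3.0`), and setting `sigma = 1` reduces it to the plain smooth L1 used by SSD's `smooth_L1_loss_layer`.

```python
def smooth_l1(x, sigma=1.0):
    """Smooth L1 loss on a scalar residual x.

    Quadratic for |x| < 1/sigma^2, linear beyond that point.
    With sigma = 1 this is the standard smooth L1 (SSD); larger
    sigma narrows the quadratic region (Faster R-CNN's RPN).
    """
    s2 = sigma * sigma
    if abs(x) < 1.0 / s2:
        return 0.5 * s2 * x * x
    return abs(x) - 0.5 / s2


# sigma = 1: quadratic up to |x| = 1, then linear
print(smooth_l1(0.5))        # quadratic branch: 0.5 * 0.25 = 0.125
print(smooth_l1(2.0))        # linear branch: 2.0 - 0.5 = 1.5

# sigma = 3: quadratic region shrinks to |x| < 1/9
print(smooth_l1(0.5, 3.0))   # linear branch: 0.5 - 0.5/9
```

The two branches meet with matching value and slope at |x| = 1/sigma^2, so the loss stays differentiable regardless of the sigma chosen.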

wk910930 commented 7 years ago

I guess the sigma value doesn't change the final results much.

gauenk commented 6 years ago

Yeah, but it seems odd to go out of his way to modify the code and eliminate sigma... If he didn't want it, he could have just set it to one. @weiliu89 Do you mind shedding some light on this?