wy1iu / LargeMargin_Softmax_Loss

Implementation of "Large-Margin Softmax Loss for Convolutional Neural Networks" (ICML'16).

Why is lambda_min not used in Backward? #2

Closed luoyetx closed 7 years ago

luoyetx commented 7 years ago

As the title describes: in the forward pass, lambda is never allowed to be smaller than lambda_min, but in the backward pass it is not compared with lambda_min. Is there a reason for this, or is it just a mistake?
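For context, here is a minimal sketch of the pattern being reported. The names (`lambda_`, `lambda_min_`, `base_`, `gamma_`, `power_`, `iter_`) follow the Caffe-style conventions used in this repo, but the snippet is a simplified illustration, not the actual layer code: the forward pass clamps the annealed lambda at lambda_min, while the backward pass recomputes it without the clamp.

```cpp
// Simplified illustration of the reported inconsistency (not the actual layer code).
// lambda is annealed as base * (1 + gamma * iter)^(-power).
#include <algorithm>
#include <cmath>

struct LambdaSchedule {
  double base_, gamma_, power_, lambda_min_;
  int iter_ = 0;

  // Forward: lambda is clamped so it never drops below lambda_min_.
  double ForwardLambda() {
    ++iter_;
    double lambda = base_ * std::pow(1.0 + gamma_ * iter_, -power_);
    return std::max(lambda, lambda_min_);
  }

  // Backward (as reported): lambda is recomputed WITHOUT the clamp,
  // so once the schedule decays past lambda_min_ the two passes disagree.
  double BackwardLambda() const {
    return base_ * std::pow(1.0 + gamma_ * iter_, -power_);
  }
};
```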

wy1iu commented 7 years ago

There is no need to compare lambda with lambda_min in the backward pass. Once lambda reaches lambda_min, it stays unchanged for the rest of the optimization.

luoyetx commented 7 years ago

Then I think this line and this line won't guarantee that. Since iter_ is always increased by 1 during forward, lambda will eventually become smaller than lambda_min. Right?
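To make that argument concrete, here is a small standalone check. The schedule parameters below (base, gamma, power, lambda_min) are hypothetical values chosen only for illustration, not the ones shipped with the repo; the point is that since iter_ only grows, the unclamped lambda decays monotonically and eventually falls below lambda_min.

```cpp
#include <cmath>
#include <cstdio>

int main() {
  // Hypothetical schedule parameters, chosen only for illustration.
  const double base = 1000.0, gamma = 0.1, power = 1.0, lambda_min = 5.0;
  for (int iter = 0; iter <= 4000; iter += 1000) {
    double lambda = base * std::pow(1.0 + gamma * iter, -power);
    std::printf("iter=%d  unclamped lambda=%.3f  clamped=%.3f\n",
                iter, lambda, std::fmax(lambda, lambda_min));
  }
  // At iter=4000 the unclamped lambda is ~2.49 < lambda_min, while the
  // clamped value stays at 5.0, so a forward pass that clamps and a
  // backward pass that does not would use different lambdas.
  return 0;
}
```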

wy1iu commented 7 years ago

Yes, you are right about this. My apologies. It seems I committed an old version of my code. Without lambda_min, it works fine for MNIST and CIFAR10, which is why I did not notice this mistake. But in the implementation for LFW and CIFAR100, these lines have been deleted; you will need lambda_min to reproduce the performance on LFW and CIFAR100. Really sorry about that. I will fix this bug now.
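For readers who hit the same issue before the fix lands, the straightforward remedy is to apply the lambda_min clamp wherever lambda is recomputed, so forward and backward always see the same value. The sketch below uses the same assumed names as above; the repository's fixed commit is the authoritative version.

```cpp
// Sketch of the consistent version: one helper used by both the forward and
// backward passes, so lambda can never fall below lambda_min in either.
#include <algorithm>
#include <cmath>

double CurrentLambda(double base, double gamma, double power,
                     double lambda_min, int iter) {
  double lambda = base * std::pow(1.0 + gamma * iter, -power);
  return std::max(lambda, lambda_min);
}
```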

wy1iu commented 7 years ago

Thanks very much for reporting the bug. Really appreciate it. :)