wy1iu / LargeMargin_Softmax_Loss

Implementation of "Large-Margin Softmax Loss for Convolutional Neural Networks" (ICML'16).

reproduce cifar result #3

Closed jiangxuehan closed 7 years ago

jiangxuehan commented 7 years ago

I only got an 8.8% error rate when reproducing the cifar10 example with this repository after training for 22000 iterations, and the loss exploded (87.3365) after 23000 iterations. Has anyone met similar problems or successfully reproduced the paper's result?

xiaoboCASIA commented 7 years ago

I also have the same problem on cifar10; the loss explodes after about 12000 iterations.

wy1iu commented 7 years ago

The prototxt actually works fine on my PC. In general, L-Softmax loss is indeed a bit difficult to optimize, since it defines a harder task than the softmax loss. However, the training prototxt file I provide is simply an example of how to use the L-Softmax loss. If your network diverges, you should consider changing parameters such as base, gamma, power, etc. so that lambda decreases more smoothly. There is more than one set of parameters that can achieve the performance reported in the paper. If your network still diverges after that, the last resort is to assign a small positive value to lambda_min, say 0.5 (it is a very rare case that you need to change lambda_min for CIFAR-10). Also note that you should use m=4 in the L-Softmax loss if you want to reproduce the reported performance, while in the CIFAR-10 example prototxt m is set to 2. Thanks for the feedback.
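
For concreteness, below is a minimal sketch of the kind of annealing schedule these parameters suggest: lambda starts large (so the layer behaves close to plain softmax) and decays toward lambda_min as training proceeds. The formula and the numeric values in the sketch are illustrative assumptions, not copied from the layer; check the example prototxt and the layer code in this repo for the actual definition and defaults.

```python
# Sketch of a lambda annealing schedule driven by base / gamma / power /
# lambda_min, as discussed above. The exact formula used by the layer is an
# assumption here; consult the repo's layer implementation for the real one.

def lambda_at(iteration, base=1000.0, gamma=0.12, power=1.0, lambda_min=0.0):
    """Blending weight: a large lambda behaves like plain softmax, while
    lambda near lambda_min applies the full large-margin loss."""
    lam = base * (1.0 + gamma * iteration) ** (-power)
    return max(lam, lambda_min)

# A slower decay (smaller gamma or power) keeps lambda larger for longer,
# which is the "make lambda decrease more smoothly" advice: the network
# trains with something close to plain softmax before the margin fully
# kicks in, and lambda_min > 0 keeps a softmax component at all times.
for it in (0, 1000, 10000, 23000):
    print(it, lambda_at(it, lambda_min=0.5))
```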

wy1iu commented 7 years ago

BTW, I just fixed a bug related to lambda_min. You should use the new version.