davidsandberg / facenet

Face recognition using Tensorflow
MIT License
13.83k stars 4.81k forks

New results: 0.988/0.942 with CASIA #423

Closed JianbangZ closed 7 years ago

JianbangZ commented 7 years ago

This result mostly comes from a modification of the center loss. Instead of updating the centers manually as described in the paper, I made the centers a learnable variable and applied a small amount of weight decay to it. My CASIA has 448k images.
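A minimal sketch of the modification described above, using NumPy for clarity; the function and variable names are illustrative, not taken from the facenet code, and the weight-decay coefficient is a placeholder:

```python
import numpy as np

def center_loss_with_decay(features, labels, centers, weight_decay=1e-4):
    """Center loss with the centers treated as learnable parameters.

    Instead of the manual running-average center update from the
    center-loss paper, the centers receive gradients like any other
    trainable variable, plus a small L2 weight-decay penalty.
    (Illustrative sketch; weight_decay is a hypothetical value.)
    """
    # Pull each sample's embedding toward the center of its class.
    diffs = features - centers[labels]                      # (batch, dim)
    center_term = 0.5 * np.mean(np.sum(diffs ** 2, axis=1))
    # Weight decay pushes the center values themselves toward zero.
    decay_term = weight_decay * np.sum(centers ** 2)
    return center_term + decay_term

# Toy example: 4 samples, 2 classes, 3-dim embeddings.
rng = np.random.default_rng(0)
features = rng.normal(size=(4, 3))
labels = np.array([0, 1, 0, 1])
centers = np.zeros((2, 3))  # learnable; an optimizer would update these

loss = center_loss_with_decay(features, labels, centers)
```

In a TensorFlow training loop the `centers` array would be a trainable variable, so the optimizer updates it jointly with the network weights rather than via the paper's separate update rule.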

Accuracy: 0.988+-0.003
Validation rate: 0.94233+-0.01521 @ FAR=0.00133
Area Under Curve (AUC): 0.999
Equal Error Rate (EER): 0.012

I also have some preliminary results on MsCeleb (4.5 million images). I am still adjusting the hyperparameters, but so far the best numbers I have are:

Accuracy: 0.9963+-0.004
Validation rate: 0.99330+-0.00775 @ FAR=0.0067

ugtony commented 7 years ago

It's an interesting idea to add weight decay to the centers. As reported in #391, before the center-loss update procedure was fixed, the centers were never updated (always zero), which made the center loss equal to the weight-decay loss you apply here. Adding weight decay seems to work, but it's not intuitive to me why the centers should be pushed toward zero. Would it be possible to constrain the centers' L2-norm instead by adding a loss term (somewhat similar to L2-softmax)?
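One way the suggested constraint could look, sketched here under the assumption of a fixed target norm `alpha` as in L2-softmax; the function name and `alpha` value are hypothetical, not from any existing implementation:

```python
import numpy as np

def center_norm_penalty(centers, alpha=10.0):
    """Penalize the deviation of each center's L2-norm from a target
    value alpha, instead of pushing the centers toward zero the way
    plain weight decay does. (Illustrative sketch; alpha is a
    hypothetical hyperparameter.)"""
    norms = np.linalg.norm(centers, axis=1)   # one norm per class center
    return np.mean((norms - alpha) ** 2)
```

With this penalty, centers already lying on the radius-`alpha` sphere contribute zero loss, so the constraint fixes their scale without collapsing them to the origin.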

JianbangZ commented 7 years ago

@ugtony I didn't think hard about why I should apply weight decay. It's just what I usually do: apply weight decay to a trainable layer/variable so it is properly updated over time. I also have the latest MsCeleb results, which are 99.633% accuracy / 99.33% TPR.

qiqiguaitm commented 7 years ago

Could you tell us how you cleaned MsCeleb from 10 million down to 4.5 million images? @JianbangZ

sidgan commented 7 years ago

@JianbangZ Could you please provide your CASIA cleaning script (or the cleaned CASIA dataset) and the code for your modification to the center loss?