happynear / FaceVerification

An Experimental Implementation of Face Verification, 96.8% on LFW.

Train centerloss face with L2 normalization layer #48

Closed xingwangsfu closed 7 years ago

xingwangsfu commented 7 years ago

Hi,

I'm currently trying to train the center loss face model with an L2 normalization layer, more specifically, adding an L2 normalization layer after fc5, before feeding it into the last FC layer. However, after adding this L2 normalization layer, the softmax loss decreases very slowly compared to the model without it.

As others suggested in this issue, the initialization after the L2 normalization layer should be chosen carefully, so I tried uniform, Gaussian, and Xavier initialization. Only uniform makes the softmax loss decrease a little faster, but it is still very slow.

Do you have any idea how to train the center loss face model with an L2 normalization layer? I assume you also used this layer when training the center loss face model on MS-Celeb-1M, since I found this layer in your provided prototxt file, although it is commented out.

Any suggestion is appreciated.
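For reference, the setup described above can be sketched as follows. This is a minimal NumPy illustration (not the repository's Caffe code) of an L2 normalization layer's forward and backward passes; the function names are hypothetical. The backward pass shows one plausible reason training is sensitive after this layer: the incoming gradient is projected orthogonally to the feature and scaled by 1/||x||.

```python
import numpy as np

def l2_normalize_forward(x, eps=1e-12):
    """L2-normalize each row (feature vector) of x: y = x / ||x||."""
    norm = np.sqrt(np.sum(x * x, axis=1, keepdims=True)) + eps
    return x / norm, norm

def l2_normalize_backward(x, norm, grad_y):
    """Backprop through y = x / ||x||.
    dL/dx = (grad_y - y * <grad_y, y>) / ||x||:
    the gradient is projected onto the plane orthogonal to x and
    shrunk by 1/||x||, which makes the layers after normalization
    sensitive to initialization."""
    y = x / norm
    return (grad_y - y * np.sum(grad_y * y, axis=1, keepdims=True)) / norm
```

Note that the output gradient is always orthogonal to the input feature, so only the direction of fc5's output (never its magnitude) receives a learning signal.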

happynear commented 7 years ago

I am writing a paper describing how to train with normalized features. You will see it on arXiv within a month.

happynear commented 7 years ago

The code will also be released after I upload my paper to arXiv.

xingwangsfu commented 7 years ago

Good to know. Thanks.

happynear commented 7 years ago

Hi @xingwangsfu, the paper is uploaded (https://arxiv.org/abs/1704.06369) and the code is released (https://github.com/happynear/NormFace). Hope it helps.
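The core idea in the linked NormFace paper is to normalize both the features and the classifier weights and then multiply the resulting cosine similarities by a scale factor, so the softmax loss regains enough dynamic range to keep decreasing. A minimal sketch of that logit computation (the scale value 30.0 here is illustrative, not taken from this thread):

```python
import numpy as np

def scaled_cosine_logits(feat, weight, s=30.0):
    """Sketch of scaled cosine-similarity logits:
    normalize features (rows) and class weights (columns), then
    scale the cosines by s before feeding them to softmax.
    Without s, logits are confined to [-1, 1] and the softmax
    loss saturates at a high value."""
    f = feat / np.linalg.norm(feat, axis=1, keepdims=True)
    w = weight / np.linalg.norm(weight, axis=0, keepdims=True)
    return s * f.dot(w)  # logits bounded by [-s, s]
```

With s = 1 this reduces to the plain normalized setup described earlier in the thread, which is exactly the configuration where the loss decreases very slowly.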