huangyangyu / SeqFace

SeqFace : Making full use of sequence information for face recognition
https://arxiv.org/pdf/1803.06524.pdf
MIT License

Use sphereface code to test your model #2

Closed deepage closed 6 years ago

deepage commented 6 years ago

Thank you for sharing this nice work. I downloaded your res-27 model and tested it on LFW, but only got 99.48%; maybe something is wrong with how I align the photos. I use MTCNN to detect the faces in all photos and align them to these five reference points: coord5point = [46.29460144, 59.69630051; 81.53179932, 59.50139999; 64.02519989, 79.73660278; 49.54930115, 100.3655014; 78.72990417, 100.20410156]; then I crop to 128x128. Is that right, or is the problem here? Finally, I use the evaluation code from sphereface and get 99.48% accuracy (with image flip); without image flip it gets 99.43%. Can you give me some advice on what I should do? BTW, I also tried your Python code norml2_sim, but got the same result. Thanks very much!
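For reference, aligning MTCNN landmarks to a fixed 5-point template like the `coord5point` above is usually done with a least-squares similarity transform (the Umeyama method). The sketch below is a generic illustration of that step, not SeqFace's actual `util.py` code; the example source landmarks in the usage note are made up.

```python
import numpy as np

# The 5-point template quoted above: left eye, right eye, nose tip,
# left mouth corner, right mouth corner, in a 128x128 crop.
COORD5POINT = np.array([
    [46.29460144,  59.69630051],
    [81.53179932,  59.50139999],
    [64.02519989,  79.73660278],
    [49.54930115, 100.3655014],
    [78.72990417, 100.20410156],
])

def similarity_transform(src, dst):
    """Least-squares similarity transform (Umeyama) mapping src -> dst.

    src, dst: (N, 2) arrays of corresponding points.
    Returns a 2x3 affine matrix usable with cv2.warpAffine.
    """
    src = np.asarray(src, dtype=np.float64)
    dst = np.asarray(dst, dtype=np.float64)
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # Cross-covariance between the centered point sets.
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Guard against an accidental reflection in the recovered rotation.
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    scale = np.trace(np.diag(S) @ D) / var_src
    t = dst_mean - scale * (R @ src_mean)
    return np.hstack([scale * R, t[:, None]])
```

With OpenCV, `cv2.warpAffine(img, M, (128, 128))` applied with the returned matrix produces the aligned 128x128 crop.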

huangyangyu commented 6 years ago

I guess the alignment method may be different. How do you align the face from the key points? We share our alignment method in the util.py file. You can try our test script, which uses already-aligned faces.
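For completeness, the `norml2_sim` comparison mentioned earlier typically amounts to cosine similarity between L2-normalized embeddings, and the "with image flip" numbers in LFW evaluations usually come from summing the features of an image and its horizontal mirror before normalization. This is a hedged sketch of that common recipe, not SeqFace's exact script; the feature vectors are placeholders:

```python
import numpy as np

def norml2_sim(feat_a, feat_b):
    """Cosine similarity between two feature vectors after L2 normalization."""
    a = feat_a / np.linalg.norm(feat_a)
    b = feat_b / np.linalg.norm(feat_b)
    return float(a @ b)

def fused_feature(feat, feat_flipped):
    """Flip augmentation: sum the features extracted from the image and
    its horizontal mirror; normalization happens later in norml2_sim."""
    return feat + feat_flipped
```

On LFW, these pairwise similarities are then thresholded over the standard 10 folds to report verification accuracy.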

thuhuwei commented 6 years ago

The alignment method is introduced in https://github.com/AlfredXiangWu/face_verification_experiment ; we use the same method (but with RGB images).