Thanks for your great work. I ran test.py to get synthesized normalized faces from the IJB-A dataset and verified the recognition performance on the corresponding protocol. However, I found that Rank-1 identification degrades by over 7% when I feed the normalized faces into the recognition model.
Here is a step-by-step list of what I did:
(1) Used MTCNN's 5-point facial landmark model to detect landmarks in the input image and rotated the image so that the two eye points are horizontal. Meanwhile, I set the distance between the midpoint of the eyes and the midpoint of the mouth to 90 pixels and cropped the result to 250x250 (see the first sketch after this list).
(2) Sent the cropped faces to the face normalization model you provided to obtain the normalized faces.
(3) Resized the normalized faces to 144x144 and converted them to grayscale.
(4) Fed the normalized faces into the Light CNN-29 v2 model to get the facial representations (see the second sketch after this list). The Light-CNN model I used is from https://github.com/AlfredXiangWu/LightCNN
(5) Evaluated on the IJB-A protocol (see the third sketch after this list). (The Light-CNN model differs only slightly from the paper.)
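For reference, here is a minimal sketch of the alignment in step (1), assuming OpenCV and a (5, 2) MTCNN landmark array in the order [left eye, right eye, nose, left mouth, right mouth]. The target position of the eye midpoint inside the 250x250 crop is my own guess, so please point out if your geometry differs:

```python
import cv2
import numpy as np

def align_face(img, landmarks, out_size=250, eye_mouth_dist=90):
    """Rotate so the eyes are horizontal, scale so the eye-midpoint to
    mouth-midpoint distance is `eye_mouth_dist` pixels, crop to out_size."""
    left_eye, right_eye = landmarks[0], landmarks[1]
    eye_mid = (left_eye + right_eye) / 2.0
    mouth_mid = (landmarks[3] + landmarks[4]) / 2.0

    # Rotation angle that makes the eye line horizontal.
    dx, dy = right_eye - left_eye
    angle = np.degrees(np.arctan2(dy, dx))

    # Scale so the eye-mouth distance becomes eye_mouth_dist pixels.
    scale = eye_mouth_dist / np.linalg.norm(mouth_mid - eye_mid)

    # Similarity transform around the eye midpoint, then translate the
    # eye midpoint to an assumed target position inside the crop.
    M = cv2.getRotationMatrix2D((float(eye_mid[0]), float(eye_mid[1])),
                                angle, scale)
    target = (out_size / 2.0, out_size * 0.38)  # assumed placement
    M[0, 2] += target[0] - eye_mid[0]
    M[1, 2] += target[1] - eye_mid[1]
    return cv2.warpAffine(img, M, (out_size, out_size))
```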
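And here is roughly how I extract features in steps (3)-(4). The 144x144 -> 128x128 center crop and the num_classes value are assumptions on my side (please check them against the LightCNN repo's README and extract_features.py):

```python
import cv2
import numpy as np
import torch
from light_cnn import LightCNN_29Layers_v2  # from AlfredXiangWu/LightCNN

def extract_feature(model, img_path):
    """Return the 256-d LightCNN feature for one normalized face image."""
    img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (144, 144))
    # Center-crop 128x128 to match the network's input size (assumption:
    # this mirrors the test-time crop used in the LightCNN repo).
    img = np.ascontiguousarray(img[8:136, 8:136])
    x = torch.from_numpy(img).float().div(255.0).view(1, 1, 128, 128)
    with torch.no_grad():
        _, feat = model(x)  # the model returns (logits, 256-d feature)
    return feat.squeeze(0).numpy()

# num_classes is an assumption; it must match the released v2 checkpoint.
model = LightCNN_29Layers_v2(num_classes=80013)
ckpt = torch.load('LightCNN_29Layers_V2_checkpoint.pth.tar',
                  map_location='cpu')
# Strip the 'module.' prefix left by DataParallel training.
state = {k.replace('module.', '', 1): v
         for k, v in ckpt['state_dict'].items()}
model.load_state_dict(state)
model.eval()
```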
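Finally, step (5) in essence. This is a simplified sketch only; the real IJB-A protocol averages features over media/templates and evaluates ten predefined splits:

```python
import numpy as np

def rank1_accuracy(probe_feats, probe_ids, gallery_feats, gallery_ids):
    """Rank-1 identification with cosine similarity."""
    # L2-normalize so the dot product equals cosine similarity.
    p = probe_feats / np.linalg.norm(probe_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = p @ g.T                            # (num_probe, num_gallery)
    best = gallery_ids[np.argmax(sims, axis=1)]
    return np.mean(best == probe_ids)
```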
Do you have any suggestions? I would really appreciate any help.
Besides, did you re-train the Light-CNN model, and which model did you use? Could you share the corresponding landmark locations in the output image for computing the transformation matrix? Thank you for your attention to this matter, and I look forward to hearing from you.
Hello, I want to test the model on the IJB-A dataset, but I don't have aligned pictures. Would you please share your code for processing IJB-A? Thank you very much!