bonseyes-admin opened this issue 6 years ago

Hi,
The code in alignment_tools.cpp looks like it pre-processes the input to the network through an affine warp using 71 pts and a meanpose model. Where do the 71 pts come from?
https://github.com/wywu/LAB/blob/master/tools/alignment_tools.cpp
Thanks, Tim
Hi Tim,
The 71 pts come from the annotated landmarks. In this version of the evaluation code, the annotated landmarks are only used to derive the detection rectangle for cropping the face; please refer to #8. Moreover, we will supply evaluation code that takes only the detection rectangle and the image as inputs.
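Roughly, the landmark-based cropping looks like the sketch below; the 0.2 margin and the 256x256 input size are illustrative values here, not necessarily the exact ones our tools use:

```cpp
// Sketch: derive a detection rectangle from the annotated landmarks and crop
// the face. The 0.2 margin and 256x256 output size are assumed values.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat CropFaceFromLandmarks(const cv::Mat& image,
                              const std::vector<cv::Point2f>& landmarks) {
    cv::Rect2f box = cv::boundingRect(landmarks);  // tight box around the points
    const float margin = 0.2f;                     // expand so the whole face fits
    box.x -= box.width * margin;
    box.y -= box.height * margin;
    box.width  *= 1.0f + 2.0f * margin;
    box.height *= 1.0f + 2.0f * margin;
    // Clamp to the image bounds, then crop and resize to the network input size.
    cv::Rect roi = cv::Rect(box) & cv::Rect(0, 0, image.cols, image.rows);
    cv::Mat face;
    cv::resize(image(roi), face, cv::Size(256, 256));
    return face;
}
```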
Best, Wayne
Thanks. However, your evaluation code has not been released, correct? So is it possible to replicate the results of your paper without it?
@bonseyes-admin I have replicated the results on the WFLW dataset; you can try it.
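For reference, the metric reported on WFLW is the mean landmark error normalized by the inter-ocular distance (NME). A minimal sketch of it; the outer-eye-corner indices 60 and 72 of the 98-point scheme are my assumption:

```cpp
// Sketch: inter-ocular NME over one face. Indices 60/72 as the outer eye
// corners of the 98-point annotation are assumed, not taken from the repo.
#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

static double Dist(const cv::Point2f& a, const cv::Point2f& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

double InterOcularNME(const std::vector<cv::Point2f>& pred,
                      const std::vector<cv::Point2f>& gt) {
    const double interOcular = Dist(gt[60], gt[72]);  // normalization distance
    double err = 0.0;
    for (size_t i = 0; i < gt.size(); ++i)
        err += Dist(pred[i], gt[i]);
    return err / (gt.size() * interOcular);  // mean per-point error, normalized
}
```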
@wywu, "Moreover, we will supply an evaluation code with the detection rectangle and image as the only inputs", can you please give us an update on this?
I think I can write the code using just the bounding rectangle, but I have a question: should I undo the face rotation, or will the neural net be robust enough to deal with various orientations?
@edubois Hi, we do not rotate the face when testing. Because we augment the data with rotation during training, we think the model is robust enough to deal with most orientations.
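A minimal sketch of that kind of rotation augmentation; the ±30° range is illustrative, not our exact training setting:

```cpp
// Sketch: rotate an image and its landmarks together around the image center.
// The +/-30 degree range is an assumed value, not the paper's setting.
#include <opencv2/opencv.hpp>
#include <random>
#include <vector>

void AugmentRotation(cv::Mat& image, std::vector<cv::Point2f>& landmarks,
                     std::mt19937& rng) {
    std::uniform_real_distribution<double> angleDist(-30.0, 30.0);
    const cv::Point2f center(image.cols / 2.0f, image.rows / 2.0f);
    const cv::Mat R = cv::getRotationMatrix2D(center, angleDist(rng), 1.0);
    cv::Mat rotated;
    cv::warpAffine(image, rotated, R, image.size());
    image = rotated;
    cv::transform(landmarks, landmarks, R);  // same 2x3 transform for the points
}
```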
@edubois For face rotation, I advise you to try PCN. That network is similar to MTCNN.
@bonseyes-admin @wywu Could someone close this issue, please?
How is the meanpose model used during the training phase?
@mrgloom I think the meanpose model is just used for face localization.
I assume it can be used to warp from the ground-truth landmarks to the mean pose during training, but I want to know the implementation details.
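My guess at the usual implementation would be to estimate a similarity transform from the annotated landmarks to the meanpose points and warp the image with it; a minimal sketch, without knowing whether alignment_tools.cpp does exactly this:

```cpp
// Sketch: align a face to the meanpose via a least-squares similarity
// transform (rotation + uniform scale + translation) between point sets.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat WarpToMeanPose(const cv::Mat& image,
                       const std::vector<cv::Point2f>& landmarks,
                       const std::vector<cv::Point2f>& meanpose,
                       const cv::Size& outputSize) {
    // 2x3 similarity transform mapping landmarks onto meanpose (may be empty
    // if estimation fails, so check T.empty() in real code).
    cv::Mat T = cv::estimateAffinePartial2D(landmarks, meanpose);
    cv::Mat aligned;
    cv::warpAffine(image, aligned, T, outputSize);
    return aligned;
}
```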
@so-as
Hi,
I want to ask about some pre-processing details from when you replicated the results on the WFLW dataset. Thanks a lot!
Is the data augmentation done on the original image or on the cropped image?
If you first detect the face and then crop and resize the face image to 256x256, which face detector do you use?
Can you provide an email address? I cannot find it on your website.
Thank you.
Best
Kiki
@yangyangkiki If you just want to evaluate performance on the WFLW dataset, just use the WFLW data as input; data augmentation and face cropping are not necessary. My email is ge_ruimin@163.com. Thank you!
@so-as Thank you for your kind reply!
> @edubois For face rotation, I advise you to try PCN. That network is similar to MTCNN.

But PCN does not do alignment.
@wywu I found that in the evaluation process, you use the ground-truth points and an affine warp to calculate the bounding box for the face, then forward the crop to get the desired 98 pts. But in reality, we should first detect the bounding box. I want to know how you get the bounding box during training and when evaluating real faces. Thanks.
> @bonseyes-admin I have replicated the results on the WFLW dataset; you can try it.

@so-as If I want to test the model, how can I generate the bounding boxes to feed into the net? The results may be largely influenced by the bounding boxes.
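For lack of the official tool, one option is an off-the-shelf face detector, e.g. OpenCV's bundled Haar cascade; a minimal sketch (the cascade file path depends on your installation, and the resulting crops should roughly match the training-time cropping convention):

```cpp
// Sketch: generate face bounding boxes with OpenCV's Haar cascade detector.
// Any detector works as long as its boxes resemble the training-time crops.
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

std::vector<cv::Rect> DetectFaces(const cv::Mat& image,
                                  const std::string& cascadePath) {
    // e.g. ".../haarcascade_frontalface_default.xml" from the OpenCV data dir
    cv::CascadeClassifier detector(cascadePath);
    cv::Mat gray;
    cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);  // improves detection under uneven lighting
    std::vector<cv::Rect> faces;
    detector.detectMultiScale(gray, faces, /*scaleFactor=*/1.1, /*minNeighbors=*/3);
    return faces;
}
```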