wywu / LAB

[CVPR 2018] Look at Boundary: A Boundary-Aware Face Alignment Algorithm
https://wywu.github.io/projects/LAB/LAB.html

Preprocessing alignment using 71pts #9

Open bonseyes-admin opened 6 years ago

bonseyes-admin commented 6 years ago

Hi,

The code in alignment_tools.cpp looks like it pre-processes the input to the network through an affine warp using 71pts and a meanpose model. Where do the 71pts come from?
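For context, a warp like the one described is typically a least-squares affine fit from the annotated points to the meanpose. A minimal sketch of that idea (my own illustration in numpy, not the repo's actual C++ code):

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src_pts -> dst_pts.
    src_pts, dst_pts: (N, 2) arrays of corresponding landmarks."""
    n = src_pts.shape[0]
    A = np.hstack([src_pts, np.ones((n, 1))])  # homogeneous coords, (N, 3)
    # Solve A @ M = dst_pts for the 3x2 parameter matrix M.
    M, _, _, _ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return M  # apply with: np.hstack([pts, ones]) @ M
```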

https://github.com/wywu/LAB/blob/master/tools/alignment_tools.cpp

Thanks, Tim

wywu commented 6 years ago

Hi Tim,

The 71pts come from annotated landmarks. In this version of the evaluation code, the annotated landmarks only play the role of a detection rectangle to crop the face. Please refer to #8. Moreover, we will supply evaluation code that takes only the detection rectangle and image as inputs.
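To illustrate "landmarks playing the role of a detection rectangle": the crop box can be taken as the (optionally padded) extent of the annotated points. This is my own sketch of that idea, not the repo's code, and the padding fraction is an assumption:

```python
import numpy as np

def rect_from_landmarks(pts, pad=0.0):
    """Tight detection rectangle around annotated landmarks,
    optionally padded by a fraction of the box size."""
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    w, h = x1 - x0, y1 - y0
    return (x0 - pad * w, y0 - pad * h, x1 + pad * w, y1 + pad * h)
```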

Best, Wayne

bonseyes-admin commented 6 years ago

Thanks. However, your evaluation code is not released yet, correct? So is it possible to replicate the results of your paper without it?

so-as commented 6 years ago

@bonseyes-admin I have replicated the results on the WFLW dataset; you can try.

edubois commented 6 years ago

@wywu, "Moreover, we will supply an evaluation code with the detection rectangle and image as the only inputs", can you please give us an update on this?

I think I can make the code just by using the bounding rectangle, but I have a question: shall I undo the face rotation or will the neural net be robust enough to deal with various orientations?

wywu commented 6 years ago

@edubois Hi, we do not rotate faces at test time. Because we augment the training data with rotation, we believe the model is robust to most orientations.
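Rotation augmentation of the landmarks amounts to rotating the coordinates about a center point (the matching image warp is done the same way). A minimal sketch of the landmark side, assuming the angle and center are chosen by the augmentation policy:

```python
import numpy as np

def rotate_landmarks(pts, angle_deg, center):
    """Rotate (N, 2) landmark coordinates about `center` by angle_deg,
    as one might do for rotation augmentation during training."""
    t = np.deg2rad(angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return (pts - center) @ R.T + center
```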

HansRen1024 commented 6 years ago

@edubois For face rotation, I advise you to try PCN. This network is similar to mtcnn.

HansRen1024 commented 6 years ago

@bonseyes-admin @wywu could someone close this issue please?

mrgloom commented 5 years ago

How is the meanpose model used during the training phase?

so-as commented 5 years ago

@mrgloom I think the meanpose model is just for face localization.

mrgloom commented 5 years ago

I assume it can be used to warp from the ground-truth landmarks to the mean pose during training, but I want to know the implementation details.
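For reference, the usual way to implement such a warp is a least-squares similarity transform (Umeyama-style: scale, rotation, translation) from the annotated landmarks to the mean pose. This is my own sketch of the standard technique, not confirmation of LAB's implementation:

```python
import numpy as np

def similarity_to_meanpose(landmarks, meanpose):
    """Least-squares similarity transform (scale, rotation, translation)
    taking (N, 2) annotated landmarks onto the mean pose
    (Umeyama-style; reflection handling omitted for brevity)."""
    mu_s, mu_d = landmarks.mean(0), meanpose.mean(0)
    src, dst = landmarks - mu_s, meanpose - mu_d
    cov = dst.T @ src / len(landmarks)
    U, S, Vt = np.linalg.svd(cov)
    R = U @ Vt
    scale = S.sum() * len(landmarks) / (src ** 2).sum()
    t = mu_d - scale * (mu_s @ R.T)
    return scale, R, t  # apply with: scale * pts @ R.T + t
```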

yangyangkiki commented 5 years ago

@so-as

Hi,

I want to ask about some preprocessing details from when you replicated the results on the WFLW dataset. Thanks a lot!

  1. Is the data augmentation done on the original image or the cropped image?

  2. If you first detect the face, then crop and resize the face image to 256*256, which face detector do you use?

Can you provide an email address? I cannot find it on your website.

Thank you.

Best

Kiki

so-as commented 5 years ago

@yangyangkiki If you just want to evaluate performance on the WFLW dataset, just use the WFLW data as input; data augmentation and face cropping are not necessary. My email is ge_ruimin@163.com. Thank you!

yangyangkiki commented 5 years ago

@so-as Thank you for your kind reply!

jiagh2010 commented 5 years ago

> @edubois For face rotation, I advise you to try PCN. This network is similar to mtcnn.

But PCN does not do alignment.

ezone1987 commented 5 years ago

@wywu I found that in the evaluation process, you use the ground-truth points and an affine warp to compute the bounding box for the face, then forward the crop to get the desired 98 points. But in practice, we should detect the bounding box first. I want to know how you get the bounding box during training and for evaluation on real faces. Thanks.

ezone1987 commented 5 years ago

> @bonseyes-admin I had replicate the results in WLFW dataset, you can try.

@so-as If I want to test the model, how can I generate the bounding boxes to feed into the net? The results may be heavily influenced by the bounding boxes.
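A common convention (my own guess, not necessarily the authors' exact recipe) is to expand the detector's rectangle into a padded square before cropping and resizing to the network's fixed input size, so different detectors produce comparable crops:

```python
def square_crop_box(x0, y0, x1, y1, scale=1.2):
    """Expand a detector rectangle into a padded square, a common way to
    normalize boxes before a fixed-size (e.g. 256x256) crop.
    The scale factor 1.2 is an assumption, not LAB's documented value."""
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    side = max(x1 - x0, y1 - y0) * scale
    return (cx - side / 2, cy - side / 2, cx + side / 2, cy + side / 2)
```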