wywu / LAB

[CVPR 2018] Look at Boundary: A Boundary-Aware Face Alignment Algorithm
https://wywu.github.io/projects/LAB/LAB.html

mean error, failure rate #8

Open so-as opened 6 years ago

so-as commented 6 years ago

Thank you for your excellent work. Could you provide the formula for calculating the mean error and failure rate on the WFLW dataset? Thanks.

wywu commented 6 years ago

Hi,

Thanks! You can calculate the mean error with the “inter-ocular” normalising factor by using https://ibug.doc.ic.ac.uk/media/uploads/competitions/compute_error.m and modifying the index of the outer eye corners. The failure rate is calculated as the percentage of samples in the test set whose error is larger than 10%.
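
For reference, a minimal C++ sketch of these two metrics is below. It assumes 98-point WFLW annotations with the outer eye corners at indices 60 and 72; please double-check the indices against compute_error.m.

```cpp
#include <cmath>
#include <vector>

struct Point2f { float x, y; };

// Mean point-to-point error of one sample, normalised by the inter-ocular
// distance (outer eye corners, assumed here to be indices 60 and 72 in the
// 98-point WFLW annotation).
float MeanError(const std::vector<Point2f>& pred, const std::vector<Point2f>& gt) {
  float iod = std::hypot(gt[60].x - gt[72].x, gt[60].y - gt[72].y);
  float sum = 0.f;
  for (size_t k = 0; k < gt.size(); ++k) {
    sum += std::hypot(pred[k].x - gt[k].x, pred[k].y - gt[k].y);
  }
  return sum / (gt.size() * iod);
}

// Failure rate: fraction of test samples whose mean error exceeds the
// threshold (0.1 in the paper).
float FailureRate(const std::vector<float>& per_sample_errors, float threshold = 0.1f) {
  size_t failures = 0;
  for (float e : per_sample_errors) {
    if (e > threshold) ++failures;
  }
  return static_cast<float>(failures) / per_sample_errors.size();
}
```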

Best, Wayne

so-as commented 6 years ago

Thank you for your timely reply. I made a mistake in the formula, so my result was quite different from the test result you published. So Failure rate = (number of samples whose mean error > 0.1) / (number of all samples). But I am puzzled about how the threshold 0.1 is determined. Thank you very much @wywu

wywu commented 6 years ago

Hi,

We follow “DenseReg: Fully Convolutional Dense Shape Regression In-the-Wild” and use a threshold of 0.1. However, some of the literature also uses 0.08 as the threshold.

Best, Wayne

so-as commented 6 years ago

Hi, I did it again as you described. The result was similar to yours. Thank you so much!

so-as commented 6 years ago

When I read the code in tools/alignment_tools.cpp, I found that the prediction process uses the annotated points (line 246):

```cpp
vector<float> label_71pt_list(71*2);
for (size_t j=0; j<76; j++) {
    label_71pt_list[j] = label_list[i][j];
}
for (size_t j=76; j<86; j++) {
    label_71pt_list[j] = label_list[i][j+8];
}
for (size_t j=86; j<94; j++) {
    label_71pt_list[j] = label_list[i][j+16];
}
label_71pt_list[94] = label_list[i][120];
label_71pt_list[95] = label_list[i][121];
label_71pt_list[96] = label_list[i][128];
label_71pt_list[97] = label_list[i][129];
label_71pt_list[98] = label_list[i][136];
label_71pt_list[99] = label_list[i][137];
label_71pt_list[100] = label_list[i][144];
label_71pt_list[101] = label_list[i][145];
for (size_t j=102; j<142; j++) {
    label_71pt_list[j] = label_list[i][j+50];
}
vector<Point2f> landmark = ToPoints(label_71pt_list);
```

This does not seem reasonable. If I already know the landmarks in an image, it is unnecessary to use LAB to predict them. Why? @wywu

wywu commented 6 years ago

Hi,

In this version of the evaluation code, the annotated landmarks are only used to derive a detection rectangle for cropping the face. Generally, you could use the detection rectangle directly instead of the annotated landmarks for cropping; in our experiments this cropping method showed hardly any performance difference.
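
To illustrate the idea, a rough sketch (not the repository's actual code) of deriving such a crop rectangle from a set of landmark points is below; a detector box could be used in exactly the same place. The FaceRect type is just an illustrative struct.

```cpp
#include <algorithm>
#include <vector>

struct Point2f { float x, y; };
struct FaceRect { float x, y, width, height; };

// Tight bounding rectangle of the landmarks; the annotated points are used
// only to obtain a region like this for cropping the face.
FaceRect BoundingRect(const std::vector<Point2f>& pts) {
  float min_x = pts[0].x, max_x = pts[0].x;
  float min_y = pts[0].y, max_y = pts[0].y;
  for (const Point2f& p : pts) {
    min_x = std::min(min_x, p.x); max_x = std::max(max_x, p.x);
    min_y = std::min(min_y, p.y); max_y = std::max(max_y, p.y);
  }
  return {min_x, min_y, max_x - min_x, max_y - min_y};
}
```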

Best, Wayne

so-as commented 6 years ago

Thank you for your reply. You mean that I need to detect the position of the face in an image, then crop the face and feed it into LAB to get the landmarks? So the meanpose file is not useful for me? @wywu
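
If it helps later readers, a hypothetical outline of that detect-then-crop pipeline with OpenCV might look like the following; the cascade file name and the 20% margin are illustrative choices, not values from this repository.

```cpp
#include <algorithm>
#include <vector>
#include <opencv2/opencv.hpp>

int main() {
  cv::Mat image = cv::imread("face.jpg");
  cv::Mat gray;
  cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);

  // Any face detector works here; a Haar cascade is just the simplest example.
  cv::CascadeClassifier detector("haarcascade_frontalface_default.xml");
  std::vector<cv::Rect> faces;
  detector.detectMultiScale(gray, faces);

  for (const cv::Rect& face : faces) {
    // Expand the detection slightly so the whole face fits, then crop.
    int margin = static_cast<int>(0.2 * face.width);
    cv::Rect roi = face;
    roi.x = std::max(0, roi.x - margin);
    roi.y = std::max(0, roi.y - margin);
    roi.width  = std::min(image.cols - roi.x, roi.width  + 2 * margin);
    roi.height = std::min(image.rows - roi.y, roi.height + 2 * margin);

    cv::Mat crop = image(roi).clone();
    // `crop` would then be resized to the network input size and passed to
    // LAB; the predicted landmarks are mapped back by adding roi.x / roi.y.
  }
  return 0;
}
```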

so-as commented 6 years ago

Yes, I used the cropped face as input and got very good landmarks, but the time is 500 ms. The cropped face size is less than 600×800. I built Caffe with cuDNN and nvcc. Why do you think the time is so much larger than 60 ms? @wywu

wywu commented 6 years ago

Hi,

We have not supplied a GPU version of the evaluation code yet.

Best, Wayne

so-as commented 6 years ago

So happy for your reply. I just modified the code in alignment_tools.cpp, changing caffe::Caffe::set_mode(caffe::Caffe::CPU); to caffe::Caffe::SetDevice(0); caffe::Caffe::set_mode(caffe::Caffe::GPU);. Is that enough to make it the GPU version, or does any other code need to be modified? @wywu
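
For anyone trying the same change, a minimal sketch of how GPU mode is usually set up and the forward pass timed in Caffe is below; the prototxt/caffemodel paths are placeholders, and the first forward call also pays one-off cuDNN initialisation cost, which can dominate a single timing.

```cpp
#include <chrono>
#include <iostream>
#include <caffe/caffe.hpp>

int main() {
  // Select the GPU *before* constructing the net so layers are allocated on it.
  caffe::Caffe::SetDevice(0);
  caffe::Caffe::set_mode(caffe::Caffe::GPU);

  // Placeholder paths, not the repository's actual file names.
  caffe::Net<float> net("deploy.prototxt", caffe::TEST);
  net.CopyTrainedLayersFrom("model.caffemodel");

  // Warm up once, then time only the forward pass.
  net.Forward();
  auto t0 = std::chrono::steady_clock::now();
  net.Forward();
  auto t1 = std::chrono::steady_clock::now();
  std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count()
            << " ms" << std::endl;
  return 0;
}
```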

guoqiangqi commented 5 years ago

> Hi, I did it again as you described. The result was similar to yours. Thank you so much!

I have the same question about the formula for calculating the mean error and failure rate on the WFLW dataset. I noticed that you have finished this work; could you provide me with the formula? Thanks.

CharlesShang commented 5 years ago

@so-as Hi, have you already done the detection + landmark pipeline? If so, would you please share your work? It would be very helpful! Thanks!