lxzyuan opened 3 years ago
Hi, can you show some visual results of the landmarks predicted by our method? That may help me figure out what the problem is.
ok.
The red is GT and the black is the predicted result.
Hi, the result seems strange. Could you post reconstruction images as well?
Hi, these are reconstruction images.
The reconstruction seems reasonable. The bad performance might come from a bad pre-alignment. Our method is sensitive to the pre-alignment because we do not use data augmentation during training.
Could you tell me which 5 landmarks you use for the pre-alignment? In our method, we use the center of each eye, the nose tip, and the two mouth corners. Alternatively, you may get the 5 landmarks from a face detector such as MTCNN or dlib.
In the AFLW2000-3D dataset, MTCNN or dlib may fail to detect the 5 landmarks on large-pose faces. Approximately 500 images could not be detected. How did you solve this problem?
In my experiment, I selected the 5 most reasonable landmarks from the AFLW2000-3D pt3d_68 annotations.
In our experiment, we use an in-house 3D landmark detector which cannot be made publicly available.
Regarding the 5 landmarks used for alignment, the two eye landmarks are not directly selected from the 68 landmarks. Instead, we compute the average of the landmarks around each eye (6 landmarks per eye). Therefore, these two landmarks are not among the landmarks in pt3d_68.
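The eye-averaging step above can be sketched as follows. This is not the repository's code; it assumes the standard iBUG 68-point ordering (indices 36-41 outline the left eye, 42-47 the right eye, 30 is the nose tip, 48 and 54 are the mouth corners), which matches the pt3d_68 convention:

```python
import numpy as np

def five_landmarks_from_68(lm68):
    """Derive the 5 alignment landmarks from a (68, 2) landmark array.

    Assumes iBUG 68-point ordering: 36-41 = left-eye contour,
    42-47 = right-eye contour, 30 = nose tip, 48 / 54 = mouth corners.
    Each eye center is the mean of its 6 surrounding contour points.
    """
    lm68 = np.asarray(lm68, dtype=np.float64)
    left_eye  = lm68[36:42].mean(axis=0)   # average of 6 left-eye points
    right_eye = lm68[42:48].mean(axis=0)   # average of 6 right-eye points
    nose_tip  = lm68[30]
    mouth_l   = lm68[48]
    mouth_r   = lm68[54]
    return np.stack([left_eye, right_eye, nose_tip, mouth_l, mouth_r])
```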
OK, thank you very much. I only averaged the 2 landmarks at the left and right corners of each eye, which may not be very accurate. I will average all the landmarks around each eye instead. Thank you very much again!
Hi, if you use the average of the 2 landmarks at the left and right corners of each eye, the result should be similar to averaging all 6 landmarks, and should not give such bad performance.
I find that the predicted landmarks are always above the GT landmarks in your posted images, which is quite strange. It seems there is a constant shift and scale difference between the GT landmarks and the predicted ones. You should check the cropped input images and the reconstruction images to see whether the reconstructed faces are aligned with the input faces.
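One way to quantify the suspected constant shift and scale difference is to fit a scalar scale and a 2D translation between the predicted and GT landmarks. This is a diagnostic sketch, not part of the repository:

```python
import numpy as np

def fit_scale_shift(pred, gt):
    """Least-squares fit of scalar s and 2D t minimizing ||s*pred + t - gt||.

    If s is far from 1 or t is far from (0, 0), the predictions are
    systematically offset, which points at a cropping / pre-alignment
    problem rather than a bad reconstruction.
    """
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    pc = pred - pred.mean(axis=0)          # center both point sets
    gc = gt - gt.mean(axis=0)
    s = (pc * gc).sum() / (pc * pc).sum()  # optimal scalar scale
    t = gt.mean(axis=0) - s * pred.mean(axis=0)
    return s, t
```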
Also, you could check if the results for example images provided in this repository are reasonable.
OK, I will check it. Thank you very much~
Hi, can you share your code for evaluating AFLW2000-3D? I want to use your evaluation code to check my results.
Thank you~
Hi, you can follow https://github.com/XgTu/2DASL/blob/master/test_codes/benchmark_aflw2000.py to conduct the evaluation. Our evaluation code is not available yet. Sorry for that.
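For reference, the per-image metric in scripts like the linked benchmark_aflw2000.py is, to my understanding, a normalized mean error (NME) with the mean point-to-point distance divided by the square root of the GT landmark bounding-box area. A minimal sketch of that metric (not the exact script):

```python
import numpy as np

def nme_bbox(pred, gt):
    """NME for one image's 2D landmarks.

    The mean point-to-point error is divided by sqrt(w * h) of the
    ground-truth landmark bounding box -- the normalization commonly
    used for AFLW2000-3D evaluation.
    """
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    mins, maxs = gt.min(axis=0), gt.max(axis=0)
    norm = np.sqrt((maxs[0] - mins[0]) * (maxs[1] - mins[1]))
    return np.linalg.norm(pred - gt, axis=1).mean() / norm
```

Averaging this value over all images (often reported separately per yaw-angle bin) gives the benchmark number.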
OK, thank you very much~
Hey lxzyuan, I just want to know how you open the files with the extensions ".obj" and ".mat" in the output folder?
Hi, you can open .obj with MeshLab, and .mat with MATLAB.
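If you'd rather inspect the files programmatically (e.g. on Linux without MeshLab/MATLAB), a minimal sketch: .mat files can be read with scipy.io.loadmat, and .obj vertex positions can be parsed directly, since Wavefront .obj is plain text. This toy parser only handles `v x y z` lines; real .obj files also carry faces (`f`), normals (`vn`), and texture coordinates (`vt`):

```python
def read_obj_vertices(lines):
    """Collect the vertex positions ('v x y z' lines) of a Wavefront .obj.

    Minimal sketch: skips comments, faces, normals, etc. For .mat files,
    scipy.io.loadmat is the usual Python-side alternative to MATLAB.
    """
    verts = []
    for line in lines:
        parts = line.split()
        if parts and parts[0] == 'v':       # 'vn'/'vt' don't match 'v'
            verts.append(tuple(float(x) for x in parts[1:4]))
    return verts
```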
OK, thank you. The file opens in MeshLab on Windows but not on Linux, I think.
My 5 landmarks come from the GT, and I use the pre-trained model FaceReconModel.pb, but my results are far from the author's reported performance. Why? Is there any way to improve? Thank you~