-
Thank you for your great contributions to the 3D face alignment field. I am currently working on a project related to facial occlusion and would greatly benefit from the AFLW2000-3D-occlusion dataset.…
-
For the benchmark evaluation on AFLW2000:
1) In Table 1 of the paper, are the reported numbers evaluated using (x, y) only, without z?
1> The evaluation numbers from your released model hint at that.
2> Th…
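Whether z is included changes the NME noticeably, so it is worth checking both. Below is a minimal sketch of the comparison, assuming the common AFLW2000-3D convention of normalizing by the square root of the ground-truth bounding-box area (other normalizers, e.g. inter-ocular distance, also exist):

```python
import numpy as np

def nme(pred, gt, dims=2):
    """Mean per-landmark error over the first `dims` coordinates (2 = x,y only;
    3 = include z), normalized by sqrt of the GT 2D bounding-box area."""
    d_pred, d_gt = pred[:, :dims], gt[:, :dims]
    # bounding box computed from the 2D ground-truth landmarks
    mins, maxs = gt[:, :2].min(axis=0), gt[:, :2].max(axis=0)
    norm = np.sqrt((maxs[0] - mins[0]) * (maxs[1] - mins[1]))
    return np.mean(np.linalg.norm(d_pred - d_gt, axis=1)) / norm

# a z-only offset is invisible to the 2D metric but not the 3D one
gt = np.random.rand(68, 3) * 100
pred = gt.copy()
pred[:, 2] += 10.0
print(nme(pred, gt, dims=2))  # 0.0
```

Running the released model through both variants should reveal which one reproduces the Table 1 numbers.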
-
Where can I obtain “../SynergyNet/aflw2000_data/eval/ALFW2000-3D_pose_3ANG_excl.npy” and “../SynergyNet/aflw2000_data/eval/ALFW2000-3D_pose_3ANG_skip.npy”?
-
Validation is done on 1969 images of AFLW2000 (not the 2000 reported in the paper).
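The 1969 figure most likely comes from the common head-pose convention (used e.g. by Hopenet) of excluding AFLW2000 samples whose ground-truth Euler angles exceed ±99°, which removes 31 of the 2000 images. A minimal sketch of that filter, assuming an (N, 3) array of [yaw, pitch, roll] in degrees:

```python
import numpy as np

def keep_for_pose_eval(euler_deg, limit=99.0):
    """Boolean mask keeping samples whose GT Euler angles are all within
    ±`limit` degrees. The [yaw, pitch, roll] column layout is an assumption."""
    return np.all(np.abs(euler_deg) <= limit, axis=1)

angles = np.array([[30.0, -10.0, 5.0],   # kept
                   [120.0, 0.0, 0.0]])   # dropped (|yaw| > 99)
print(keep_for_pose_eval(angles))  # [ True False]
```

Counting `mask.sum()` over the full AFLW2000 annotations should come out to 1969 if this is the convention the repo follows.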
KeyKy updated
2 years ago
-
Hi @natanielruiz, great papers and work!
I have a question about data preprocessing: my model performs well on the training set but poorly on the test set.
Since I am training a small network, I will l…
-
My 5 landmarks come from the ground truth, and I use the pre-trained model FaceReconModel.pb, but my results are far from the author's reported performance. Why? Is there any way to improve them? Thank you~
![image](https://user-images.github…
-
The demo file runs on the camera. How can I run inference on videos and images instead?
I want the results shown below, which were presented in your paper.
![image](https://user-images.githubusercontent.com/65652168/158…
-
=> loaded train set, 61161 images were found
Mean: 0.0000, 0.0000, 0.0000
Std: 0.0000, 0.0000, 0.0000
=> Epoch: 1 | LR 0.00025000
The full dataset is not used. Why are the mean and std all 0.0000?
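All-zero mean/std usually means the accumulators were never updated (e.g. the stats loop iterated an empty loader) rather than a property of the data. A small streaming reference implementation to sanity-check against, assuming HxWx3 images scaled to [0, 1]:

```python
import numpy as np

def channel_mean_std(images):
    """Streaming per-channel mean/std over an iterable of HxWx3 arrays.
    Returns zeros only if the pixels really are zero; an empty iterable
    (a common cause of the 0.0000 printout) raises instead of hiding it."""
    count, s, sq = 0, np.zeros(3), np.zeros(3)
    for img in images:
        px = img.reshape(-1, 3).astype(np.float64)  # avoid uint8 overflow
        count += px.shape[0]
        s += px.sum(axis=0)
        sq += (px ** 2).sum(axis=0)
    if count == 0:
        raise ValueError("no images seen - check the dataset path/loader")
    mean = s / count
    std = np.sqrt(sq / count - mean ** 2)
    return mean, std
```

If this prints sensible values on the same 61161 images, the repo's stats pass is skipping the data; if it also prints zeros, the images are being loaded as blanks.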
-
In benchmark_aflw2000.py, the NME is multiplied by 100. What does the 100 mean? See line 42.
![image](https://user-images.githubusercontent.com/1982228/116190946-0f46e180-a75e-11eb-9a86-410a7352942e.png)
…
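For what it's worth, the ×100 is almost certainly just a unit conversion: after normalization the NME is dimensionless, and papers quote it as a percentage. A one-line illustration:

```python
def nme_to_percent(nme):
    """NME is dimensionless after bbox normalization; multiplying by 100
    reports it in percent, the unit AFLW2000-3D tables use."""
    return nme * 100.0

print(f"{nme_to_percent(0.0359):.2f}%")  # 3.59%
```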
-
PRNet has different dimensions from the other models because the neck part is removed. When I evaluated PRNet against your model, PRNet had the worst NME, probably because I removed some dim…