xiaoyun4 closed this issue 6 years ago
Hey @xiaoyun4 ,
Glad to answer the question. There are three steps in the preprocessing: (1) mean-normalization of the 68 keypoints, which removes the face location; (2) rotation normalization, aligning the keypoints to a horizontal axis, which removes the in-plane rotation; and (3) scale normalization, dividing the keypoints by the norm of the 68 vectors, which removes the face size.
I hope this gives you some details to work on.
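The three steps can be sketched in NumPy as below. This is my own illustration, not the repo's actual code: the landmark array shape and the eye-corner indices (36 and 45, following the common 68-point dlib/iBUG layout) are assumptions.

```python
import numpy as np

def normalize_keypoints(points, left_eye_idx=36, right_eye_idx=45):
    """Sketch of the three preprocessing steps (not the repo's code).

    points: (68, 2) array of facial landmarks.
    The eye indices assume the usual 68-point dlib layout
    (outer eye corners); adjust them for a different layout.
    """
    # Step 1: mean-normalization -- removes the face location.
    pts = points - points.mean(axis=0)

    # Step 2: rotation normalization -- rotate so the line through
    # the two eye keypoints is horizontal (removes in-plane rotation).
    dx, dy = pts[right_eye_idx] - pts[left_eye_idx]
    theta = np.arctan2(dy, dx)          # tilt angle of the eye line
    c, s = np.cos(-theta), np.sin(-theta)
    rot = np.array([[c, -s], [s, c]])   # rotation by -theta
    pts = pts @ rot.T

    # Step 3: scale normalization -- divide by the norm of the
    # stacked 68 keypoint vectors (removes the face size).
    pts = pts / np.linalg.norm(pts)
    return pts, theta
```

With this sketch, the returned `theta` is exactly the tilt angle you would need later to de-normalize predicted landmarks back into image coordinates.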
@karanvivekbhargava Thanks for your answer. From your code, I figured out that for step 2 you used the eye keypoints, rather than the nose keypoints, to get the horizontal line. Is that what you intended to say?
I experimented with both, and it seems I ended up using the eyes instead of the nose. @hanezu, that seems to be spot on.
@hanezu @karanvivekbhargava Where in the code can I see the implementation of the rotation normalization? I can only find that run.py loads a pre-generated tilt angle and uses it to de-normalize the predicted landmarks...
I am interested in your project and have read your paper. In it, you process the keypoints to be independent of the face location, face size, and in-plane and out-of-plane face rotation. For example, you mean-normalize the 68 keypoints, project the keypoints onto a horizontal axis, and divide the keypoints by the norm of the 68 vectors. This processing is important, but the explanation in the paper is brief. Could you explain this process in detail, including the formulas and methods?
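To make the question concrete, here is how I would write those three steps as formulas, with $p_i$ the 68 raw keypoints. This is only my reading of the paper, which may differ from your implementation:

```latex
% Step 1: mean-normalization (removes face location)
\mu = \frac{1}{68}\sum_{i=1}^{68} p_i, \qquad q_i = p_i - \mu
% Step 2: rotation normalization (removes in-plane rotation),
% where \theta is the tilt angle of the chosen keypoint axis
R(-\theta) = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix},
\qquad r_i = R(-\theta)\, q_i
% Step 3: scale normalization (removes face size)
\hat{p}_i = \frac{r_i}{\lVert (r_1^\top, \dots, r_{68}^\top) \rVert}
```

Is this roughly what the paper means, or does the actual processing differ?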
Thanks a lot