NetEase-GameAI / Face2FaceRHO

The Official PyTorch Implementation for Face2Face^ρ (ECCV2022)
BSD 3-Clause "New" or "Revised" License

The given DECA headposes points don't match #15

Closed ligenjie closed 2 years ago

ligenjie commented 2 years ago

The official DECA model outputs 68 2D landmarks per face, but your own 3DMM produces 72 2D points. I believe data/landmark_embedding.json encodes your model's point selection. How can I map DECA's official 68 points to your 72 points? I ran test_case/source.jpg through the official DECA model and compared its 68 points with your project's 72 points for source.jpg, but I cannot find a mapping between the two.

NetEase-GameAI commented 2 years ago

Yes, landmark_embedding.json is our model's point selection. You cannot directly map the original 68 points to our 72 points, because all 72 points were re-selected on the model. We rewrote the vertices2landmarks function in FLAME.py so that it produces 72 points based on landmark_embedding.json.
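For readers unfamiliar with how such embeddings work: a landmark embedding typically stores, for each landmark, a triangle index and barycentric weights, so any set of surface points can be re-selected on the mesh. The sketch below is a minimal, hypothetical illustration of that idea (the function name and array layout are assumptions, not the actual contents of landmark_embedding.json or the repo's vertices2landmarks):

```python
import numpy as np

def select_landmarks(vertices, faces, lmk_face_idx, lmk_bary_coords):
    """Interpolate landmark positions on a mesh via a barycentric embedding.

    vertices:        (V, 3) mesh vertex positions
    faces:           (F, 3) triangle vertex indices
    lmk_face_idx:    (L,)   index of the triangle carrying each landmark
    lmk_bary_coords: (L, 3) barycentric weights inside that triangle
    """
    tri_verts = vertices[faces[lmk_face_idx]]              # (L, 3, 3)
    # weighted sum of the three triangle corners per landmark
    return np.einsum('lij,li->lj', tri_verts, lmk_bary_coords)

# toy mesh: a single triangle in the z=0 plane
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])

# one landmark placed at the triangle's centroid
lmk = select_landmarks(verts, faces,
                       np.array([0]),
                       np.array([[1/3, 1/3, 1/3]]))
```

Because the 72 points are defined this way directly on the mesh surface, there is no fixed correspondence to DECA's original 68-point set; the two selections are simply different embeddings.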

ligenjie commented 2 years ago

Yep, I get that. But I have another problem: I ran fitting.py several times on the same picture (test_case/source.jpg) with your DECA model, and each run gives different head poses and landmark values. Is there any randomness that causes these differences?

NetEase-GameAI commented 2 years ago

Hmm, there is no randomness in the fitting process. Perhaps you are using a JPEG image; JPEG decoders can differ slightly between packages. Try using a PNG image instead.
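A quick way to see why PNG is the safer choice: JPEG is lossy, so the decoded pixels are never exactly the saved pixels (and different libraries' IDCT implementations can decode the same JPEG to slightly different values), while PNG round-trips bit-exactly. A minimal sketch using Pillow, assuming default save settings:

```python
import io
import numpy as np
from PIL import Image

# a small random test image
rng = np.random.default_rng(0)
img = Image.fromarray(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))

def roundtrip(image, fmt):
    """Encode the image to an in-memory buffer and decode it back."""
    buf = io.BytesIO()
    image.save(buf, format=fmt)
    buf.seek(0)
    return np.asarray(Image.open(buf))

# JPEG is lossy: decoded pixels differ from the original
jpeg_changed = not np.array_equal(np.asarray(img), roundtrip(img, 'JPEG'))
# PNG is lossless: decoded pixels are bit-identical
png_identical = np.array_equal(np.asarray(img), roundtrip(img, 'PNG'))
```

Even sub-pixel differences at the input can shift the optimization in the fitting, which would explain the varying head poses and landmarks across runs with different image-loading backends.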

ligenjie commented 2 years ago

Thank u~