EESJGong opened this issue 11 months ago
Thanks to the author for the code. However, I have doubts about how the Lip-sync Evaluation metric in the paper is implemented. Is there any specific implementation code for this metric?
Hey, you can see https://github.com/EvelynFan/FaceFormer/issues/14. But I still have a question: which of the 5023 vertices in the VOCASET dataset are the lip vertices? How do I calculate the Lip-sync metric without knowing this?
Hi, have you figured out how to solve this problem (which of the 5023 VOCASET vertices are the lips, and how to calculate Lip-sync without knowing this)? Thank you!
You can refer to this CodeTalker issue: https://github.com/Doubiiu/CodeTalker/issues/64
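For anyone landing here later, below is a minimal sketch of how the lip vertex error (LVE) discussed in those issues is typically computed. It assumes the lip indices come from the `FLAME_masks.pkl` file distributed with the FLAME model (the `'lips'` region key), and that the predicted and ground-truth sequences are `(num_frames, 5023, 3)` arrays; the `.npy` file names are placeholders, and whether plain or squared L2 distance is taken per vertex varies between implementations, so double-check against the CodeTalker code linked above.

```python
import pickle

import numpy as np


def load_lip_indices(mask_path="FLAME_masks.pkl"):
    """Load lip-region vertex indices from the FLAME vertex masks file.

    Assumes the masks file from the FLAME website, which is a Python 2
    pickle (hence encoding='latin1') containing a 'lips' index array.
    """
    with open(mask_path, "rb") as f:
        masks = pickle.load(f, encoding="latin1")
    return masks["lips"]  # indices into the 5023 FLAME/VOCASET vertices


def lip_vertex_error(pred, gt, lip_idx):
    """Mean over frames of the maximal per-frame L2 error on lip vertices.

    pred, gt: arrays of shape (num_frames, 5023, 3).
    Note: some implementations use the squared L2 distance here instead.
    """
    # Per-vertex L2 distance on the lip region: shape (num_frames, num_lip_vertices)
    dist = np.linalg.norm(pred[:, lip_idx] - gt[:, lip_idx], axis=-1)
    # Worst lip vertex per frame, averaged over the sequence
    return dist.max(axis=1).mean()


if __name__ == "__main__":
    lip_idx = load_lip_indices()
    # Hypothetical file names; replace with your own saved vertex sequences.
    pred = np.load("pred_vertices.npy").reshape(-1, 5023, 3)
    gt = np.load("gt_vertices.npy").reshape(-1, 5023, 3)
    print("LVE:", lip_vertex_error(pred, gt, lip_idx))
```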