showlab / BYOC

[IEEE-VR 2024] Bring Your Own Character: A Holistic Solution for Automatic Facial Animation Generation of Customized Characters

Blendshape coefficients predicted by the model lead to broken facial expressions #4

Open P2Oileen opened 3 months ago

P2Oileen commented 3 months ago

Nice work! This project is very fascinating; however, I ran into some problems while trying to reconstruct a talking-head video.

Since I could not find any video results related to this work, I tried to reconstruct a video myself. However, the facial movements in the reconstructed video are quite strange. This is what I got; the lips seem to be stuck together:

https://github.com/user-attachments/assets/51cc034e-faab-42a2-924f-81397d38ae03

I wonder whether there is anything wrong in my pipeline:

  1. I used image2bs/inference.py to generate predicted_blendshape.csv (file attached).

  2. For the 50-dimensional blendshapes, I directly set the values of shape_1_channel through shape_50_channel to the corresponding CSV keys, taken from left to right (screenshot attached); the mapping is roughly as in the sketch below.
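Concretely, the mapping I apply looks roughly like this simplified pandas sketch (not my exact code; only the shape_N_channel naming comes from my setup, everything else is illustrative):

```python
# Sketch: enumerate the CSV columns left to right and map column i to
# shape_{i}_channel, printing each channel's value range as a sanity check.
import pandas as pd

df = pd.read_csv("predicted_blendshape.csv")

for i, name in enumerate(df.columns, start=1):
    values = df[name]
    print(f"shape_{i}_channel <- {name}: "
          f"min={values.min():.4f}, max={values.max():.4f}")
```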

Thank you for any possible help!

JosephPai commented 2 months ago

@ChenVoid Peng, can you help identify the issue?

ChenVoid commented 2 months ago

Thank you for your support! I apologize for the delayed response. I ran the code locally and processed the video you provided, and the results appear to be correct. Could you please provide more detailed steps or operations so that I can assist you in identifying the issue?

P2Oileen commented 2 months ago

@ChenVoid Thank you very much for your response and help! Perhaps we can first determine whether the issue is in the generated CSV file or in the visualization step. I have provided the blendshape CSV file above; here is the link: https://github.com/user-attachments/files/16388455/predicted_blendshape.csv . It is the result I obtained using the pipeline of this repo. Could you please try visualizing it?
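In case it helps, this is roughly the kind of check I have in mind on the CSV side: plotting a few mouth-related coefficient curves over time to see whether they stay nearly flat (which would explain the stuck lips). A quick sketch only; the lip/jaw column names below are guesses and may not match the actual CSV header:

```python
# Sketch: plot a few mouth-related blendshape curves from the predicted CSV.
# Column names are assumptions (ARKit-style) and may need adjusting.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("predicted_blendshape.csv")

# Candidate mouth-related channels; keep only those present in the file.
candidates = ["jawOpen", "mouthClose", "mouthFunnel", "mouthPucker"]
columns = [c for c in candidates if c in df.columns]

for c in columns:
    plt.plot(df[c].to_numpy(), label=c)

plt.xlabel("frame")
plt.ylabel("blendshape coefficient")
plt.legend()
plt.title("Mouth-related blendshape curves")
plt.show()
```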