Thank you for your attention. The preprocessing code is being prepared.
Hi, I have updated the preprocessing code. Thank you for your support.
Hi, I have made improvements to the demo code for replacing wav files. Simply replace the wav file named `video1_name/video1_name.wav` and the DeepSpeech feature in `video1_name/deepfeature32/video1_name.npy`. Afterward, you can test your new wav using the pose sequences from `video1_name`. For more details, please refer to this link. Enjoy~
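A minimal sketch of the file swap, assuming the layout described above (a `video1_name/` directory with a `deepfeature32/` subfolder); the `.npy` feature itself still has to be generated by the preprocessing code, and the input file names here are hypothetical:

```python
import shutil
from pathlib import Path

# Layout from the comment above; adjust to your checkout.
VIDEO_DIR = Path("video1_name")

def swap_audio(new_wav: str, new_ds_feature: str) -> None:
    """Copy a new wav and its DeepSpeech feature into the demo layout.

    The .npy must be the 32-dim DeepSpeech feature produced by the
    repo's preprocessing code; this sketch only copies files into place.
    """
    feature_dir = VIDEO_DIR / "deepfeature32"
    feature_dir.mkdir(parents=True, exist_ok=True)
    # Overwrite the demo wav and its matching feature array.
    shutil.copy(new_wav, VIDEO_DIR / f"{VIDEO_DIR.name}.wav")
    shutil.copy(new_ds_feature, feature_dir / f"{VIDEO_DIR.name}.npy")

if __name__ == "__main__":
    # Hypothetical input file names.
    swap_audio("my_speech.wav", "my_speech_ds32.npy")
```

After the swap, the demo should drive the `video1_name` pose sequences with the new audio.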
This looks like a neat update to styletalker/AVCT. Out of interest, have you worked with the TED384 or TikTok models? And how can I test the different emotional expressions with a custom video and image?
Thank you for your attention. Our work is based on FoMM/Face-vid2vid, and we have not tried the TED384 or TikTok models. For testing custom videos and images, please refer to our demo and preprocessing code.
This work is excellent, especially the tooth generation. So when I replace the wav, what exactly do I need to do?