-
Hi! Thanks for sharing this great work!
- The DiffSpeaker diffusion model outputs vertices corresponding to the audio at timestep `t`.
- Is there any way to extract the expression and other FLAME parameters from …
-
For instance, given a MetaHuman face mesh, can it be driven?
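Since FLAME's expression offsets are linear over the template mesh, one common route back from predicted vertices to parameters is a least-squares fit against the expression blendshape basis (the jaw and global pose would need a nonlinear solve instead). A minimal sketch, assuming you have the template and a flattened expression basis; all names and shapes here are illustrative, not DiffSpeaker's API:

```python
import numpy as np

def fit_expression(pred_verts, template, expr_basis):
    """Least-squares fit of expression coefficients so that
    template + (expr_basis @ coeffs) approximates pred_verts.
    pred_verts, template: (N, 3); expr_basis: (N*3, K)."""
    delta = (pred_verts - template).reshape(-1)          # (N*3,)
    coeffs, *_ = np.linalg.lstsq(expr_basis, delta, rcond=None)
    return coeffs

# Toy example with a random linear basis (stand-in for the FLAME basis).
rng = np.random.default_rng(0)
N, K = 100, 10
template = rng.normal(size=(N, 3))
basis = rng.normal(size=(N * 3, K))
true_coeffs = rng.normal(size=K)
verts = template + (basis @ true_coeffs).reshape(N, 3)
recovered = fit_expression(verts, template, basis)
print(np.allclose(recovered, true_coeffs))
```

Retargeting to a MetaHuman mesh is a different problem: the vertex topology does not match FLAME's, so you would need a correspondence or blendshape mapping rather than a direct parameter fit.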
-
-
## Symptoms
Training on VOCASET with the supplied [wav2vec2](https://github.com/theEricMa/DiffSpeaker/blob/main/scripts/diffusion/vocaset_training/diffspeaker_wav2vec2_vocaset.sh) script produces a stat…
-
Thank you for your great work. I would like to ask: how do I convert the GT pose from the BIWI dataset to an RGB image and create the biwi_test.csv file based on the ARKitFace dataset? Looking forward to your reply.
-
-
Hi, thank you for sharing the code. I could not find the function definitions for `vocaset_upper_face_variance` and `vocaset_mouth_distance` that are imported at line 153 in `alm.models.modeltype.dif…
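For reference while the official definitions are missing: metrics with these names usually measure (i) the temporal variance of upper-face motion and (ii) a maximal per-frame lip vertex error. A hedged sketch of plausible implementations; the vertex index sets below are placeholders, not the repo's actual region masks:

```python
import numpy as np

# Hypothetical vertex index sets; the real ones would come from a FLAME
# region mask shipped with VOCASET-style evaluation code.
UPPER_FACE_IDX = np.arange(0, 50)
MOUTH_IDX = np.arange(50, 80)

def upper_face_variance(pred_seq):
    """Temporal variance of upper-face motion, averaged over the region.
    pred_seq: (T, N, 3) vertex sequence."""
    region = pred_seq[:, UPPER_FACE_IDX, :]               # (T, |U|, 3)
    motion = np.linalg.norm(region - region.mean(axis=0), axis=-1)
    return motion.var(axis=0).mean()

def mouth_distance(pred_seq, gt_seq):
    """Mean over frames of the maximal L2 error over mouth vertices
    (i.e. a lip-vertex-error-style metric)."""
    err = np.linalg.norm(pred_seq[:, MOUTH_IDX] - gt_seq[:, MOUTH_IDX], axis=-1)
    return err.max(axis=1).mean()
```

A static sequence gives zero upper-face variance, and identical prediction and GT give zero mouth distance, which is a quick sanity check on any reimplementation.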
-
Hi,
I evaluated your model on the BIWI dataset using the code you provided:
- the GT mesh (from the .obj files)
- the predicted bounding boxes (from FAN, which you referenced in your paper)
- your code for the rotation, tran…
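Before comparing predicted vertices against a GT mesh, it is common to remove the rigid head pose first. Whether the repo's rotation/translation code does exactly this is not shown in the snippet above, but a standard Kabsch alignment looks like:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation + translation (Kabsch) mapping src onto dst.
    src, dst: (N, 3) corresponding point sets."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t

# Demo: a rotated + translated copy aligns back exactly.
rng = np.random.default_rng(1)
pts = rng.normal(size=(20, 3))
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
moved = pts @ R_true.T + np.array([1.0, 2.0, 3.0])
print(np.allclose(rigid_align(pts, moved), moved))
```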
-
When I preprocess the data, I get this:
Not found: BIWI_Process/data/F1/vert/e20/
Not found: BIWI_Process/data/F3/vert/e02/
Not found: BIWI_Process/data/F3/vert/e03/
Not found: BIWI_Process/data/…
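Those `Not found` messages typically mean the BIWI release you received does not ship every emotion sequence for every subject. A small sketch to audit which sequence directories are actually present before preprocessing; the root path and subject/sequence lists are illustrative, not the repo's configuration:

```python
import os

ROOT = "BIWI_Process/data"                       # adjust to your layout
SUBJECTS = ["F1", "F3"]                          # placeholder subject list
SEQUENCES = [f"e{i:02d}" for i in range(1, 5)]   # placeholder sequence list

missing = [
    os.path.join(ROOT, subj, "vert", seq)
    for subj in SUBJECTS
    for seq in SEQUENCES
    if not os.path.isdir(os.path.join(ROOT, subj, "vert", seq))
]
for path in missing:
    print("Not found:", path)
```

Skipping (or logging) the missing sequences up front is usually safer than letting the preprocessing script fail midway through.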
-
Thanks again for the code.
I have one more question about preprocessing the BIWI dataset.
I have requested and obtained the BIWI download link, which looks like this:
![スクリーンショット 2023-11-09 15…