-
Thank you for the intriguing project!
In the video samples you put in the samples folder, the face hardly moves; it just changes expressions.
But when I select a video with speech, the result comes …
-
Training progress: 4%|####2 | 1999/50000 [00:10
-
https://huggingface.co/spaces/camenduru-com/one-shot-talking-face
-
This article presents several claims and methods, which prompted me to write a comment expressing some of my doubts.
**1. Is this the first text-guided 2D-based talking face generation framework?**
There …
-
Hello!
I tried to generate the interpolation video with the pre-trained CelebA-HQ network and found that the reference.jpg that is saved before the video generation step does not feature style-mix…
-
### Expected behavior of the wanted feature
Hi, I would like to propose a handy feature that would make various effects more tweakable by the user. Effects like GLSL shaders, lavfi filters, built-in effects…
-
I am working to align the code with the VASA white paper:
https://github.com/johndpope/VASA-1-hack/blob/main/Net.py
I cherry-picked some code from here, which I believe builds on the MegaPortraits work:
htt…
-
Great work in the field of talking face generation. When will the code be made public?
-
Just wondering if there is any hope of this project being used to create a talking avatar that is audio-driven. I'm having fun with this project, but it would be nice to have talking heads.
-
I used nearly 40,000 high-quality talking-head clips to train wav2lip288x288. I found that during training, the generated lower half of the face is always blurry. I tried using GAN loss and perceptual loss, but it does…
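For context, one common trick for a persistently blurry mouth region (separate from adding GAN or perceptual terms) is to upweight the reconstruction loss on the lower half of the frame, where the lips are. The sketch below is a minimal, dependency-free illustration of that idea; the function name, the weighting scheme, and the default weight are assumptions for illustration, not the actual wav2lip training code:

```python
def lower_half_weighted_l1(pred, target, lower_weight=2.0):
    """L1 reconstruction loss with extra weight on the lower half of the face.

    pred / target: 2-D lists (H x W) of pixel values in [0, 1].
    lower_weight: multiplier applied to rows in the bottom half (assumed value).
    Returns the mean of the weighted absolute differences.
    """
    h = len(pred)
    total, count = 0.0, 0
    for i, (pred_row, target_row) in enumerate(zip(pred, target)):
        # rows at or below the vertical midpoint get the larger weight
        w = lower_weight if i >= h // 2 else 1.0
        for p, t in zip(pred_row, target_row):
            total += w * abs(p - t)
            count += 1
    return total / count


# toy example: a 4x4 all-zero prediction against an all-one target
pred = [[0.0] * 4 for _ in range(4)]
target = [[1.0] * 4 for _ in range(4)]
loss = lower_half_weighted_l1(pred, target)  # top rows weight 1, bottom rows weight 2
```

In a real pipeline this weighted term would be combined with the adversarial and perceptual losses rather than replace them; dividing by the sum of weights instead of the pixel count is an equally valid normalization choice.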