-
Hello,
I tried fine-tuning just the Generator. The image quality does improve; however, the lip movements go slightly out of sync.
Can I have the trained weights for the Discriminator too to s…
-
I tried the following command: "python inference.py --checkpoint_path checkpoints/wav2lip_gan.pth --face video.mp4 --audio audio.wav" but I'm getting this error:
"Traceback (most recent call la…
-
When training the Wav2Lip model, I tried to restore from the checkpoint you provided. I saw it was trained for around 250,000 steps. My model is still training and is only at 40,000+ steps, which did not …
-
Hello
This works really well. I was wondering if we could add eye blinks in some way; that would make the videos generated from a single image more realistic.
Any pointers would be love…
-
**Describe the bug**
While building the compressed mobile version of my model, only 67% of the tests succeed. Skipping tests seems to produce no result.
**To Reproduce**
"onnxruntime/build.sh" --c…
-
Using the example command for inference with pretrained models in the README, I get a ModuleNotFoundError, saying: `No module named 'numba.decorators'`, even though I installed all of the requirements…
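A commonly reported cause of this error is that newer numba releases (0.50 and later) removed the `numba.decorators` module, which older `librosa` versions still import. Assuming that is what is happening here, one frequently suggested workaround is to pin numba to an earlier release (the exact version below is an assumption, not something stated in this issue):

```shell
# Hypothetical workaround: pin numba to a release that still ships
# numba.decorators (0.48 is the version most often cited; adjust as needed)
pip install numba==0.48
```

Upgrading `librosa` to a version that no longer imports `numba.decorators` is the other common resolution.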
-
My first few tries were on some random videos, and I noticed that the mouth was not always placed in the correct spot.
So I did a few experiments to try to replicate the issue before posting it here.
##
I…
-
I have done all the preprocessing and started the training, but after 37 epochs `Real:` is 0. Is this normal, or have I done something wrong?
```
Starting Epoch: 31
L1: 0.1545860916376114, Sync: 0.0, …
-
`face_detect()` is by far the slowest part of `inference.py`, and it also buffers all its results. This means getting results is slow and RAM is wasted. You could make `face_detect()` a generator inst…
kousu updated 4 years ago
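A minimal sketch of the generator approach suggested above. The names (`detect_batched`, `fake_detect`) are hypothetical and stand in for the real `face_detect()` internals; the point is that detections are yielded one at a time, so the caller consumes results as they arrive and peak memory stays at a single batch:

```python
def detect_batched(frames, detect_batch, batch_size=16):
    """Yield detections lazily instead of buffering them all in a list.

    frames: sequence of video frames
    detect_batch: callable mapping a list of frames to their detections
    """
    for i in range(0, len(frames), batch_size):
        # Only one batch of frames/results is held in memory at a time
        yield from detect_batch(frames[i:i + batch_size])


# Stub detector standing in for the real face detector
def fake_detect(batch):
    return [f"bbox-for-frame-{f}" for f in batch]

# Results stream out as soon as each batch is processed
stream = detect_batched(range(4), fake_detect, batch_size=2)
print(next(stream))  # → 'bbox-for-frame-0'
```

Downstream code that currently iterates over the buffered list would loop over the generator instead, which also lets processing of early frames overlap with detection of later ones.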
-
Hi, this is my first time here. I didn't find anywhere else to post, so sorry if this is the wrong place.
I just discovered Wav2Lip and I love it. And I know the pre-learned model thingy we get to use was ma…