-
https://github.com/Rudrabha/Wav2Lip/blob/3baacdaf3f740256f0fc42b597b6a2eff6011b23/preprocess.py#L52
-
I am training the expert discriminator using my own dataset, but the loss is stuck above 0.69.
I am confused about whether this model can be used for `wav2lip_train`.
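For context on the 0.69 figure: a binary cross-entropy loss plateauing near 0.69 means the discriminator is outputting probabilities near 0.5, i.e. it has not learned to separate the classes yet, since -ln(0.5) = ln 2 ≈ 0.693. A quick check in plain Python (no Wav2Lip code assumed):

```python
import math

# BCE for a single example with true label 1 and predicted probability p:
#   loss = -ln(p)
# A discriminator that has learned nothing outputs p ≈ 0.5 for every pair,
# so its loss sits at -ln(0.5) = ln(2) ≈ 0.693 -- the "stuck at 0.69" plateau.
chance_loss = -math.log(0.5)
print(round(chance_loss, 4))  # → 0.6931
```

So a loss hovering at 0.69 is the chance-level baseline; the expert discriminator is usually expected to drop well below it before being used downstream.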
-
I am trying to train a model with my own data. I have the following directory structure:
```
Wav2Lip
|____training_data
|_______*.mp4 files
```
I've changed the line in pre…
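For a flat `training_data/*.mp4` layout like the one above, collecting the video list can be sketched with `glob`. This is an illustrative snippet, not Wav2Lip's actual preprocessing code; the `data_root` path and `filelist` name are assumptions, and the pattern should be adapted to wherever the videos actually live.

```python
import glob
import os

# Hypothetical: gather every .mp4 file sitting directly under training_data/.
# Wav2Lip's own preprocess.py expects its dataset layout, so the glob pattern
# is the part you would edit to match your directory structure.
data_root = os.path.join("Wav2Lip", "training_data")
filelist = sorted(glob.glob(os.path.join(data_root, "*.mp4")))
print(len(filelist), "videos found")
```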
-
```
from torch._C import *
ImportError: DLL load failed:
```
First of all, `pip install requirements.txt` is not working; it gives an error, so I manually installed all the packages.
Then, I have the CUDA toolkit…
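Two notes on the above. First, a requirements file is normally installed with the `-r` flag (`pip install -r requirements.txt`); passing the literal filename as a package name fails. Second, `DLL load failed` on `torch._C` usually indicates a torch build that does not match the interpreter or OS rather than a Wav2Lip problem. A small stdlib-only diagnostic (it does not need torch to run) that reports the details relevant to such a failure:

```python
import importlib.util
import platform
import sys

# Print the interpreter and OS details that matter for a torch DLL failure,
# and report whether a torch installation is visible to this interpreter at all.
print(sys.version.split()[0], platform.machine(), platform.system())
spec = importlib.util.find_spec("torch")
print("torch installed:", spec is not None)
```

If `torch installed: False` is printed, torch was installed into a different environment than the one running the script.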
-
Where can I find a video tutorial for training a model? Is there a community where users share models they have trained?
-
Why does that happen?
-
Hello,
I tried finetuning just the Generator. The image quality does increase; however, the lip movements go slightly out of sync.
Can I have the trained weights for the Discriminator too, to s…
-
I tried the following command: `python inference.py --checkpoint_path checkpoints/wav2lip_gan.pth --face video.mp4 --audio audio.wav`, but I'm getting this error:
"Traceback (most recent call la…
-
When training the Wav2Lip model, I tried to restore from the checkpoint you provided, and I saw that it was at around 250,000 steps. My own model is still training and is only at 40,000+ steps, which did not …
-
Hello,
This works really well. I was wondering if we could add eye blinks in some way; this would create more realism in videos generated from a single image.
Any pointers would be love…