-
How can we make Wav2Lip run in real time, for example from a WAV file, live microphone audio, or TTS output? Is this feasible with Wav2Lip? If so, please provide a script. This feature would be very useful. We provid…
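One building block for a streaming setup is chunking the mel spectrogram the same way Wav2Lip's `inference.py` does (80 mel frames per second, a 16-frame window per output video frame at 25 fps). Below is a minimal sketch, assuming those default constants, of a chunker that can be fed incrementally from a mic or TTS buffer; it is not the official implementation, only an illustration of the windowing:

```python
import numpy as np

MEL_STEP_SIZE = 16  # mel frames per video frame window (Wav2Lip default)
FPS = 25            # output video frame rate
MEL_FPS = 80.0      # mel frames per second of audio in Wav2Lip's audio code

def stream_mel_chunks(mel_buffer, next_frame_idx=0):
    """Return (frame_idx, chunk) pairs for every video frame whose full
    16-frame mel window is already present in mel_buffer.

    mel_buffer: (80, T) array of mel frames accumulated so far.
    Also returns the frame index to resume from, so the caller can keep
    appending audio (mic callback, TTS stream) and call this again."""
    chunks = []
    i = next_frame_idx
    while True:
        start = int(i * MEL_FPS / FPS)          # same indexing as inference.py
        if start + MEL_STEP_SIZE > mel_buffer.shape[1]:
            break                               # not enough audio yet; wait
        chunks.append((i, mel_buffer[:, start:start + MEL_STEP_SIZE]))
        i += 1
    return chunks, i
```

Each emitted chunk would then be batched with the current face crop and passed through the Wav2Lip model; the inherent latency is the 16-frame mel window (roughly 200 ms of audio) plus model inference time.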
-
-
Hello, I trained SyncNet and Wav2Lip until the loss dropped to between 0.25 and 0.3, but during actual inference I found that the character's lips do not move. May I ask what is the reason for thi…
-
![eyes](https://user-images.githubusercontent.com/10509740/142052862-d0996feb-e927-4391-956c-641545802836.PNG)
Using Wav2Lip, no matter which source video I use, the eyes come out blocky / pixelated / over-sha…
ill13, updated 4 months ago
-
First of all, I would like to extend my sincere thanks for providing such an excellent project. It has been incredibly useful and impressive.
I have encountered an issue while using the project. Ev…
-
The Colab Notebook example cannot be run.
Error message:

```
Using cuda for inference.
Reading video frames...
Number of frames available for inference: 210
Traceback (most recent call last):
  File "/c…
```
-
Thank you for your excellent work.
May I ask which indicator tells me when I can manually stop training hq_wav2lip_sam_train?
Or will the training process end automatically?
-
Your work is absolutely amazing; I love it. The problem is that 8B models are too limited. Would it be possible to add an option to connect to online models, for example via OpenRouter?…
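In case it helps the discussion: OpenRouter exposes an OpenAI-compatible chat-completions endpoint at `https://openrouter.ai/api/v1/chat/completions`, so hooking up an online model can be as small as building one authenticated POST request. A minimal stdlib-only sketch (the model id and prompt below are placeholders, not something this project ships):

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(api_key, model, prompt):
    """Build an OpenAI-compatible chat request for OpenRouter.
    `model` can be any model id OpenRouter serves (placeholder here)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it is one call away (not executed here, needs a real key):
# with urllib.request.urlopen(build_chat_request(key, "some/model-id", "Hi")) as r:
#     print(json.load(r))
```

The same request shape works against any OpenAI-compatible backend, so supporting OpenRouter would largely mean making the base URL and model id configurable.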
-
Hello,
Can you please share some sample output videos, especially for a Wav2Lip comparison?
Thanks
-
python3: can't open file '/home/test_wav2lip/Wav2Lip-HD/Real-ESRGAN/inference_realesrgan.py': [Errno 2] No such file or directory