-
I used HDTF for Wav2Lip-288 training: nearly 1,700,000 pictures, 16 hours.
My SyncNet eval loss is 0.3,
my L1 eval loss is currently 0.019051536196276822, and my sync eval loss is 0.18783933469512834…
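For anyone comparing numbers: the expert-discriminator "sync loss" in Wav2Lip-style training is a binary cross-entropy over the cosine similarity of the audio and video embeddings. A minimal pure-Python sketch of that loss (function and variable names are mine, not the repo's actual API; the real code operates on batched tensors):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def sync_bce_loss(audio_emb, video_emb, is_synced):
    # Wav2Lip-style sync loss: the cosine similarity is treated as the
    # probability that the audio and video windows are in sync, and a
    # binary cross-entropy is taken against the ground-truth label.
    # Clamped to avoid log(0); the real model's post-ReLU embeddings
    # keep the similarity non-negative.
    p = min(max(cosine_similarity(audio_emb, video_emb), 1e-7), 1 - 1e-7)
    target = 1.0 if is_synced else 0.0
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))
```

A well-trained expert discriminator drives this loss low on synced pairs and high on shuffled (out-of-sync) pairs, which is why the eval numbers above are worth tracking separately from the L1 reconstruction loss.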
-
/content/SadTalker-Video-Lip-Sync
Traceback (most recent call last):
  File "inference.py", line 8, in <module>
    from src.generate_batch import get_data
  File "/content/SadTalker-Video-Lip-Sync/src/ge…
-
We currently do nothing with morphs, which means lip sync won't work, among other things.
-
Thank you for open-sourcing such a great project. I carefully read your training method, following https://github.com/Rudrabha/Wav2Lip#training-on-datasets-other-than-lrs2. First,
I trained the expert …
-
# Issue
The filtered output stream has a lip-sync issue when ABR is not used.
-
**python -V
Python 3.10.10**
python speech_changer.py 2.wav els.mp4 -o out.mp4
E:\Program\python\python3.10.10\lib\site-packages\torchvision\transforms\functional_tensor.py:5: UserWarning: The…
-
**Hi, this is an amazing node**, so thank you for it!
I followed all the steps in your YouTube video to install CUDA.
I'm having a problem getting the node to use the GPU, so it is very slow.
I have an i9 com…
-
- clone the project
- clone https://github.com/DanielSWolf/rhubarb-lip-sync/ and rename **rhubarb-lip-sync** to **rhubarb**
- add speech.txt and speech.wav files to the cloned project
- run **npm …
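In case it helps anyone scripting against Rhubarb: with JSON export enabled (the `-f json` output format, as I understand the Rhubarb CLI docs; double-check against your version), the tool emits timed mouth cues that are straightforward to consume. A small sketch with made-up cue values:

```python
import json

# Example Rhubarb JSON output; the structure mirrors the documented
# `-f json` format, but the cue values and timings here are invented
# for illustration.
sample = """
{
  "metadata": {"soundFile": "speech.wav", "duration": 1.0},
  "mouthCues": [
    {"start": 0.00, "end": 0.30, "value": "X"},
    {"start": 0.30, "end": 0.65, "value": "B"},
    {"start": 0.65, "end": 1.00, "value": "A"}
  ]
}
"""

def mouth_shape_at(cues, t):
    # Return the mouth shape active at time t (in seconds).
    for cue in cues:
        if cue["start"] <= t < cue["end"]:
            return cue["value"]
    return "X"  # Rhubarb's rest (closed-mouth) shape

data = json.loads(sample)
shape = mouth_shape_at(data["mouthCues"], 0.5)  # → "B"
```

Sampling `mouth_shape_at` once per animation frame is usually all a playback loop needs.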
-
## Describe the bug
When exporting to VRM 0.x, the T-pose transformation is not applied to shape keys; if an avatar whose resting pose is an A-pose is selected for export, viseme animations will affect the arms.
…
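For anyone needing a workaround before a fix lands: baking the rest-pose change into the shape keys means transforming each shape-key delta by the same rotation applied to the bones during T-pose normalization, so viseme keys deform in the new rest pose's frame. A minimal pure-Python sketch of the idea (the data layout is hypothetical; real exporter code would use the mesh's actual shape-key vectors and per-bone rotations):

```python
import math

def rotate_z(v, degrees):
    # Rotate a 3-D vector (x, y, z) around the Z axis.
    r = math.radians(degrees)
    c, s = math.cos(r), math.sin(r)
    x, y, z = v
    return (c * x - s * y, s * x + c * y, z)

def bake_pose_into_deltas(deltas, degrees):
    # Apply the A-pose -> T-pose bone rotation to each shape-key delta,
    # so that the key still moves vertices in the correct direction
    # after the rest pose has been changed.
    return [rotate_z(d, degrees) for d in deltas]

# A delta pointing along +X, with a 90-degree arm rotation baked in,
# ends up pointing along +Y.
baked = bake_pose_into_deltas([(1.0, 0.0, 0.0)], 90.0)
```

Without this step, the unrotated deltas are added in the old pose's frame, which is exactly the arm movement described in the bug report.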
-
Thanks for the great work! Do you know how we can run audio-driven reenactment with a video as input instead of an image (i.e. sync the lips in that video)?