-
### Describe the bug?
Attempting to initialize SRanipal facial tracking with Vive Wireless is unstable at best. The culprit lies in the SRanipal initialization function: it returns an unhand…
-
**python -V**
Python 3.10.10
python speech_changer.py 2.wav els.mp4 -o out.mp4
E:\Program\python\python3.10.10\lib\site-packages\torchvision\transforms\functional_tensor.py:5: UserWarning: The…
-
- clone project
- clone https://github.com/DanielSWolf/rhubarb-lip-sync/ and rename **rhubarb-lip-sync** to **rhubarb**
- add speech.txt and speech.wav file into cloned project
- run **npm …
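The clone-and-rename steps above can be sketched in Python. The repo URL and the two folder names come from the list; the `workdir` layout and the `prepare_rhubarb` helper name are assumptions for illustration:

```python
import subprocess
from pathlib import Path

def prepare_rhubarb(workdir: str) -> Path:
    """Clone rhubarb-lip-sync and rename the checkout to 'rhubarb'."""
    work = Path(workdir)
    src = work / "rhubarb-lip-sync"
    dst = work / "rhubarb"
    if not src.exists():
        # Skipped if a checkout is already present in workdir.
        subprocess.run(
            ["git", "clone",
             "https://github.com/DanielSWolf/rhubarb-lip-sync/", str(src)],
            check=True,
        )
    src.rename(dst)  # rhubarb-lip-sync -> rhubarb
    return dst
```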
-
Hello All,
Approximately how much of the original code has been converted to C#? What will users experience from the conversion?
Will the system requirements be lower and the speed faster?
-
I’m working on a project to lip-sync videos in French using the wav2lip model. In every result, the software seems to add a third lip to the mouth, appearing every 2 seconds. Maybe it is be…
-
Thanks for the great work; do you know how we can run audio-driven reenactment but have the input be a video instead of an image? (i.e. sync the lips in that video)
-
Randomizing the lip-sync files could produce some strange and fun results, and shouldn't be too difficult with the feature being implemented in KIO.
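The randomization itself is tiny. A minimal sketch (not KIO's actual implementation; the `pick_lipsync_file` name and file list are assumptions for illustration):

```python
import random

def pick_lipsync_file(files, rng=None):
    """Return a randomly chosen lip-sync file from the available set."""
    if not files:
        raise ValueError("no lip-sync files available")
    # An injectable rng keeps the choice reproducible in tests.
    return (rng or random).choice(files)
```

Seeding the RNG (`random.Random(seed)`) makes a "fun" result reproducible, which is handy when someone wants to share a particular random outcome.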
-
Dear author, thanks for your wonderful effort in creating this repo.
I've pushed this model on the Replicate website so that people can try it out easily.
Here is the model link on Replicate: [h…
-
## Describe the bug
When exporting to VRM 0.x, the T-pose transformation is not applied to shape keys; if an avatar with an A-pose resting pose is selected for export, viseme animations will affect the arms.
…
-
Hi,
I trained the model, then:
```
python test.py --pose data/obama.json --ckpt pretrained/obama_eo.pth --aud data/intro_eo.npy --workspace trial_obama/ -O --torso
```
With new random audio, th…