-
Hi,
I have been experimenting with the code, but regardless of what I try with the expert lip-sync discriminator, the loss does not seem to go below 0.68. I am using the LRS2 (main) dataset. For some re…
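A plateau at this value is worth sanity-checking numerically. In Wav2Lip-style training, the expert discriminator (SyncNet) is trained with binary cross-entropy on audio/video similarity, and a model that always predicts ~0.5 (i.e. never learns to tell in-sync from off-sync pairs) produces a loss of exactly -ln(0.5) ≈ 0.693. A quick check, assuming BCE is the loss in use:

```python
import math

# BCE loss of a discriminator that outputs 0.5 for every pair,
# i.e. chance level: -ln(0.5). A training loss stuck near this
# value suggests the discriminator has not started learning.
chance_loss = -math.log(0.5)
print(round(chance_loss, 4))  # 0.6931
```

Since 0.68 sits essentially at this chance level, the discriminator is likely not learning at all, which usually points to data preprocessing (face crops, audio/video alignment) rather than hyperparameters.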
-
LOG
select -r POLYWINK_Louise ;
import auto_lip_sync
auto_lip_sync.start()
b'Setting up corpus information...\r\n'
b'Number of speakers in corpus: 1, average number of utterances per speaker: 1…
-
For one of the startup participants we need a lip-sync pipeline they want to use in their app. There is no ready-to-use Hugging Face pipeline, so we should create a custom one. There are several lip syn…
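Whatever model ends up in the pipeline, one stage that usually has to be written by hand is mapping time-aligned phonemes (e.g. from a forced aligner) to per-frame visemes. A minimal sketch of that stage; the phoneme labels, viseme grouping, and function names here are illustrative assumptions, not an existing API:

```python
# Hypothetical viseme grouping; a real pipeline would use the full
# phoneme inventory of its aligner and the viseme set of its renderer.
VISEME_MAP = {
    "AA": "A", "AE": "A", "AH": "A",
    "IY": "I", "IH": "I",
    "UW": "U", "UH": "U",
    "M": "closed", "B": "closed", "P": "closed",
}

def phonemes_to_visemes(aligned, fps=25):
    """aligned: list of (phoneme, start_sec, end_sec) tuples.

    Returns one viseme label per video frame, 'rest' where nothing is spoken.
    """
    if not aligned:
        return []
    n_frames = int(aligned[-1][2] * fps) + 1
    frames = ["rest"] * n_frames
    for phoneme, start, end in aligned:
        viseme = VISEME_MAP.get(phoneme, "rest")
        for f in range(int(start * fps), min(int(end * fps) + 1, n_frames)):
            frames[f] = viseme
    return frames

print(phonemes_to_visemes([("AA", 0.0, 0.1), ("M", 0.1, 0.2)], fps=10))
# ['A', 'closed', 'closed']
```

The frame sequence can then drive blendshapes or be consumed by a GAN-based renderer, depending on which of the lip-sync approaches is chosen.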
-
I have the lip-sync visemes A, E, I, U, and I want to enable them when iPhone tracking sync is enabled: only those, without hybrid blending with iPhone tracking.
I've tried different variations but still couldn't se…
-
How to solve: RuntimeError: Unable to open C:\Python-Project\数字人lip\SadTalker-Video-Lip-Sync\checkpoints\shape_predictor_68_face_landmarks.dat
fssyy updated
8 months ago
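A common cause of this error is that the dlib 68-landmark model is not bundled with the repository and was never downloaded into `checkpoints/`. A minimal sketch to fetch it, assuming the standard dlib download location for the model archive:

```python
import bz2
import os
import urllib.request

# Standard dlib distribution of the 68-point landmark model (bz2-compressed).
URL = "http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2"
MODEL = os.path.join("checkpoints", "shape_predictor_68_face_landmarks.dat")

def ensure_model(path=MODEL, url=URL):
    """Download and decompress the landmark model if it is missing."""
    if os.path.exists(path):
        return path
    os.makedirs(os.path.dirname(path), exist_ok=True)
    archive = path + ".bz2"
    urllib.request.urlretrieve(url, archive)
    with bz2.open(archive, "rb") as src, open(path, "wb") as dst:
        dst.write(src.read())
    os.remove(archive)
    return path
```

After the `.dat` file is in place at the path the traceback names, the RuntimeError should no longer occur.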
-
Hello, I recently came across this repo while looking for Suno API access and was very happy to find it. I wanted to use Suno to have my robots generate songs based on a prompt you speak to them. I…
-
In [this demo](https://github.com/magic-research/magic-animate/blob/main/assets/teaser/t3.gif), we can see the girl moving her mouth "lip syncing".
However, since DensePose does not contain any…
-
Thank you all for your interest in our open-source work MuseTalk.
We have observed that the training code holds significant value for our community. With this in mind, we are pleased to share an in…
-
I wonder if this technique could be applied to syncing facial animations with audio, which could potentially adapt to whispering vs. shouting.
-
**Describe the bug**
Audio and video are not in sync; there is a ~200-300 ms delay between them.
**To Reproduce**
Steps to reproduce the behavior:
tried 2.78 - delay
tried 3.0.0 alpha 6 - no delay
tried differ…