Closed: FuouM closed this 3 months ago
Thanks for your work. Can you explain the benefit of this change?
Sometimes the video may be 60 fps or 25 fps, which makes the Wav2Lip output go out of sync if you don't force the input video's framerate to 30 fps. The change does not affect any existing workflows and makes the node more robust.
The original code should detect the frame rate automatically.
Actually no. As I found, the line `mel_idx_multiplier = 80./30` always splits the mel spectrogram into chunks at a fixed rate of 30 per second; this is presumably the "framerate" at which the lip-syncing is done. Also, since the input is just a sequence of images, the images hold no fps information, so you need the additional fps input.
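To illustrate the point above, here is a minimal sketch of how that multiplier maps video frames to mel-spectrogram slices. It assumes 80 mel frames per second of audio and a chunk size of 16 mel frames per video frame (values taken from the Wav2Lip inference code); the helper `mel_chunk_starts` is hypothetical, written only to show why a hard-coded 30 desyncs at other framerates.

```python
MEL_STEP_SIZE = 16        # mel frames fed to the model per video frame (Wav2Lip default)
MELS_PER_SECOND = 80.0    # mel frames produced per second of audio

def mel_chunk_starts(num_mel_frames, num_video_frames, fps=30):
    """Start index into the mel spectrogram for each video frame.

    If fps does not match the real video framerate, these indices
    drift relative to the frames, producing the out-of-sync result.
    """
    mel_idx_multiplier = MELS_PER_SECOND / fps  # 80./30 in wav2lip_node.py
    starts = []
    for i in range(num_video_frames):
        start = int(i * mel_idx_multiplier)
        # Clamp the last chunks so they stay inside the spectrogram.
        if start + MEL_STEP_SIZE > num_mel_frames:
            start = num_mel_frames - MEL_STEP_SIZE
        starts.append(start)
    return starts

# At the hard-coded 30 fps, frame i starts at mel index int(i * 80/30):
print(mel_chunk_starts(100, 5))          # → [0, 2, 5, 8, 10]
# A true 60 fps video consumes audio half as fast, so indices diverge:
print(mel_chunk_starts(100, 5, fps=60))  # → [0, 1, 2, 4, 5]
```

Forcing the input video to 30 fps (or passing the real fps through the new input) keeps the audio and frame indices aligned.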
Thanks. I will merge it!
The default is 30, as indicated in `wav2lip_node.py`: `mel_idx_multiplier = 80./30`