Rudrabha / Wav2Lip

This repository contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For an HD commercial model, please try out Sync Labs:
https://synclabs.so

How should I set it? #235

Closed DWCTOD closed 3 years ago

DWCTOD commented 3 years ago

First of all, thanks for this awesome project. I tried to retrain the model, but my results are not very good. Because of the SyncNet model, I'm confused about how long the training videos should be. I found that each video in the LRS2 dataset is about 1–5 s long, but my dataset is not like this. What does that lead to? Can you give me a hand? Thanks, waiting for your reply.

prajwalkr commented 3 years ago

You can crop your videos into short segments, but you should have a sufficient number of hours in total (LRS2 is 29 hours). The data also needs to be sync-corrected.
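The cropping step suggested above can be sketched as follows. This is a minimal, hypothetical helper (the names `split_into_segments` and `speaker.mp4` are illustrative, not part of the repository): it only builds ffmpeg commands for fixed-length LRS2-style clips so the plan can be inspected before anything runs.

```python
import subprocess  # would be used to actually run the commands

def split_into_segments(video_path, duration, segment_len=5.0):
    """Build ffmpeg commands that cut `video_path` into fixed-length clips.

    `duration` is the total video length in seconds (obtainable with
    ffprobe). Returns (start, end, command) tuples; nothing is executed
    here, so the plan can be reviewed first.
    """
    commands = []
    start = 0.0
    idx = 0
    while start < duration:
        end = min(start + segment_len, duration)
        out = f"segment_{idx:04d}.mp4"
        # Re-encoding (no `-c copy`) keeps the cuts frame-accurate, which
        # matters when audio/video sync is later checked per segment.
        cmd = ["ffmpeg", "-ss", f"{start:.3f}", "-to", f"{end:.3f}",
               "-i", video_path, out]
        commands.append((start, end, cmd))
        start = end
        idx += 1
    return commands

# Example: a 23-second video yields four 5 s clips plus one 3 s remainder.
plan = split_into_segments("speaker.mp4", duration=23.0, segment_len=5.0)
for s, e, cmd in plan:
    print(f"{s:5.1f}-{e:5.1f}s  ->  {cmd[-1]}")
```

To execute the plan, each command could be passed to `subprocess.run(cmd, check=True)`. Note that this only crops; sync correction (aligning audio to video, e.g. with a SyncNet-style offset check) is a separate step.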

DWCTOD commented 3 years ago

> You can crop your videos into short segments, but you should have a sufficient number of hours in total (LRS2 is 29 hours). The data also needs to be sync-corrected.

Thanks. I have tried cropping the training data, and the SyncNet model trains OK. But Wav2Lip doesn't fit well; maybe my training data is too small.

Rudrabha commented 3 years ago

Please re-open the issue if needed.

Quanta-of-solitude commented 2 years ago

It's been a year; I would like to know about your results. Can you share them? @DWCTOD :)