joonson / syncnet_python

Out of time: automated lip sync in the wild

How to use this repo? #31

Open · dipam7 opened this issue 4 years ago

dipam7 commented 4 years ago

If I understand correctly, after downloading the models, I have to run:

python run_pipeline.py --videofile /path/to/video.mp4 --reference name_of_video --data_dir /path/to/output
python run_syncnet.py --videofile /path/to/video.mp4 --reference name_of_video --data_dir /path/to/output

and the sync-corrected output will be stored in output_dir/pyavi/video.avi

Is this correct?

joonson commented 4 years ago

No, the output is not sync-corrected. It just gives you an offset and active speaker detection labels.
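
For anyone scripting this step, here is a minimal sketch of capturing that offset programmatically. It assumes run_syncnet.py prints a line containing "AV offset" followed by an integer, which current versions of the repo appear to do; the exact wording may differ between versions.

```python
import re
import subprocess

# Run the second pipeline stage and capture its console output.
proc = subprocess.run(
    ["python", "run_syncnet.py",
     "--videofile", "/path/to/video.mp4",
     "--reference", "name_of_video",
     "--data_dir", "/path/to/output"],
    capture_output=True, text=True, check=True,
)

# Assumption: the script reports something like "AV offset: -3".
match = re.search(r"AV offset[:\s]+(-?\d+)", proc.stdout)
if match:
    offset_frames = int(match.group(1))  # offset measured in video frames
    print("AV offset:", offset_frames, "frames")
```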

dipam7 commented 4 years ago

How do I use this repo to sync-correct my videos?

dipam7 commented 4 years ago

Also, the offsets.txt mentioned in the README is not being generated for me; I only get 4 .pckl files inside output_dir/pywork/reference_name/.
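
If you want to see what was actually written, a quick inspection sketch follows. The four files are likely the scene/track/face detections from run_pipeline.py plus the active speaker scores from run_syncnet.py, but the exact file names are version-dependent.

```python
import glob
import pickle

# List and peek into whatever pickle files the pipeline produced.
for path in sorted(glob.glob("/path/to/output/pywork/name_of_video/*.pckl")):
    with open(path, "rb") as f:
        data = pickle.load(f)
    size = len(data) if hasattr(data, "__len__") else "n/a"
    print(path, type(data).__name__, size)
```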

houdajun commented 3 years ago

What is --reference name_of_video? I only have a single video clip to sync, and I am not sure what to specify for this input.

houdajun commented 3 years ago

> No, the output is not sync-corrected. It just gives you an offset and active speaker detection labels.

Do we need to sync it manually using the AV offset? How can we do it?
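
Since the repo only measures the offset, applying it is left to the user. Below is a minimal sketch of shifting the audio track by the reported offset with ffmpeg, assuming 25 fps video (the frame rate the pipeline works at) and using -itsoffset to delay or advance the audio; the sign convention determines which direction to shift, so verify it on a clip with a known error before batch use.

```python
import subprocess

def shift_audio(video_in, video_out, offset_frames, fps=25.0):
    """Remux video_in with its audio shifted by offset_frames / fps seconds."""
    delay = offset_frames / fps
    subprocess.run([
        "ffmpeg", "-y",
        "-i", video_in,            # input 0: take the video stream as-is
        "-itsoffset", str(delay),  # shift the timestamps of the next input
        "-i", video_in,            # input 1: take the (shifted) audio stream
        "-map", "0:v", "-map", "1:a",
        "-c:v", "copy", "-c:a", "aac",
        video_out,
    ], check=True)

# e.g. an offset of 3 frames at 25 fps delays the audio by 120 ms
shift_audio("video.mp4", "video_synced.mp4", offset_frames=3)
```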