joonson / syncnet_python

Out of time: automated lip sync in the wild
MIT License

The loss when training SyncNet is not going down #15

Closed shz0519 closed 3 years ago

shz0519 commented 5 years ago

Hello, thank you for the excellent work and publicly available code. I'm trying to use the mvlrs_v1 dataset to train SyncNet, but the loss keeps oscillating.

Since most videos in mvlrs_v1 are short, I randomly shift the audio by up to 10 frames to generate synthetic false audio-video pairs.

1. Which dataset are you using: LRW, LRS2, or LRS3?
2. Do the false pairs have to be shifted by up to 2 s?
3. Can you show me your training log?

Thank you!
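The shifting strategy described above can be sketched roughly as follows. This is a hypothetical helper (not code from this repo), assuming video and audio features are frame-aligned arrays; it draws a random nonzero offset of up to `max_shift` frames and slides the audio relative to the video to create an out-of-sync pair:

```python
import numpy as np

def make_false_pair(video_feats, audio_feats, max_shift=10, rng=None):
    """Create an out-of-sync (false) pair by shifting the audio
    relative to the video by a random nonzero number of frames.
    Hypothetical sketch, not the repository's actual implementation."""
    rng = rng or np.random.default_rng()
    n = min(len(video_feats), len(audio_feats))
    # pick a nonzero shift in [-max_shift, max_shift]
    shift = 0
    while shift == 0:
        shift = int(rng.integers(-max_shift, max_shift + 1))
    if shift > 0:
        # audio runs ahead of the video by `shift` frames
        v, a = video_feats[:n - shift], audio_feats[shift:n]
    else:
        # audio lags behind the video by |shift| frames
        v, a = video_feats[-shift:n], audio_feats[:n + shift]
    return v, a, shift
```

With short clips, small shifts like this may produce "false" pairs that are still nearly in sync, which could explain an oscillating loss; the paper's evaluation considers offsets of up to several seconds.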

prajwalkr commented 4 years ago

Hello,

Were you able to solve this issue? I am facing a similar problem.

joonson commented 4 years ago

You can train the model using https://github.com/joonson/syncnet_trainer. Setting alphaI=0 will make it train the lip sync-only model.
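If alphaI weights an identity objective alongside the sync objective in the joint trainer, then setting it to zero leaves only the sync term. A minimal sketch of that assumed weighting scheme (names hypothetical, not taken from syncnet_trainer's source):

```python
def joint_loss(sync_loss, identity_loss, alphaI=1.0):
    """Assumed combination of the two training objectives:
    with alphaI=0 the identity term vanishes, so only the
    lip-sync objective drives the gradients. Hypothetical sketch."""
    return sync_loss + alphaI * identity_loss
```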