Hello :)
I have a question about training your own SyncNet.
As far as I know, the SyncNet loss is difficult to get to converge, and many users have raised this in the Issues tab of the Wav2Lip GitHub repository.
Could you explain the architecture or training scheme of this model's SyncNet in more detail?
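For reference, here is the loss as I understand it from `color_syncnet_train.py` in the Wav2Lip repo: a BCE loss over the cosine similarity of the audio and video embeddings. This is a minimal sketch (variable names and the toy usage are mine), so please correct me if I've misread it:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# BCE over the cosine similarity of the audio/video embeddings,
# as in Wav2Lip's color_syncnet_train.py (sketch; names are mine).
bce = nn.BCELoss()

def cosine_sync_loss(audio_emb, video_emb, y):
    # audio_emb, video_emb: (B, D) embeddings from the two encoders
    # y: (B, 1) labels, 1.0 for in-sync pairs, 0.0 for off-sync pairs
    d = F.cosine_similarity(audio_emb, video_emb)  # (B,)
    # BCELoss expects inputs in [0, 1]; with non-negative (post-ReLU)
    # embeddings the cosine similarity stays in that range.
    return bce(d.unsqueeze(1), y)

# Toy usage with random non-negative embeddings:
a = torch.rand(8, 512)
v = torch.rand(8, 512)
y = torch.cat([torch.ones(4, 1), torch.zeros(4, 1)])
print(cosine_sync_loss(a, v, y).item())
```

Is this also what your model uses, or does the training scheme differ?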
Thanks! :)