Closed — QUTGXX closed this issue 3 years ago
The training loss is nearly 0.02 and the eval loss is about 0.4.
In our case, the training and eval losses are almost the same. What dataset are you training on?
The dataset is one I made myself. Sorry, I made a mistake: the sync loss is 0.4, not the eval loss.
Have you trained the expert discriminator on your own dataset? There are other things to consider as well, like sync-correcting your dataset and the FPS of the videos.
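Since a fixed FPS keeps the video frames aligned with the audio windows the model expects, here is a minimal sketch of re-encoding clips to 25 FPS with ffmpeg. The `data/` directory and the `_25fps` suffix are placeholder assumptions, not paths from this thread.

```shell
# Sketch, assuming your clips sit under data/ as .mp4 files.
# Re-encode each clip to a constant 25 FPS before preprocessing.
fps=25
for f in data/*.mp4; do
  out="${f%.mp4}_${fps}fps.mp4"          # e.g. data/clip.mp4 -> data/clip_25fps.mp4
  ffmpeg -y -loglevel error -i "$f" -r "$fps" "$out"
done
```

This only fixes the frame rate; audio-video sync offsets still need to be corrected separately (e.g. with a SyncNet-based tool) as the repo's training notes describe.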
Yes, I did. The eval loss of the expert discriminator is around 0.4 after training it for 30,000+ steps, and the training loss is nearly 0.2. The FPS of the videos is 25.
Please read this first: https://github.com/Rudrabha/Wav2Lip#training-on-datasets-other-than-lrs2
@QUTGXX how do I contact you?
Emm, what do you want to ask? If you want, you can contact me at zszsgxx@gmail.com.
When I trained the Wav2Lip model, I tried to restore the model you provided and saw it was at around 250,000 steps. My own model is still training at 40,000+ steps and did not restore the model you provided. The training loss is nearly 0.02 and the sync loss is about 0.4. I then ran 'inference.py' with the latest checkpoint I saved, and the result is not good: both the blending and the lip movements look poor. Should I continue training?
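For resuming from the released checkpoint rather than training from scratch, a sketch of the training invocation might look like the following. All paths here are placeholders, and the exact flag names should be checked against `wav2lip_train.py` in the repo; `--checkpoint_path` is assumed to be the flag that restores a saved model.

```shell
# Sketch: resume Wav2Lip training from the released checkpoint.
# Placeholder paths -- substitute your own preprocessed data and
# checkpoint locations.
python wav2lip_train.py \
  --data_root preprocessed_data/ \
  --checkpoint_dir checkpoints/ \
  --syncnet_checkpoint_path checkpoints/lipsync_expert.pth \
  --checkpoint_path checkpoints/wav2lip.pth
```

Starting from the pretrained weights usually converges far faster than the 40,000 steps from scratch described above.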