primepake / wav2lip_288x288

MIT License
530 stars 136 forks

train wav2lip_288x288/wloss_hq_wav2lip_train.py loss #82

Closed SuperGoodGame closed 7 months ago

SuperGoodGame commented 7 months ago

When I use it to train, some of the losses become negative numbers. I am sure I am using the correct dataset, the same one I used to train the sync expert net, and my sync expert net loss reaches about 0.25. Then the loss becomes zero. Thank you very much if you could give me some advice.

SuperGoodGame commented 7 months ago

L1: 0.15509054481983184, Sync: 0.0, Percep: -0.13693829834461213 | Fake: 0.13693829834461213, Real: -0.13740962297068787: : 25it [02:03, 2.35s/it]
wandb: Network error (TransientError), entering retry loop.
L1: 0.08430848828068487, Sync: 0.0, Percep: -0.006339736034472784 | Fake: 0.006339736034472784, Real: -0.0063615566190133276: : 540it [22:14, 2.47s/it]
Starting Epoch: 1
L1: 0.0622877272275778, Sync: 0.0, Percep: 0.0 | Fake: 0.0, Real: 0.0: : 78it [03:25, 2.30s/it]
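For context on the negative values: the script name `wloss_hq_wav2lip_train.py` suggests a Wasserstein-style adversarial objective, and under that formulation negative Percep/Fake/Real values are not necessarily a bug, because the critic outputs an unbounded score rather than a probability. Below is a minimal, hedged sketch (plain Python, not the repo's actual code) of how the signs work out:

```python
def mean(xs):
    return sum(xs) / len(xs)

def critic_loss(real_scores, fake_scores):
    # Wasserstein critic wants real scores high and fake scores low,
    # so it minimizes mean(fake) - mean(real); this is negative
    # whenever the critic already separates real from fake.
    return mean(fake_scores) - mean(real_scores)

def generator_loss(fake_scores):
    # Generator wants the critic to score fakes high,
    # so it minimizes -mean(fake); negative once fakes score > 0.
    return -mean(fake_scores)

# Illustrative numbers only (not taken from the log above):
print(critic_loss(real_scores=[0.8, 0.6], fake_scores=[-0.2, 0.0]))
print(generator_loss(fake_scores=[0.3, 0.5]))
```

So a negative "Percep"/"Real" term can simply mean the critic is winning; the more worrying symptom in the log is that every adversarial term collapses to exactly 0.0 at epoch 1, which points to the discriminator or its gradients dying rather than to the sign of the loss.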

SuperGoodGame commented 7 months ago

I train the model with more than one NVIDIA 3090.

ghost commented 7 months ago

The most likely cause of this is probably your dataset.