taki0112 / UGATIT

Official Tensorflow implementation of U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation (ICLR 2020)
MIT License

What should the losses look like when training is in a good direction? #101

Open gdwei opened 3 years ago

gdwei commented 3 years ago

Hi guys, I don't have much experience training a GAN, so I'd appreciate it if someone could tell me how the losses should change when the network is training normally. That would definitely save me a lot of time: instead of waiting for training to finish, I could stop a run early once I can tell it won't give good results.

Thanks in advance.

Currently in my training, the generator loss slowly goes down from around 1500 to about 50, but the discriminator loss does not change much; it just keeps jittering around.

Hope someone can give me some direction for improving the training. Should I modify the weights of the different losses, or change the network?
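For what it's worth, here is a minimal, self-contained sketch (not the repo's code) of how the generator objective in U-GAT-IT-style training combines its weighted terms. The weight names mirror the `--adv_weight` / `--cycle_weight` / `--identity_weight` / `--cam_weight` flags in `main.py`, but please verify the actual defaults there for your version; the example loss values below are made up.

```python
# Sketch only: weighted sum of generator loss terms, with weight names
# borrowed from the repo's CLI flags (verify defaults in main.py).
def total_generator_loss(adv_loss, cycle_loss, identity_loss, cam_loss,
                         adv_weight=1.0, cycle_weight=10.0,
                         identity_weight=10.0, cam_weight=1000.0):
    """Combine the generator loss terms into a single scalar."""
    return (adv_weight * adv_loss
            + cycle_weight * cycle_loss
            + identity_weight * identity_loss
            + cam_weight * cam_loss)

# Example with hypothetical per-term values: the heavily weighted CAM and
# cycle terms dominate early on, which is one reason the raw G_loss can
# start in the hundreds or thousands and then fall as those terms shrink.
print(total_generator_loss(adv_loss=1.0, cycle_loss=0.5,
                           identity_loss=0.3, cam_loss=1.2))
```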

[Two plots attached: the G_loss and D_loss curves. Y axis: G_loss or D_loss; X axis: number of steps/iterations.]
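As a rough way to judge the training direction from logged curves like these, one can smooth the per-iteration losses with a moving average and look at the trend: G_loss should drift downward over thousands of iterations, while D_loss usually oscillates around a fairly constant level rather than collapsing to zero or exploding. A minimal sketch (the log file names and one-value-per-line format are assumptions; adapt to however you log the losses):

```python
import numpy as np

def moving_average(values, window=100):
    """Running mean to smooth a noisy loss curve."""
    values = np.asarray(values, dtype=np.float64)
    if len(values) < window:
        return values
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="valid")

# Hypothetical logs: one loss value per line, one value per iteration.
g_loss = np.loadtxt("g_loss.txt")
d_loss = np.loadtxt("d_loss.txt")

g_smooth = moving_average(g_loss)
d_smooth = moving_average(d_loss)

# Compare early vs. late smoothed values to see the overall trend.
print("G_loss smoothed: %.2f -> %.2f" % (g_smooth[0], g_smooth[-1]))
print("D_loss smoothed: %.2f -> %.2f" % (d_smooth[0], d_smooth[-1]))
```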