igul222 / improved_wgan_training

Code for reproducing experiments in "Improved Training of Wasserstein GANs"
MIT License
2.35k stars 668 forks

Critic loss curve #90

Open CBD88 opened 4 years ago

CBD88 commented 4 years ago

Hi,

1. Should the critic loss curve that is expected to go to 0 include the gradient penalty term, or exclude it?
2. What should the behavior of the gradient penalty itself be (decreasing toward 0, or something else)?
3. Is the result the same if we backpropagate the gradient penalty on its own versus together with the discriminator loss, as below?

   (i) Separately:
   `gradient_penalty.backward(retain_graph=True)`

   (ii) Together with the discriminator loss:
   `loss_D = (-loss_real + loss_fake) + gradient_penalty`
   `loss_D.backward()`
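Regarding question (3): PyTorch's autograd accumulates gradients additively into `.grad`, so the two variants produce identical gradients. A minimal toy sketch (all tensors and loss terms here are hypothetical stand-ins, not the repo's actual critic or penalty):

```python
import torch

def make_losses(w):
    # Toy stand-ins for the three loss terms in the question.
    torch.manual_seed(1)
    x = torch.randn(4, 3)
    loss_real = (x @ w).mean()
    loss_fake = (x @ w).pow(2).mean()
    gradient_penalty = (w.pow(2).sum() - 1).pow(2)
    return loss_real, loss_fake, gradient_penalty

# Variant (i): backprop the penalty separately, then the critic loss.
torch.manual_seed(0)
w1 = torch.randn(3, 1, requires_grad=True)
loss_real, loss_fake, gradient_penalty = make_losses(w1)
gradient_penalty.backward(retain_graph=True)
(-loss_real + loss_fake).backward()

# Variant (ii): one combined loss, single backward pass.
w2 = w1.detach().clone().requires_grad_(True)
loss_real, loss_fake, gradient_penalty = make_losses(w2)
loss_D = (-loss_real + loss_fake) + gradient_penalty
loss_D.backward()

# Gradients accumulate additively, so both variants match.
print(torch.allclose(w1.grad, w2.grad))  # True
```

The combined form (ii) is the common convention, since a single `backward()` call avoids needing `retain_graph=True` and is slightly cheaper.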