soumith / ganhacks

starter from "How to Train a GAN?" at NIPS2016

Feasible loss simplification? #33

Open florian-boehm opened 6 years ago

florian-boehm commented 6 years ago

Hello, I wonder whether the following simplifications lead in practice to the same result as the original loss functions:

       [images: the proposed simplified loss functions (not recoverable from the page)]

Thank you very much for your help!

Florian

ljuvela commented 6 years ago

That starts to look a lot like the Wasserstein GAN (see e.g. https://arxiv.org/abs/1704.00028). They also propose an additional loss term to limit the gradient magnitudes in D.
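To make the comparison concrete, here is a minimal sketch (not from the thread; function names and the toy linear critic are my own illustration) of the Wasserstein critic objective and the gradient-penalty term from the linked paper. For a linear critic c(x) = w·x + b, the gradient with respect to the input is just w, so the penalty can be written in closed form:

```python
import numpy as np

def wgan_critic_loss(c_real, c_fake):
    # Wasserstein critic loss (to minimize): E[c(fake)] - E[c(real)].
    # c_* are raw, unbounded critic scores, not probabilities.
    return np.mean(c_fake) - np.mean(c_real)

def gradient_penalty_linear(w, lam=10.0):
    # WGAN-GP penalty lam * (||grad_x c(x)|| - 1)^2.
    # For the toy critic c(x) = w.x + b the input gradient is w everywhere,
    # so the penalty depends only on ||w||.
    return lam * (np.linalg.norm(w) - 1.0) ** 2

# Toy check: real samples score higher than fakes -> negative critic loss.
loss = wgan_critic_loss(np.array([2.0, 2.0]), np.array([1.0, 1.0]))
gp_ok = gradient_penalty_linear(np.array([1.0, 0.0]))   # ||w|| = 1 -> 0 penalty
gp_bad = gradient_penalty_linear(np.array([2.0, 0.0]))  # ||w|| = 2 -> penalized
```

In a real network the input gradient is obtained with autograd at points interpolated between real and fake samples; the closed form above only holds for this linear toy.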

florian-boehm commented 6 years ago

Thank you for pointing this paper out to me. If I have understood it correctly, one point is worth mentioning:

In the case of WGAN, the activation function in the last layer of the discriminator should be linear, and because the output can no longer be interpreted as a probability, the discriminator is then called a critic.
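The distinction above can be sketched in a few lines (an illustration of my own, not code from the thread): a GAN discriminator squashes its final score through a sigmoid into (0, 1) so it reads as P(real), while a WGAN critic uses the raw linear score directly.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# The same pre-activation scores from the network's last linear layer.
score = np.array([-2.0, 0.0, 3.0])

d_out = sigmoid(score)  # discriminator: probabilities in (0, 1)
c_out = score           # critic: unbounded real-valued scores
```

Everything upstream of the last layer can be identical; only the output interpretation (probability vs. score) and the loss change.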