osh / KerasGAN

A couple of simple GANs in Keras

Working of GANs #7

Open erilyth opened 7 years ago

erilyth commented 7 years ago

I'm a little new to Generative Adversarial Networks and was wondering why the samples from iPython notebook 2 are worse than those from iPython notebook 1. Another question I had: when we train the generator, shouldn't we train it in the opposite manner to how we train the discriminator? (i.e., use the opposite output labels when training the generator, but the correct labels when training the discriminator). Thanks!

engharat commented 7 years ago

As far as I understand, the correct labels are needed for training the generator too. To update the generator's weights in a meaningful way, we need a loss that can be backpropagated through the whole GAN. That loss is obtained by feeding the generator's output into the discriminator; the discriminator's output then gives us a loss value that we can backpropagate to the generator's weights. To do that, we need the correct labels. This is what I've understood of the whole training process.
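To make the flow concrete, here is a minimal sketch of the stacked-model pattern described above, written with toy dense layers (the layer sizes, single sigmoid output, and optimizer are illustrative assumptions, not the architectures used in this repo's notebooks):

```python
from keras.models import Sequential
from keras.layers import Dense

latent_dim = 100

# Toy generator: latent vector -> flattened 28x28 image.
generator = Sequential([
    Dense(128, activation='relu', input_shape=(latent_dim,)),
    Dense(784, activation='tanh'),
])

# Toy discriminator: image -> probability that the input is real.
discriminator = Sequential([
    Dense(128, activation='relu', input_shape=(784,)),
    Dense(1, activation='sigmoid'),
])
discriminator.compile(optimizer='adam', loss='binary_crossentropy')

# Stack the two models. The discriminator is frozen inside this stacked
# model, so when we train it, the loss computed at the discriminator's
# output is backpropagated all the way through the discriminator, but only
# the generator's weights are updated.
discriminator.trainable = False
gan = Sequential([generator, discriminator])
gan.compile(optimizer='adam', loss='binary_crossentropy')
```

Because `discriminator` was compiled before being frozen, calling `discriminator.train_on_batch` still updates its weights, while `gan.train_on_batch` updates only the generator; this is the usual Keras GAN idiom.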

erilyth commented 7 years ago

@engharat The discriminator should become good at telling the original images apart from the fake ones, whereas the generator should learn to fool the discriminator, i.e., make it produce wrong predictions. When we train the discriminator alone, we use the right labels, but when we train the generator (with the discriminator weights frozen), I think we would ideally train it with the opposite labels, since that would push its weights so that, given the current discriminator, it generates the samples the discriminator gets most wrong (i.e., is fooled by).

vforvinay commented 7 years ago

@erilyth See this line. Here, before training the GAN as a whole, we set the output labels to all 1s in column 1, which is a label of all real. So yes, when training the GAN, we use the opposite label.
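Putting the two phases together, here is a hedged sketch of one training step with these label conventions (it reuses the toy `generator`, `discriminator`, and `gan` models from the sketch above; note that it uses a single sigmoid output, whereas the repo's notebooks use two-column one-hot labels, so the all-1s column there corresponds to the all-1s target here):

```python
import numpy as np

batch_size = 32
latent_dim = 100

# Placeholder batch of "real" data; in the notebooks this would be MNIST images.
real_images = np.random.uniform(-1, 1, (batch_size, 784))

noise = np.random.normal(0, 1, (batch_size, latent_dim))
fake_images = generator.predict(noise)

# 1) Train the discriminator with the correct labels: real -> 1, fake -> 0.
discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))

# 2) Train the generator through the stacked model with all-1 ("real") labels,
#    the "opposite" labels discussed above: the loss is large when the frozen
#    discriminator correctly calls the fakes fake, so minimizing it pushes the
#    generator toward samples that fool the discriminator.
gan.train_on_batch(noise, np.ones((batch_size, 1)))
```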