roatienza / Deep-Learning-Experiments

Videos, notes and experiments to understand deep learning

dcgan loss #10

Open eyaler opened 6 years ago

eyaler commented 6 years ago

In https://github.com/roatienza/Deep-Learning-Experiments/blob/master/Experiments/Tensorflow/GAN/dcgan_mnist.py the generator loss is computed as a_loss = self.adversarial.train_on_batch(noise, y), but this also trains the discriminator, and only on the fake samples. Shouldn't the discriminator weights be frozen for this step?
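
To see the concern concretely, here is a minimal toy reproduction. This is a sketch only, with dense stand-in models and assumed latent/batch sizes (100, 32), not the repo's actual convolutional DCGAN:

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.optimizers import RMSprop

    # Toy stand-ins for G and D (assumed sizes; the repo uses conv nets).
    generator = Sequential([Dense(784, activation='tanh', input_dim=100)])
    discriminator = Sequential([Dense(1, activation='sigmoid', input_dim=784)])

    # Stack G -> D without freezing D, mirroring the pattern in question.
    adversarial = Sequential([generator, discriminator])
    adversarial.compile(loss='binary_crossentropy', optimizer=RMSprop())

    noise = np.random.uniform(-1.0, 1.0, size=[32, 100])
    w_before = [w.copy() for w in discriminator.get_weights()]
    adversarial.train_on_batch(noise, np.ones([32, 1]))  # the "generator" step
    changed = any(not np.array_equal(b, a)
                  for b, a in zip(w_before, discriminator.get_weights()))
    print("discriminator weights changed:", changed)  # prints True

Because D sits unfrozen inside the stacked model, the generator step also moves D's weights, exactly as described above.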

harshtikuu commented 6 years ago

@eyaler Exactly my doubt as well.

hmaon commented 5 years ago

Yeah... you can change self.AM.add(self.discriminator()) in adversarial_model() to this:

        dc = self.discriminator()
        # Freeze every discriminator layer so the generator step cannot update D.
        for layer in dc.layers:
            layer.trainable = False
        self.AM.add(dc)

You'll get a warning, but the discriminator will be frozen for a_loss = self.adversarial.train_on_batch(noise, y).

I verified the change with this instrumentation code:

            print("before adversarial.train " + str(keras.backend.eval(self.adversarial.layers[1].layers[0].weights[0][0][0][0][0])))
            a_loss = self.adversarial.train_on_batch(noise, y)
            print("after  adversarial.train " + str(keras.backend.eval(self.adversarial.layers[1].layers[0].weights[0][0][0][0][0])))
elk-cloner commented 5 years ago

> Shouldn't the discriminator weights be frozen for this step?

You're right, we should freeze the discriminator.
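
For completeness, the other common Keras idiom freezes the discriminator at the model level before compiling the stacked model; because Keras reads the trainable flags at compile time, the discriminator still learns through its own separately compiled instance. A minimal self-contained sketch with the same kind of toy dense models as above (assumed sizes, not the repo's architecture):

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.optimizers import RMSprop

    generator = Sequential([Dense(784, activation='tanh', input_dim=100)])
    discriminator = Sequential([Dense(1, activation='sigmoid', input_dim=784)])
    # Compile D on its own while it is still trainable; this compiled
    # instance keeps updating D during the discriminator step.
    discriminator.compile(loss='binary_crossentropy', optimizer=RMSprop())

    # Flip the flag *before* compiling the stacked model, so the snapshot
    # taken at compile time has D frozen inside the adversarial model.
    discriminator.trainable = False
    adversarial = Sequential([generator, discriminator])
    adversarial.compile(loss='binary_crossentropy', optimizer=RMSprop())

    noise = np.random.uniform(-1.0, 1.0, size=[32, 100])
    adversarial.train_on_batch(noise, np.ones([32, 1]))  # updates G only

Either approach, per-layer freezing as shown earlier or the model-level flag here, avoids the double update this issue describes.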