First, thank you for sharing your work, and hello to everyone reading this issue. Could you help me with the following questions? They may take some of your time.
Here are my questions:
In your code, you only set the discriminator (or the critic) as non-trainable before combining it with the generator, and you never make it trainable again before running the training loop. Is it necessary to set discriminator.trainable = False and generator.trainable = True before training the combined model? And the same question for training the discriminator/critic: should I set discriminator.trainable = True and generator.trainable = False?
If I do so, will it lead to wrong results, or does it not matter?
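To make sure we are talking about the same pattern, here is a minimal sketch of the setup I understand from typical Keras GAN examples (build_generator and build_discriminator are hypothetical placeholders for the actual model definitions):

```python
from keras.layers import Input
from keras.models import Model

# Hypothetical builders standing in for the actual model code.
generator = build_generator()
discriminator = build_discriminator()

# Compile the discriminator while it is trainable; a compiled model keeps
# the trainable state it had at compile time even if the flag changes later.
discriminator.compile(optimizer='adam', loss='binary_crossentropy')

# Freeze the discriminator only for the combined model, then compile that.
discriminator.trainable = False
z = Input(shape=(100,))
combined = Model(z, discriminator(generator(z)))
combined.compile(optimizer='adam', loss='binary_crossentropy')
```

My question above is whether, given this compile-time freezing, the flags still need to be toggled inside the training loop.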
Your code trains the discriminator in two steps: train_on_batch with real samples, then train_on_batch with fake samples. Can I instead mix the samples with X = np.concatenate([real_samples, fake_samples]) and the labels with Y = np.concatenate([real_labels, fake_labels]), and then call train_on_batch(X, Y) once to get d_loss? Would this give the same results as your code? Or, if I'm wrong, what is the problem with it?
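Concretely, this is what I mean; the dummy arrays below are hypothetical stand-ins for the batches sampled in the real training loop:

```python
import numpy as np

# Hypothetical half-batches standing in for the sampled real images
# and the generator's output.
half_batch = 32
real_samples = np.random.rand(half_batch, 64, 64, 3)
fake_samples = np.random.rand(half_batch, 64, 64, 3)
real_labels = np.ones((half_batch, 1))
fake_labels = np.zeros((half_batch, 1))

# Note: np.concatenate takes the arrays as a list/tuple.
X = np.concatenate([real_samples, fake_samples], axis=0)
Y = np.concatenate([real_labels, fake_labels], axis=0)
d_loss = discriminator.train_on_batch(X, Y)
```

I realize one difference is that the two-call version computes the loss (and e.g. any batch-normalization statistics) on real and fake batches separately, which is part of what I'm asking about.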
Further, I tried to build a discriminator training model like this:
```python
from keras.layers import Input
from keras.models import Model

x_inputs = Input(shape=(64, 64, 3))   # real images
z_inputs = Input(shape=(100,))        # noise vectors

discriminator.trainable = True
generator.trainable = False

x_fake = generator(z_inputs)
x_real_score = discriminator(x_inputs)
x_fake_score = discriminator(x_fake)

d_train_model = Model([x_inputs, z_inputs], [x_real_score, x_fake_score])
d_train_model.compile(......)
```
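For completeness, I then train it roughly like this; x_real, z_batch, valid, and fake are hypothetical names for the sampled batches and their target labels:

```python
import numpy as np

batch_size = 64
# Hypothetical batch of real images and noise vectors sampled in the loop.
x_real = np.random.rand(batch_size, 64, 64, 3)
z_batch = np.random.normal(size=(batch_size, 100))

valid = np.ones((batch_size, 1))   # targets for x_real_score
fake = np.zeros((batch_size, 1))   # targets for x_fake_score

d_loss = d_train_model.train_on_batch([x_real, z_batch], [valid, fake])
```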
Can I train the discriminator with this code? I ask because my model produces very bad results (the generated images are still unclear after 100K+ iterations), but I'm not sure whether these changes are what causes the bad results.
Thank you for your time, and I really appreciate your help.