Closed: coderSkyChen closed this issue 6 years ago
Hi, when training you generate the fake image twice in each mini-batch. However, according to the paper, it seems that the D update and the G update both use the same fake image, so I'm confused about this. I think generating the fake image twice may increase training instability.
Hoping for your reply.

I see your point; however, both images are generated from the same latent vector (text, noise). The generator's loss measures how successfully it fools the discriminator, and it does not depend on which fake image the discriminator saw in this mini-batch. That said, I only did it this way because there was a problem with gradients early in development. I guess you can change it now; either way, I would expect it to make no difference.
If you try this and notice different behavior, please tell me so I can modify the code, or submit a pull request if you have the time. Thanks for your feedback.
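For reference, here is a minimal PyTorch-style sketch (not this repository's actual code) of the single-generation variant being discussed: the fake batch is produced once from the same (text, noise) pair and reused for both updates, with `detach()` on the D side, which is the usual fix for the kind of gradient problem mentioned above. The `train_step` helper and the `G(text_emb, noise)` / `D(image, text_emb)` call signatures are assumptions for illustration.

```python
import torch

def train_step(G, D, opt_G, opt_D, real_images, text_emb, noise, criterion):
    # criterion is typically nn.BCELoss(); D is assumed to output (N, 1) scores.
    real_labels = torch.ones(real_images.size(0), 1)
    fake_labels = torch.zeros(real_images.size(0), 1)

    # Generate the fake batch ONCE from the same (text, noise) pair.
    fake_images = G(text_emb, noise)

    # --- Update D ---
    # detach() blocks gradients from flowing back into G here, so the same
    # fake batch can be reused instead of calling G a second time.
    opt_D.zero_grad()
    d_loss = (criterion(D(real_images, text_emb), real_labels)
              + criterion(D(fake_images.detach(), text_emb), fake_labels))
    d_loss.backward()
    opt_D.step()

    # --- Update G ---
    # Reuse the same fake_images tensor (no second forward pass through G);
    # gradients flow through G this time because nothing is detached.
    opt_G.zero_grad()
    g_loss = criterion(D(fake_images, text_emb), real_labels)
    g_loss.backward()
    opt_G.step()

    return d_loss.item(), g_loss.item()
```

Whether G is run once or twice, both forward passes start from the same latent vector, so the two variants should behave the same in expectation; the detach-and-reuse form just saves one generator forward pass per mini-batch.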