aelnouby / Text-to-Image-Synthesis

Pytorch implementation of Generative Adversarial Text-to-Image Synthesis paper
GNU General Public License v3.0
405 stars 89 forks

Doubt about the generation of the fake image in a mini-batch #5

Closed coderSkyChen closed 6 years ago

coderSkyChen commented 6 years ago

Hi. When training, you generate the fake image twice in a mini-batch. However, according to the paper, it seems that updating D and G both use the same fake image, so I'm confused about this. I think generating the fake image twice may increase the instability of training.

Hoping for your reply.

aelnouby commented 6 years ago

I see your point; however, both images are generated from the same latent vector (text embedding, noise). The generator's loss measures how successfully it fools the discriminator; it does not depend on which image the discriminator saw as the fake in this mini-batch. That said, I only did this because there was a problem with gradients early in development, but I guess you can change it now. Either way, I would assume it won't make a difference.
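To illustrate why the two patterns should be equivalent, here is a minimal, framework-free sketch. The `generator` function and its weights are hypothetical stand-ins, not the repo's actual models; the point is only that calling a deterministic generator twice on the same latent vector, with no weight update in between, yields the same fake batch.

```python
import random

def generator(weights, latent):
    # Toy deterministic "generator": maps a latent vector to a fake
    # "image" (here just a list of numbers). Stand-in for G(z, text).
    return [w * z for w, z in zip(weights, latent)]

g_weights = [0.5, -1.2, 0.3]          # hypothetical generator weights
random.seed(0)
latent = [random.gauss(0, 1) for _ in range(3)]  # shared (noise, text) input

# Pattern in this repo: generate the fake batch twice per mini-batch,
# once for the D update and once for the G update.
fake_for_d = generator(g_weights, latent)
fake_for_g = generator(g_weights, latent)

# Since G's weights are unchanged between the two calls and the same
# latent vector is reused, the two fake batches are identical, so the
# G update sees the same image either way.
assert fake_for_d == fake_for_g
```

In a real PyTorch loop the practical difference is mostly cost (a second forward pass through G) and gradient bookkeeping (the copy fed to D is typically detached), not the training signal itself.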

If you try this and notice different behavior, please tell me so I can modify the code, or submit a pull request if you have the time. Thanks for your feedback.