affinelayer / pix2pix-tensorflow

Tensorflow port of Image-to-Image Translation with Conditional Adversarial Nets https://phillipi.github.io/pix2pix/
MIT License

I need some insight about the training process #159

Open FranciscoGomez90 opened 5 years ago

FranciscoGomez90 commented 5 years ago

Hi there, I am relatively new to the GAN world, but quite experienced with deep learning in general. Although I have read a lot about GANs, and specifically about CycleGANs, there are some details that I cannot properly grasp. I hope some of you could please provide some insight into the following questions about the training process.

AFAIK, the discriminator is fed with both real and fake images. Is this done separately? I mean, first we feed the network with real data and compute the discriminator's real loss, and then we do the same with fake images in order to compute the discriminator's fake loss. Am I right so far? Is the same instance of the discriminator (same weights) used for both real and fake data? Are the weights updated separately, first for real and then for fake?

Could you please explain how the losses are computed and what the actual inputs and outputs of the discriminator are?
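For concreteness, here is roughly what I understand the discriminator step to look like. This is just a minimal NumPy sketch of a pix2pix-style discriminator loss, not this repo's actual code; the function and variable names are my own, and I am assuming both real and fake batches go through the same discriminator weights, with both loss terms combined before a single weight update:

```python
import numpy as np

EPS = 1e-12  # small constant to keep log() finite, as in pix2pix-style losses


def sigmoid(x):
    """Map raw discriminator logits to probabilities in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))


def discriminator_losses(logits_real, logits_fake):
    """Compute the real and fake loss terms of a GAN discriminator.

    The SAME discriminator (same weights) scores both batches; only the
    inputs differ: real target images vs. generator outputs.
    """
    predict_real = sigmoid(logits_real)  # D's score for real pairs
    predict_fake = sigmoid(logits_fake)  # D's score for generated pairs

    # Push predict_real toward 1 and predict_fake toward 0.
    loss_real = -np.mean(np.log(predict_real + EPS))
    loss_fake = -np.mean(np.log(1.0 - predict_fake + EPS))

    # Both terms are summed into one scalar; a single gradient step on
    # that sum updates the discriminator weights once per iteration.
    return loss_real, loss_fake, loss_real + loss_fake


# Toy example: a discriminator that already separates real from fake.
real_logits = np.array([2.0, 3.0])    # high logit -> "looks real"
fake_logits = np.array([-2.0, -3.0])  # low logit -> "looks fake"
loss_real, loss_fake, total = discriminator_losses(real_logits, fake_logits)
```

If this matches how the training loop here actually works (two forward passes through one discriminator, one combined loss, one update), that would answer most of my question.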

Thank you.