Closed: breadbread1984 closed this issue 6 years ago
This is because we use a "PatchGAN" discriminator, described in Section 3.2.2 of this paper. See more details here: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/39
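For intuition, the size of the patch map follows directly from the discriminator's strided convolutions. A minimal sketch of that arithmetic, assuming the default 256x256 input with kernel size 4, padding 1, and strides 2, 2, 2, 1, 1 (the layer configuration described in the paper; the helper name below is just illustrative):

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    """Spatial output size of one conv layer (standard conv formula)."""
    return (size + 2 * pad - kernel) // stride + 1

# Three stride-2 layers halve the map, two stride-1 layers trim it:
# 256 -> 128 -> 64 -> 32 -> 31 -> 30
size = 256
for stride in (2, 2, 2, 1, 1):
    size = conv_out(size, stride=stride)
assert size == 30
```

So a 256x256 input yields the 30x30 output map mentioned in the question below.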
I have a question about the implementation of PatchGAN. Section 3.2.2 of the paper mentioned above says:
We run this discriminator convolutionally across the image, averaging all responses to provide the ultimate output of D.
But I couldn't find the implementation of this averaging operation anywhere. In the code, D outputs a 30x30x1 tensor, and the loss of D is calculated by the GANLoss function (MSE or BCE loss). So where does the averaging operation happen? Maybe I missed something. Thank you!
Hi @knaffe and @phillipi, I am confused about the choice of loss function. The options are MSELoss() and BCEWithLogitsLoss(), but when I used BCE my generator loss oscillated heavily, while with MSE it stayed steadier around 0.5. Some answers, e.g. https://stats.stackexchange.com/questions/242907/why-use-binary-cross-entropy-for-generator-in-adversarial-networks, mention that sigmoid with BCE computes the probability the discriminator uses to set the loss. Thank you!
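For anyone comparing the two objectives: MSELoss on the raw discriminator output corresponds to the LSGAN loss, while BCEWithLogitsLoss is the original GAN loss with the sigmoid folded in for numerical stability. A pure-Python sketch of such a loss switch (this is a hypothetical helper for illustration, not the repo's actual GANLoss class):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gan_loss(logits, target_is_real, use_lsgan=True):
    """Illustrative pix2pix-style loss switch over a flat response map."""
    target = 1.0 if target_is_real else 0.0
    if use_lsgan:
        # MSELoss on D's raw outputs (LSGAN): quadratic penalty.
        return sum((x - target) ** 2 for x in logits) / len(logits)
    # BCEWithLogitsLoss: sigmoid + binary cross-entropy in one step.
    eps = 1e-12
    return -sum(
        target * math.log(sigmoid(x) + eps)
        + (1.0 - target) * math.log(1.0 - sigmoid(x) + eps)
        for x in logits
    ) / len(logits)

# With zero logits and a "real" target: BCE gives -log(0.5) ~ 0.693,
# MSE gives (0 - 1)^2 = 1.0.
zeros = [0.0] * 4
assert abs(gan_loss(zeros, True, use_lsgan=False) - math.log(2)) < 1e-6
assert gan_loss(zeros, True, use_lsgan=True) == 1.0
```

The BCE loss grows without bound as the discriminator becomes confident, which is one commonly cited reason LSGAN training curves look smoother; whether that explains your particular oscillation would need checking against your logs.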
As far as I understand, the discriminator outputs a scalar that represents whether the input comes from the natural image set or the generated image set. But I find that your discriminator outputs a 32x32 tensor. Could you explain the reason for that? Thanks
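The map output is the point of the PatchGAN: each element of the tensor classifies one overlapping patch of the input rather than the whole image, because it only "sees" a limited receptive field. A sketch of the receptive-field arithmetic, assuming the default architecture from the paper (kernel size 4, strides 2, 2, 2, 1, 1), which gives the well-known 70x70 patch size:

```python
def receptive_field(layers):
    """Receptive-field size of a conv stack via the standard recurrence:
    r grows by (kernel - 1) * jump per layer; jump multiplies by stride."""
    r, jump = 1, 1
    for kernel, stride in layers:
        r += (kernel - 1) * jump
        jump *= stride
    return r

# (kernel, stride) per layer of the default PatchGAN discriminator.
patchgan_layers = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
assert receptive_field(patchgan_layers) == 70
```

So each scalar in the output map is a real/fake score for one 70x70 patch, and the loss averages over all of them, which is equivalent to running a small full-image discriminator convolutionally across the input.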