Closed: zhuqunxi closed this issue 6 years ago
I guess it depends on your input/output image sizes.
As a beginner in the field of Deep Learning, it is my great honor to receive your response. But I still have trouble understanding the default sizes of the input and output images in the discriminator. I assumed the input size is (1, 70, 70, 3), obtained by resizing the original image of size (1, 256, 256, 3). Feeding that into the discriminator model C64-C128-C256-C512, as illustrated in your paper, gives an output of size (1, 6, 6, 1). I am therefore a little confused: why is the output not a probability value/scalar but a matrix/tensor?
The input image size is 256x256 (no resizing to 70x70; "70x70" refers to the receptive field of each output unit, i.e. the patch of the input that one unit sees). The output of the 70x70 patch discriminator is around 30x30. It is a matrix, where each entry is a real/fake score for one patch.
Five layers, padding: 1, 1, 1, 1, 1
Strides: 2, 2, 2, 1, 1
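The two sizes above can be checked with a short calculation. This is a sketch of my own (not code from the pix2pix repo), assuming five 4x4 convolutions with the strides and padding listed: walking forward through the layers gives the output resolution, and walking backward accumulates the receptive field.

```python
# Assumed 70x70 PatchGAN layer hyperparameters (five 4x4 convs,
# strides 2,2,2,1,1, padding 1 on every layer) -- not taken verbatim
# from the repo, just the configuration described above.
KERNEL = 4
STRIDES = [2, 2, 2, 1, 1]
PADDINGS = [1, 1, 1, 1, 1]

def output_size(n, kernel=KERNEL, strides=STRIDES, paddings=PADDINGS):
    """Spatial size after pushing an n x n image through all layers."""
    for s, p in zip(strides, paddings):
        n = (n + 2 * p - kernel) // s + 1  # standard conv output formula
    return n

def receptive_field(kernel=KERNEL, strides=STRIDES):
    """Receptive field of one output unit, accumulated back to front."""
    rf = 1
    for s in reversed(strides):
        rf = rf * s + (kernel - s)
    return rf

print(output_size(256))   # -> 30: the "around 30x30" output matrix
print(receptive_field())  # -> 70: each unit scores a 70x70 input patch
```

So the discriminator is not resizing the input to 70x70; it maps the 256x256 image to a 30x30 grid of scores, and the name "70x70" describes how much of the input each of those 900 scores depends on.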