junyanz / pytorch-CycleGAN-and-pix2pix

Image-to-Image Translation in PyTorch

Question: PatchGAN Discriminator #39

Closed. johnkorn closed this 7 years ago

johnkorn commented 7 years ago

Hi there. I was investigating your CycleGAN paper and code, and it looks like the discriminator you've implemented is just a conv net, not the PatchGAN mentioned in the paper. Maybe I've missed something. Could you point me to where the processing of 70x70 patches happens? Thanks in advance!

phillipi commented 7 years ago

In fact, a "PatchGAN" is just a convnet! Or you could say all convnets are patchnets: the power of convnets is that they process each image patch identically and independently, which makes things very cheap (# params, time, memory), and, amazingly, turns out to work.

The difference between a PatchGAN and a regular GAN discriminator is that the regular GAN maps a 256x256 image to a single scalar output, which signifies "real" or "fake", whereas the PatchGAN maps a 256x256 image to an NxN array of outputs X, where each X_ij signifies whether patch ij in the image is real or fake. Which patch is patch ij in the input? Well, output X_ij is just a neuron in a convnet, and we can trace back its receptive field to see which input pixels it is sensitive to. In the CycleGAN architecture, the receptive fields of the discriminator turn out to be 70x70 patches in the input image!

This is all mathematically equivalent to if we had manually chopped up the image into 70x70 overlapping patches, run a regular discriminator over each patch, and averaged the results.

Maybe it would have been better if we called it a "Fully Convolutional GAN" like in FCNs... it's the same idea :)
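For illustration, a minimal PyTorch sketch of such a patch-level discriminator (layer sizes are illustrative and normalization layers are omitted; the repo's actual version is NLayerDiscriminator in networks.py):

import torch
import torch.nn as nn

# A fully convolutional discriminator: no fully connected layer at the end,
# so the output is an N x N grid of real/fake scores rather than one scalar.
netD = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),
    nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
    nn.LeakyReLU(0.2),
    nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1),
    nn.LeakyReLU(0.2),
    nn.Conv2d(256, 512, kernel_size=4, stride=1, padding=1),
    nn.LeakyReLU(0.2),
    nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=1),
)

x = torch.randn(1, 3, 256, 256)
print(netD(x).shape)  # torch.Size([1, 1, 30, 30]) -- one score per 70x70 patch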

taki0112 commented 7 years ago

Can you tell me which line in the code represents patchGAN?

phillipi commented 7 years ago

Edit: see defineD

taki0112 commented 7 years ago

I have a question.

  1. I saw the code (class NLayerDiscriminator(nn.Module)), but I do not see the number 70 anywhere. So why is it called a 70x70 PatchGAN? That is, why the number 70?

  2. The output of the code is 30x30x1 (the X_ij), but the patches of the PatchGAN are said to be 70x70 (the ij). You said you traced back the receptive field and found that patch ij is 70x70; how did you do that?

phillipi commented 7 years ago
  1. The "70" is implicit, it's not written anywhere in the code but instead emerges as a mathematical consequence of the network architecture.

  2. The math is here: https://github.com/phillipi/pix2pix/blob/master/scripts/receptive_field_sizes.m

emilwallner commented 6 years ago

Here is a visual receptive field calculator: https://fomoro.com/tools/receptive-fields/#

I converted the math into Python to make it easier to understand:

def f(output_size, ksize, stride):
    # Given the spatial size of a layer's output, return the input size
    # (receptive field) that produced it: (output_size - 1) * stride + ksize
    return (output_size - 1) * stride + ksize

last_layer = f(output_size=1, ksize=4, stride=1)
# Receptive field: 4
fourth_layer = f(output_size=last_layer, ksize=4, stride=1)
# Receptive field: 7
third_layer = f(output_size=fourth_layer, ksize=4, stride=2)
# Receptive field: 16
second_layer = f(output_size=third_layer, ksize=4, stride=2)
# Receptive field: 34
first_layer = f(output_size=second_layer, ksize=4, stride=2)
# Receptive field: 70

print(first_layer)  # prints: 70
utkarshojha commented 6 years ago

Hi @phillipi @junyanz, I understood how patch sizes are calculated implicitly by tracing back the receptive field sizes of successive convolutional layers. But don't you think batch normalization sort of harms the overall idea of the PatchGAN discriminator? I mean, theoretically each member X_ij of the final NxN output should depend only on some 70x70 patch in the original image, and any change beyond that 70x70 patch should not result in a change in the value of X_ij. But if we use batch normalization, that won't necessarily be true, right?

phillipi commented 6 years ago

That's a good point! Batchnorm does have this effect. So to be precise, we should say the PatchGAN architecture is equivalent to chopping up the image into 70x70 patches, making a big batch out of these patches, running a discriminator on each patch with batchnorm applied across the batch, and then averaging the results.
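A rough sketch of that patch view in PyTorch (illustrative only; the real model never materializes the patches, and the stride of 8 matches the product of the conv strides):

import torch

img = torch.randn(1, 3, 256, 256)
# Cut the image into overlapping 70x70 patches, stride 8.
patches = img.unfold(2, 70, 8).unfold(3, 70, 8)      # (1, 3, 24, 24, 70, 70)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(-1, 3, 70, 70)
# Conceptually: run a 70x70 discriminator (with batchnorm across this big
# batch of patches) on `patches`, then average the per-patch scores.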

utkarshojha commented 6 years ago

Yes that would be a better explanation! And thanks for your response to this.

edoardogiacomello commented 5 years ago

Hello phillipi, thanks for your explanation and for sharing your implementation! I'm also trying to better understand the PatchGAN discriminator, and I understand that it is equivalent to a convnet from a design point of view. In other words, if I have to implement a PatchGAN discriminator, I should do as you did. But what happens if I already have a (pre-trained) neural network that accepts the receptive field (in this case 70x70 images) of a bigger image (e.g. 1024x1024) as input? I couldn't figure out how the network could be integrated efficiently, or rewritten using convolutional layers, without modifying the architecture of the pre-trained network. P.S. I'm trying to implement this in TensorFlow, but I don't think it's a platform-related issue.

Thank you!

iperov commented 5 years ago

TensorFlow's extract_image_patches is a differentiable function and can be used in training.
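For instance, a rough sketch along those lines (assuming a pre-trained 70x70 discriminator, here the hypothetical pretrained_discriminator; the patch stride is a free choice):

import tensorflow as tf

images = tf.random.normal([1, 1024, 1024, 3])
# Cut the big image into 70x70 patches (differentiable, so gradients
# flow back through the patch extraction during training).
patches = tf.image.extract_patches(
    images=images,
    sizes=[1, 70, 70, 1],
    strides=[1, 35, 35, 1],   # patch stride: a free choice here
    rates=[1, 1, 1, 1],
    padding='VALID')
patches = tf.reshape(patches, [-1, 70, 70, 3])   # one big batch of patches
# scores = pretrained_discriminator(patches)     # hypothetical pre-trained net
# loss = tf.reduce_mean(scores)                  # then average the patch scores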

huicongzhang commented 5 years ago

Well, I understand how the PatchGAN works now! Thanks.

daifeng2016 commented 4 years ago

Hi, I am wondering why a sigmoid activation is not used for the PatchGAN, since the output for a true patch should be close to 1, while for a false patch it should be close to 0.

phillipi commented 4 years ago

The sigmoid is contained in the loss function here. But note that some variants of GAN discriminators don't use a sigmoid (e.g., see LSGANs or WGANs).
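For reference, a minimal PyTorch sketch of the difference (illustrative; the repo's GANLoss wrapper in networks.py selects between such losses):

import torch
import torch.nn as nn

pred = torch.randn(1, 1, 30, 30)    # raw discriminator outputs (logits), no sigmoid
target = torch.ones_like(pred)      # label 1 = "real"

# Vanilla GAN: the sigmoid lives inside the loss;
# BCEWithLogitsLoss = sigmoid + binary cross-entropy, computed stably.
vanilla_loss = nn.BCEWithLogitsLoss()(pred, target)

# LSGAN: no sigmoid anywhere, just squared distance to the label.
lsgan_loss = nn.MSELoss()(pred, target)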

daifeng2016 commented 4 years ago

Thanks. Then what is the difference in the output of D without a sigmoid? For example, in LSGAN, if the output of D is very large (far from 1 or 0), can the loss function still work, since the real labels are still set to 1 and the false labels to 0?

phillipi commented 4 years ago

I believe in LSGAN the loss is squared distance from the labels. So if the output of D is very large, D will get a large penalty and it will learn to make a smaller output. Eventually, D should learn to output the correct labels, since those minimize the loss (and the loss is nice and smooth, just squared distance).

FunkyKoki commented 4 years ago

I would like to share some thoughts on why the receptive field is computed as: (output_size - 1) * stride + ksize

Here is what I think. For any i (input feature map size), k (kernel size), p (zero padding size) and s (stride), the output feature map size (o) is: o = floor((i+2*p-k)/s)+1

When calculating the receptive field, we assume p = 0; the output-size formula above is then just the inverse of the receptive-field formula.
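Concretely, with p = 0 and the floor dropped (the sizes divide evenly here), o = (i - k)/s + 1 can be solved for the input size: i = (o - 1)*s + k, which is exactly the formula at the top of this comment.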

serwansj commented 4 years ago

Why is a padding of 1 used in every convolution in the discriminator? If we feed the discriminator an image of size 70x70, we get an output of 6x6. Wouldn't it make more sense not to use padding and instead get a single 1x1 output for a 70x70 input?
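Those numbers can be checked with the output-size formula (a quick sketch, assuming kernel size 4 and strides 2, 2, 2, 1, 1 as in NLayerDiscriminator with n_layers=3):

def out_sizes(i, padding):
    # kernel 4 everywhere; strides 2, 2, 2, 1, 1
    sizes = []
    for s in [2, 2, 2, 1, 1]:
        i = (i + 2 * padding - 4) // s + 1
        sizes.append(i)
    return sizes

print(out_sizes(70, padding=1))  # [35, 17, 8, 7, 6] -> 6x6 output
print(out_sizes(70, padding=0))  # [34, 16, 7, 4, 1] -> 1x1 output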

phillipi commented 4 years ago

I think the padding was a holdover from the DCGAN architecture. I can't remember if there is a good reason for it. Might have been to make a 256x256 input map to a 1x1 output, in the DCGAN discriminator.

Zero padding also has the effect that it helps localize where you are in the image, since you can see this border of zeros when you are near an image boundary. That can sometimes be beneficial.

JustinAsdz commented 4 years ago

> Can you tell me which line in the code represents patchGAN?

It lies here, at line 538 of networks.py.

xcc13 commented 4 years ago

> 70

Thank you! But what does 'output_size' mean here?

JustinAsdz commented 4 years ago

> Thank you! But what does 'output_size' mean here?

It just means the width/height of the output feature map. We can calculate the receptive field of the prior layer from its output_size.

shaurov2253 commented 4 years ago

Hi, as the discriminator outputs a 30x30x1 matrix, does that mean the 70x70 patch was moved over the input image 30 times in each direction (horizontal and vertical), mapping each position to a single output?

junyanz commented 4 years ago

Answered at #1106.

yfwang-master commented 4 years ago

Hello phillipi, I am wondering whether the padding is necessary in the conv layers?

phillipi commented 4 years ago

I doubt it has a big effect. You could try removing it and see what happens.

yfwang-master commented 4 years ago

Thanks. And I wonder whether a 'PatchGAN' discriminator (in fact a convnet, as you responded) would still work if applied to a 3D model (C-H-W-L, 4-dim tensors in code)? If so, should one use conv3d() instead? And would this so-called '3D PatchGAN' discriminate which local parts of the 3D model are real or fake?
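For what it's worth, a hypothetical sketch of such a 3D variant (not from this repo): swapping Conv2d for Conv3d carries the patch idea over, so each output score covers a 3D sub-volume.

import torch
import torch.nn as nn

# Hypothetical 3D PatchGAN sketch: same idea as the 2D discriminator,
# with Conv3d so each output score covers a 3D sub-volume.
netD3d = nn.Sequential(
    nn.Conv3d(1, 64, kernel_size=4, stride=2, padding=1),
    nn.LeakyReLU(0.2),
    nn.Conv3d(64, 128, kernel_size=4, stride=2, padding=1),
    nn.LeakyReLU(0.2),
    nn.Conv3d(128, 1, kernel_size=4, stride=1, padding=1),
)

vol = torch.randn(1, 1, 64, 64, 64)   # (N, C, H, W, L)
print(netD3d(vol).shape)               # torch.Size([1, 1, 15, 15, 15])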

johndpope commented 3 years ago

thanks @emilwallner

[screenshot: receptive field calculation]
emcrobert commented 2 years ago

The one thing I'm struggling to understand is that the discriminator looks at 70x70 patches. But if I understand correctly, its input is the conditional image concatenated with either the real image or the synthesized image. So if it's only looking at small patches at a time, how does it learn the relationship between the two images? How does it check that the conditional input has actually informed the image that has been generated?
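For context, in pix2pix the discriminator's input is the channel-wise concatenation of the two images, so every 70x70 receptive field covers the same patch of both images at once (a sketch; see the conditional discriminator setup in pix2pix_model.py):

import torch

real_A = torch.randn(1, 3, 256, 256)       # conditional input image
fake_B = torch.randn(1, 3, 256, 256)       # generated image
fake_AB = torch.cat((real_A, fake_B), 1)   # 6-channel input to D
# Each output X_ij of D(fake_AB) depends on the same 70x70 window of
# both images, so D can judge their local correspondence.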

junyanz commented 2 years ago

Most of the applications used in the paper only require local color and texture transfer. In these cases, 70x70 patches might be enough (for a 256x256 input image). Later work (e.g., pix2pixHD) has explored using multi-scale discriminators, which can look at more pixels.

yearep7 commented 2 years ago

If this structure is added to the generator, will it have a good effect? Is there any ablation experiment in this regard?

CHENHUI-X commented 2 years ago

> thanks @emilwallner [screenshot: receptive field calculation]

Great picture, like it!

cisco08 commented 1 week ago

RF_0 = 1
stride = 2
k_size = 4
ndf = 64
nf_mult = 1
nf_mult_prev = 1
RF_1 = (RF_0 - 1) * stride + k_size
print(1, ndf, RF_1)
for n in range(1, 12):
    nf_mult_prev = nf_mult
    nf_mult = min(2 ** n, 8)
    if n > 3:
        RF_1 = (RF_1 - 1) * 1 + k_size
    else:
        RF_1 = (RF_1 - 1) * 2 + k_size
    print(ndf * nf_mult_prev, ndf * nf_mult, RF_1)

The output is:

1 64 4
64 128 10
128 256 22
256 512 46
512 512 49
512 512 52
512 512 55
512 512 58
512 512 61
512 512 64
512 512 67
512 512 70

Here I find that, for a 256x256 input, the receptive field at the last layer in the code comes out as 49, not 70; it would only reach 70x70 if the last layer's convolution were repeated another 8 times.
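A possible source of the discrepancy: the recursion (RF - 1) * stride + ksize has to be applied from the last layer back toward the input, as in the earlier snippet, not front-to-back; in the forward direction, the kernel extension must be scaled by the "jump" (the product of the strides of all earlier layers) rather than by the current stride. A sketch of the forward-direction version, assuming kernel 4 and strides 2, 2, 2, 1, 1 (NLayerDiscriminator with n_layers=3):

def forward_rf(layers):
    # rf: receptive field so far; jump: spacing (in input pixels)
    # between adjacent units of the current layer
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# NLayerDiscriminator with n_layers=3: kernel 4, strides 2, 2, 2, 1, 1
print(forward_rf([(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]))  # prints: 70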