Closed piggybiggy closed 5 years ago
We found that normalization in the first layer removes too much information. Here's the reasoning.
In image-to-image translation, information such as the brightness and contrast of the input image is valuable. If you normalize too early, this information disappears before the network has a chance to process it. We observed that on a couple of datasets, adding normalization in the first layer resulted in worse visual quality.
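To see concretely why the first-layer norm is harmful, here is a small sketch (using NumPy rather than the repo's PyTorch code, with a hypothetical `instance_norm` helper) showing that instance normalization maps an image, a brightened copy, and a contrast-scaled copy to essentially the same tensor — i.e., brightness and contrast information is destroyed:

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # Normalize each channel of a single (C, H, W) image to zero mean, unit variance,
    # mimicking what InstanceNorm2d does per sample at inference time.
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    return (x - mean) / (std + eps)

rng = np.random.default_rng(0)
img = rng.random((3, 8, 8))   # a random RGB "image"
brighter = img + 0.5          # same image, shifted brightness
contrasty = img * 2.0         # same image, scaled contrast

# After instance norm, all three collapse to (nearly) the same tensor:
print(np.allclose(instance_norm(img), instance_norm(brighter), atol=1e-4))   # True
print(np.allclose(instance_norm(img), instance_norm(contrasty), atol=1e-4))  # True
```

So if the first layer normalized its input, the rest of the network could never distinguish these images; skipping the norm in the first layer lets the early convolutions see those statistics before any normalization happens.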
@taesungp Thank you!
Hi, thanks for the paper and code. While reading your implementation of the discriminator, here https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/07ae2e998243619c86282fa53d6fe48bdac94d73/models/networks.py#L538 I noticed that there is no InstanceNorm2d before the LeakyReLU in the first layer of the discriminator, unlike the later layers. Is there a reason for that? https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/07ae2e998243619c86282fa53d6fe48bdac94d73/models/networks.py#L558