eriklindernoren / PyTorch-GAN

PyTorch implementations of Generative Adversarial Networks.
MIT License

Infogan batchnorm2d layers have eps=0.8 ? #116

Open meganset opened 4 years ago

meganset commented 4 years ago

infogan.py creates most batchnorm layers with a second positional argument of 0.8, e.g. in the generator:

    self.conv_blocks = nn.Sequential(
        nn.BatchNorm2d(128),
        nn.Upsample(scale_factor=2),
        nn.Conv2d(128, 128, 3, stride=1, padding=1),
        nn.BatchNorm2d(128, 0.8),
        ..
        nn.BatchNorm2d(64, 0.8),

and in the discriminator:

    def discriminator_block(in_filters, out_filters, bn=True):
        """Returns layers of each discriminator block"""
        block = [nn.Conv2d(in_filters, out_filters, 3, 2, 1), nn.LeakyReLU(0.2, inplace=True), nn.Dropout2d(0.25)]
        if bn:
            block.append(nn.BatchNorm2d(out_filters, 0.8))
        return block

This sets the epsilon parameter from its default of 1e-05 to 0.8, e.g.:

>>> torch.nn.BatchNorm2d(128)
BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
>>> torch.nn.BatchNorm2d(128, 0.8)
BatchNorm2d(128, eps=0.8, momentum=0.1, affine=True, track_running_stats=True)

Is this by design? Thanks (and thanks for the useful GAN collection).

jybai commented 4 years ago

I noticed the same issue in other GAN implementations in this repo as well. Cross-comparison with the Keras-GAN repo suggests that 0.8 was meant to be the momentum, not eps.
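A minimal sketch of the presumed fix, assuming the intent was to mirror Keras-GAN's `BatchNormalization(momentum=0.8)`: passing the value as a keyword sets momentum explicitly and leaves eps at its default.

```python
import torch.nn as nn

# As written in infogan.py: the 2nd positional argument of
# BatchNorm2d is eps, so this sets eps=0.8 and keeps momentum=0.1.
as_written = nn.BatchNorm2d(128, 0.8)

# Presumed intent: pass momentum as a keyword; eps stays at 1e-05.
presumed = nn.BatchNorm2d(128, momentum=0.8)

print(as_written.eps, as_written.momentum)  # 0.8 0.1
print(presumed.eps, presumed.momentum)      # 1e-05 0.8
```

Note that even the keyword form may not be a faithful port: PyTorch's momentum convention is the complement of Keras's (PyTorch weights the new batch statistic by `momentum`, Keras weights the running average by it), so Keras `momentum=0.8` would correspond to PyTorch `momentum=0.2`.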

DegardinBruno commented 3 years ago

Is there any additional information concerning this issue? Was this done on purpose, or is it an oversight?