odegeasslbc / FastGAN-pytorch

Official implementation of the paper "Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis" in ICLR 2021
GNU General Public License v3.0

Why does the code use skip-layer channel-wise excitation to extract feat_64, when the paper does not? #30

Closed xuewengeophysics closed 3 years ago

xuewengeophysics commented 3 years ago

Hi. Thanks for your excellent research work. I have a question: why does the code use skip-layer channel-wise excitation to extract feat_64, when the paper does not?

feat_64 = self.se_64( feat_4, self.feat_64(feat_32) )
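For context, the line above gates the 64x64 feature map with channel weights computed from the much lower-resolution feat_4. Below is a minimal NumPy sketch of that skip-layer excitation (SLE) idea; it is a simplification, not the repo's implementation: the actual module uses adaptive pooling to 4x4 followed by two convolutions and a Swish nonlinearity, while this sketch collapses the "squeeze" to a global average pool and a single hypothetical weight matrix `w`.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def skip_layer_excitation(feat_low, feat_high, w):
    """Simplified SLE sketch.

    feat_low:  (c_low, h, w) low-resolution feature map (e.g. feat_4)
    feat_high: (c_high, H, W) high-resolution feature map (e.g. feat_64)
    w:         (c_high, c_low) hypothetical weight standing in for the
               module's conv layers
    Returns feat_high rescaled per channel by gates derived from feat_low.
    """
    # "Squeeze": pool the low-res map to one value per channel.
    pooled = feat_low.mean(axis=(1, 2))        # shape (c_low,)
    # "Excite": project to the high-res channel count, gate with sigmoid.
    gates = sigmoid(w @ pooled)                # shape (c_high,), in (0, 1)
    # Channel-wise multiplication, broadcast over the spatial dims.
    return feat_high * gates[:, None, None]

# Tiny example: 2 low-res channels gating 3 high-res channels.
rng = np.random.default_rng(0)
feat_low = rng.standard_normal((2, 4, 4))
feat_high = rng.standard_normal((3, 64, 64))
w = rng.standard_normal((3, 2))
out = skip_layer_excitation(feat_low, feat_high, w)
print(out.shape)  # (3, 64, 64)
```

The point of the skip connection is that the gates come from a distant, coarse layer, so gradient signal can flow from the 64x64 stage directly back to feat_4.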


odegeasslbc commented 3 years ago

It depends on how large your model is. The figure in the paper shows a model that outputs at 256x256 resolution, but the code targets 512 or 1024 resolution.

xuewengeophysics commented 3 years ago

> It depends on how large your model is. The figure in the paper shows a model that outputs at 256x256 resolution, but the code targets 512 or 1024 resolution.

Thanks very much.