NVlabs / imaginaire

NVIDIA's Deep Imagination Team's PyTorch Library

Experiment of using Instance Normalization vs Layer Normalization on the Decoder (MUNIT) #171

Open tom99763 opened 2 years ago

tom99763 commented 2 years ago

Here are the results of using different normalization methods on the decoder. [Image 1: qualitative comparison of decoder outputs]

Based on the computational operations of the normalization methods, the MUNIT architecture can be summarized as follows.

[Image 2: architecture summary]
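The computational difference between the two normalizations can be sketched in a few lines of PyTorch (a minimal sketch on a random feature map, not the imaginaire implementation): instance norm computes statistics per channel and per sample, while layer norm computes one statistic per sample across all channels, so only the latter preserves the relative statistics between channels.

```python
import torch

torch.manual_seed(0)
x = torch.randn(2, 8, 16, 16)  # (N, C, H, W) feature map

# Instance norm: each channel of each sample is normalized independently
# over its spatial dimensions (H, W) -- per-channel statistics are erased.
in_out = (x - x.mean(dim=(2, 3), keepdim=True)) / x.std(dim=(2, 3), keepdim=True)

# Layer norm over (C, H, W): one mean/std per sample, shared by all
# channels -- differences between channel statistics are preserved.
ln_out = (x - x.mean(dim=(1, 2, 3), keepdim=True)) / x.std(dim=(1, 2, 3), keepdim=True)

# After instance norm, every channel mean is (numerically) zero:
print(in_out.mean(dim=(2, 3)).abs().max())  # ~0
# After layer norm, channel means still differ from one another:
print(ln_out.mean(dim=(2, 3)).std())        # > 0
```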

This means that, since there is no channel-correlation tuning on the upsampling layers (i.e., no Adaptive Instance Normalization, as used in StyleGAN), applying instance normalization during upsampling destroys the channel correlations that were tuned earlier by the ResNet blocks with Adaptive Instance Normalization.
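The destruction can be demonstrated directly (a toy sketch with hypothetical `adain` and `instance_norm` helpers, assumed style statistics, and a random content map, not code from this repository): AdaIN injects style-specific per-channel mean/std, and a subsequent non-adaptive instance norm wipes exactly those statistics.

```python
import torch

def adain(content, style_mean, style_std, eps=1e-5):
    # Adaptive instance norm: normalize each channel, then inject
    # style-specific per-channel mean/std (the "tuned" statistics).
    mu = content.mean(dim=(2, 3), keepdim=True)
    sigma = content.std(dim=(2, 3), keepdim=True) + eps
    return style_std * (content - mu) / sigma + style_mean

def instance_norm(x, eps=1e-5):
    # Plain instance norm: zero mean, unit std per channel.
    mu = x.mean(dim=(2, 3), keepdim=True)
    sigma = x.std(dim=(2, 3), keepdim=True) + eps
    return (x - mu) / sigma

torch.manual_seed(0)
content = torch.randn(1, 4, 8, 8)
style_mean = torch.tensor([1.0, -2.0, 0.5, 3.0]).view(1, 4, 1, 1)
style_std = torch.tensor([0.5, 2.0, 1.0, 1.5]).view(1, 4, 1, 1)

styled = adain(content, style_mean, style_std)
print(styled.mean(dim=(2, 3)))  # matches the injected style means

# Running a non-adaptive instance norm afterwards erases the style:
wiped = instance_norm(styled)
print(wiped.mean(dim=(2, 3)))   # ~0 for every channel
```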