NVlabs / imaginaire

NVIDIA's Deep Imagination Team's PyTorch Library

Experiment of using Instance Normalization vs Layer Normalization on the Decoder (MUNIT) #171

Open tom99763 opened 1 year ago

tom99763 commented 1 year ago

Here are the results of using different normalization methods on the decoder:

[Image 1: output comparison]

Based on the computations performed by each normalization method, the MUNIT architecture can be summarized as follows.

[Image 2: architecture summary]
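To make the difference concrete, here is a minimal sketch (not the imaginaire implementation; tensor shapes and the helper name are my own) contrasting what Instance Normalization and Layer Normalization compute on a decoder feature map:

```python
import torch
import torch.nn as nn

# Decoder-style feature map of shape (N, C, H, W).
x = torch.randn(8, 256, 32, 32)

# Instance Normalization: statistics are computed per sample *and per channel*
# over the spatial dimensions, so every channel is forced back to zero mean
# and unit variance.
y_in = nn.InstanceNorm2d(256, affine=False)(x)

# Layer Normalization (as used in the MUNIT decoder): statistics are computed
# per sample over channels and spatial dimensions together, so the *relative*
# scale and offset between channels is preserved.
def layer_norm(x, eps=1e-5):
    mean = x.mean(dim=(1, 2, 3), keepdim=True)
    std = x.std(dim=(1, 2, 3), keepdim=True)
    return (x - mean) / (std + eps)

y_ln = layer_norm(x)

# Per-channel means after each normalization: IN drives them all to ~0,
# LN keeps the differences between channels.
print(y_in.mean(dim=(0, 2, 3))[:4])   # ~0 for every channel
print(y_ln.mean(dim=(0, 2, 3))[:4])   # channel-dependent values survive
```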

This means that, since nothing in the upsampling layers tunes the channel correlations (the way Adaptive Instance Normalization does in StyleGAN), using instance normalization during upsampling destroys the channel correlations that were already tuned by the residual blocks with Adaptive Instance Normalization.
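A quick way to see this effect: inject style statistics with AdaIN, then pass the result through an InstanceNorm layer; the per-channel mean and standard deviation carried by the style are normalized away again. This is a hypothetical sketch with my own helper names, not code from the repository:

```python
import torch
import torch.nn as nn

def adain(content, style_mean, style_std, eps=1e-5):
    # Normalize content per sample/channel, then re-scale with style statistics.
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True)
    return (content - c_mean) / (c_std + eps) * style_std + style_mean

content = torch.randn(4, 256, 32, 32)
style_mean = torch.randn(4, 256, 1, 1)        # e.g. produced from the style code
style_std = torch.rand(4, 256, 1, 1) + 0.5

stylized = adain(content, style_mean, style_std)

# If the upsampling layers use InstanceNorm, the style-injected per-channel
# statistics are normalized away again ...
wiped = nn.InstanceNorm2d(256)(stylized)
print(wiped.mean(dim=(2, 3)).abs().max())     # ~0: the style statistics are gone

# ... whereas LayerNorm over (C, H, W) removes only a single per-sample
# mean/std and keeps the channel-wise differences that encode the style.
```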