taesungp / swapping-autoencoder-pytorch

Official Implementation of Swapping Autoencoder for Deep Image Manipulation (NeurIPS 2020)

Why are the spatial and global code normalized twice? #16

Closed sunshineatnoon closed 2 years ago

sunshineatnoon commented 3 years ago

Hi, thanks for open-sourcing this awesome work! I noticed that there seem to be two normalization operations on the spatial and global codes: one at the end of the encoder and one at the beginning of the generator. Is there any special reason for this design? Thanks!

taesungp commented 2 years ago

Hello, I did not have a special reason for that; it is indeed redundant. It was there because I was experimenting with different encoder/generator configurations, so I needed that "double-proofing" normalization.
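For anyone curious why the duplication is harmless: L2 normalization is idempotent, so applying it a second time to an already unit-norm vector is a no-op (up to the epsilon used for numerical stability). Below is a minimal, dependency-free sketch of this property; the helper `l2_normalize` is a stand-in for something like PyTorch's `F.normalize` and is not the repository's actual code.

```python
import math

def l2_normalize(vec, eps=1e-8):
    # Scale a vector to unit L2 norm; eps guards against division by zero,
    # mirroring the behavior of torch.nn.functional.normalize.
    norm = math.sqrt(sum(v * v for v in vec)) + eps
    return [v / norm for v in vec]

# A toy "code" vector, e.g. a global code produced by the encoder.
code = [3.0, 4.0]

once = l2_normalize(code)        # normalized at the end of the encoder
twice = l2_normalize(once)       # normalized again at the start of the generator

# The second pass changes nothing beyond floating-point epsilon,
# which is why the double normalization is redundant but safe.
print(all(abs(a - b) < 1e-6 for a, b in zip(once, twice)))
```

The second call only becomes meaningful if an intermediate configuration skips the encoder-side normalization, which matches the "double-proofing" rationale above.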