NVlabs / NVAE

The Official PyTorch Implementation of "NVAE: A Deep Hierarchical Variational Autoencoder" (NeurIPS 2020 spotlight paper)
https://arxiv.org/abs/2007.03898

FID score of CelebA-HQ 256x256 #30

Open jychoi118 opened 2 years ago

jychoi118 commented 2 years ago

I'm quite confused about the FID of CelebA-HQ 256x256.

In the NCP-VAE and VAEBM papers, it is reported as 40.26, while the recent LSGM paper reports 29.76.

Were there further improvements to NVAE after the publication of NCP-VAE and VAEBM?

arash-vahdat commented 2 years ago

In NCP-VAE and VAEBM, we trained new NVAEs from scratch using a Gaussian image decoder (i.e., for p(x|z)). This was primarily because, for VAEBM, we needed to backpropagate through the images generated by the decoder, which is easy to formulate with a Gaussian decoder via the reparameterization trick. This decoder type was not needed for NCP-VAE, as it is formulated entirely in the latent space. But at the time, we didn't know about the implications of the Gaussian decoder.
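
For illustration, here is a minimal sketch (not the actual NVAE/VAEBM code; tensor shapes and names are made up) of why a Gaussian decoder makes it easy to backpropagate through sampled images via the reparameterization trick:

```python
import torch

# Hypothetical per-pixel mean and log-variance of a Gaussian decoder p(x|z).
# In practice these would come from the decoder network; here they are toy tensors.
mu = torch.zeros(1, 3, 64, 64, requires_grad=True)
log_var = torch.zeros(1, 3, 64, 64)

# Reparameterization: x = mu + sigma * eps with eps ~ N(0, I), so the sampled
# image x is a differentiable function of the decoder outputs.
eps = torch.randn_like(mu)
x = mu + torch.exp(0.5 * log_var) * eps

# Any scalar loss computed on x (e.g., an energy function in VAEBM) can now be
# backpropagated into mu and, through it, into the decoder parameters.
loss = x.pow(2).mean()
loss.backward()
print(mu.grad.shape)  # torch.Size([1, 3, 64, 64])
```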

In the original NVAE paper and later in LSGM, we used the discretized logistic mixture distribution in the decoder. You can read about this distribution in this paper. When writing the LSGM paper, we went back and computed FID for the original publicly available NVAE checkpoints, and we were also surprised to see that they obtain a lower FID (29.76) than the NVAEs trained for NCP-VAE and VAEBM (~40).
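
As a rough sketch of the idea (my simplification, not the repo's implementation, which uses a full mixture over components and handles the edge bins at 0 and 255), the discretized logistic assigns each pixel the probability mass its logistic CDF puts on the discretization bin around the observed intensity:

```python
import torch

def discretized_logistic_log_prob(x, mean, log_scale, half_bin=1.0 / 255.0):
    """Log-probability of pixels x in [-1, 1] under a single discretized
    logistic component: the mass the logistic CDF assigns to the bin around x.
    (The mixture over components and the 0/255 edge cases are omitted.)"""
    centered = x - mean
    inv_scale = torch.exp(-log_scale)
    cdf_plus = torch.sigmoid(inv_scale * (centered + half_bin))   # upper bin edge
    cdf_minus = torch.sigmoid(inv_scale * (centered - half_bin))  # lower bin edge
    return torch.log(torch.clamp(cdf_plus - cdf_minus, min=1e-12))

x = torch.rand(1, 3, 8, 8) * 2 - 1  # toy "image" scaled to [-1, 1]
log_p = discretized_logistic_log_prob(x, torch.zeros_like(x), torch.zeros_like(x))
print(log_p.sum())
```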

Here is why we think the FID score gets better with the discretized logistic mixture: this decoder is a better statistical model for representing pixel intensities in an image, and it forms simple conditional dependencies between the RGB channels. In contrast, the Gaussian decoder is a simple model that predicts the RGB channels independently. Our experiments show that the discretized logistic mixture requires encoding less information in the latent space to reconstruct input images, which in turn translates to fewer holes in the prior distribution. Because of this, the FID score appears to improve with this decoder.
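
To make the channel-dependency contrast concrete, here is a hedged sketch in the spirit of a PixelCNN++-style output head (the names, shapes, and noise stand-in are illustrative assumptions, not the repo's code): the green and blue means are shifted by the already-sampled red (and green) values, whereas a factorized Gaussian decoder predicts the three channels independently.

```python
import torch

# Hypothetical decoder outputs for one mixture component (shapes are illustrative).
mu_r, mu_g, mu_b = torch.zeros(3, 1, 8, 8)              # base per-channel means
c_rg, c_rb, c_gb = torch.tanh(torch.randn(3, 1, 8, 8))  # channel-coupling coefficients

noise = lambda t: 0.05 * torch.randn_like(t)  # stand-in for the per-channel noise

r = mu_r + noise(mu_r)                           # red is sampled first
g = (mu_g + c_rg * r) + noise(mu_g)              # green's mean is shifted by red
b = (mu_b + c_rb * r + c_gb * g) + noise(mu_b)   # blue depends on red and green

# A factorized Gaussian decoder, by contrast, would sample r, g, b independently
# from their own means, with no cross-channel terms.
```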

I hope this clarifies the confusion. If you have any further questions, please let me know here.