NVlabs / NVAE

The Official PyTorch Implementation of "NVAE: A Deep Hierarchical Variational Autoencoder" (NeurIPS 2020 spotlight paper)
https://arxiv.org/abs/2007.03898

Some questions about the inverse autoregressive flow #11

Open uestcwangxiao opened 3 years ago

uestcwangxiao commented 3 years ago

Hi there, I am confused by the inverse autoregressive flow. Do you use another network to fit the distribution q(z|x)? Can I understand it in the following way? As far as I know, in flow-based models people model the data distribution p(x): starting from a random z, we get x = f(z). Here, in this paper, q(z|x) is the distribution you model: you train another network g, sample a random e ~ N(0, I) to get z = g(e), and then you can sample a realistic image through the decoder of the NVAE model, Decoder(z).

arash-vahdat commented 3 years ago

Hi @uestcwangxiao, what we are doing is similar to these two papers, where normalizing flows are used to represent flexible approximate posterior distributions: http://proceedings.mlr.press/v37/rezende15.pdf https://arxiv.org/pdf/1606.04934.pdf

We are learning normalizing flows to represent q(z|x) not p(x).
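To illustrate the distinction, here is a minimal sketch (not NVAE's actual code, and using numpy rather than PyTorch for brevity) of one inverse-autoregressive-flow step applied to the approximate posterior. The encoder first outputs mu and sigma for a base Gaussian; we sample z0 = mu + sigma * eps, then pass z0 through an affine transform whose shift and gate for dimension i depend only on z0[:i], so the Jacobian is triangular and its log-determinant is cheap. All weight names and shapes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # latent dimensionality (illustrative)

def autoregressive_params(z, W_m, W_s):
    # Strictly lower-triangular weights ensure dimension i
    # only sees z[:i], i.e. the map is autoregressive.
    m = z @ W_m.T                       # shift
    s = z @ W_s.T                       # gate pre-activation
    sigma = 1.0 / (1.0 + np.exp(-s))    # sigmoid gate in (0, 1)
    return m, sigma

def iaf_step(z, W_m, W_s):
    # One IAF update in the style of Kingma et al. (2016):
    # z_new = sigma * z + (1 - sigma) * m
    m, sigma = autoregressive_params(z, W_m, W_s)
    z_new = sigma * z + (1.0 - sigma) * m
    # Triangular Jacobian -> log|det| is just sum(log sigma).
    log_det = np.log(sigma).sum(axis=-1)
    return z_new, log_det

# Strictly lower-triangular masks enforce the autoregressive structure
# (a real implementation would use masked layers, e.g. MADE).
mask = np.tril(np.ones((D, D)), k=-1)
W_m = rng.normal(size=(D, D)) * mask
W_s = rng.normal(size=(D, D)) * mask

# Sample from the base Gaussian posterior q0(z|x), then apply the flow.
mu, log_sig = rng.normal(size=D), rng.normal(size=D) * 0.1
eps = rng.normal(size=(8, D))
z0 = mu + np.exp(log_sig) * eps
z1, log_det = iaf_step(z0, W_m, W_s)
print(z1.shape, log_det.shape)  # (8, 4) (8,)
```

The key point of the answer above is that this flow refines the encoder's posterior q(z|x); the resulting z1 is then fed to the decoder as usual, whereas a flow modeling p(x) directly (the questioner's reading) would map noise all the way to images.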