uestcwangxiao opened 3 years ago
Hi @uestcwangxiao, what we are doing is similar to these two papers, which use normalizing flows to represent flexible approximate posterior distributions: http://proceedings.mlr.press/v37/rezende15.pdf https://arxiv.org/pdf/1606.04934.pdf
We are learning normalizing flows to represent q(z|x), not p(x).
Hi there, I am confused by the inverse autoregressive flow. Do you use another network to fit the distribution q(z|x)? Can I understand it in the following way? As far as I know, in flow-based models people want to model the data distribution p(x), so from a random z we can get x = f(z). Here, in this paper, q(z|x) is the distribution you model: you train another network g so that, from a random e ~ N(0, I), you get z = g(e), and you can then sample a realistic image through the decoder of the NVAE model, Decoder(z).
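The mapping described above, z = g(e) with e ~ N(0, I), can be sketched as a single inverse-autoregressive-flow step. This is a minimal NumPy illustration, not NVAE's actual implementation: the strictly lower-triangular masked weights `W_m` and `W_s` are hypothetical stand-ins for the autoregressive networks, which in practice would also be conditioned on the encoder output for x. The key property is that the Jacobian of the transform is triangular, so log q(z|x) is cheap to compute via the change-of-variables formula.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
D = 4  # latent dimensionality (toy value)

# Strictly lower-triangular masks so output i depends only on inputs e_{<i}
# (the autoregressive property IAF relies on). The weights themselves are
# random stand-ins for learned autoregressive networks.
mask = np.tril(np.ones((D, D)), k=-1)
W_m = rng.normal(size=(D, D)) * mask  # produces the shift m(e)
W_s = rng.normal(size=(D, D)) * mask  # produces the gate s(e)

def iaf_step(e):
    """One IAF step: z = s * e + (1 - s) * m, with s, m autoregressive in e."""
    m = e @ W_m.T           # shift, depends only on earlier dimensions of e
    s = sigmoid(e @ W_s.T)  # gate in (0, 1), likewise autoregressive
    z = s * e + (1.0 - s) * m
    # The Jacobian dz/de is lower triangular with diagonal s, so
    # log q(z) = log N(e; 0, I) - sum(log s)  (change of variables).
    log_q = -0.5 * np.sum(e**2 + np.log(2.0 * np.pi)) - np.sum(np.log(s))
    return z, log_q

e = rng.normal(size=D)  # base sample e ~ N(0, I)
z, log_q = iaf_step(e)  # z would then be fed to Decoder(z) to produce an image
```

Because each z_i depends only on e_1, ..., e_i, the forward pass computes all dimensions in parallel, which is what makes IAF fast for sampling q(z|x); in NVAE such z samples are then decoded into images, as the question above describes.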