GFNOrg / GFlowNet-EM

Code for GFlowNet-EM, a novel algorithm for fitting latent variable models with compositional latents and an intractable true posterior.
https://arxiv.org/abs/2302.06576
MIT License

How can we sample from GFN unconditionally in discrete vae case? #3

Closed bugrabaran closed 9 months ago

bugrabaran commented 9 months ago

Section 5.3 of the paper says: "VQ-VAEs assume a uniform prior over the discrete latents $z$. However, GFlowNet-EM enables us to also learn a prior distribution, $p_\theta(z)$, jointly with the decoder $p_\theta(x|z)$." If I am not mistaken, we should be able to generate unseen examples by sampling a latent from the encoder/GFN and feeding it to the decoder. But I am confused about how to sample from this learned prior. Does the sample come from the PixelCNN that is trained in parallel with the GFN?

Any help is much appreciated!

AlexGraikos commented 9 months ago

In the setting where we use a PixelCNN to learn the prior (instead of assuming it is uniform), you can directly sample a latent representation from the trained PixelCNN and decode it into an image. Section D in the Appendix describes this in more detail.

In the code, you can look at the sample_prior(prior, batch_size) function in train_gfn_prior.py (line 15), which samples a categorical value for each cell of the $l_h \times l_w$ latent grid.
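The idea can be sketched as standard autoregressive sampling from a PixelCNN-style prior: the grid is filled in cell by cell, re-running the prior to get a categorical distribution for the next cell conditioned on the cells sampled so far. This is a minimal illustration, not the repo's actual implementation; the grid size, codebook size, and the assumption that `prior` returns per-cell logits of shape `(B, n_codes, l_h, l_w)` are all hypothetical stand-ins.

```python
import torch

def sample_prior_sketch(prior, batch_size, grid=(8, 8), n_codes=512):
    """Hypothetical sketch of autoregressive sampling from a learned prior.

    `prior` is assumed to map a partial latent grid of shape (B, l_h, l_w)
    to per-cell categorical logits of shape (B, n_codes, l_h, l_w); a
    PixelCNN's masked convolutions ensure cell (i, j) only depends on
    previously sampled cells.
    """
    l_h, l_w = grid
    # Start from an all-zero grid; cells are overwritten in raster order.
    z = torch.zeros(batch_size, l_h, l_w, dtype=torch.long)
    for i in range(l_h):
        for j in range(l_w):
            logits = prior(z)  # (B, n_codes, l_h, l_w)
            probs = torch.softmax(logits[:, :, i, j], dim=-1)
            # Draw one categorical code index per batch element.
            z[:, i, j] = torch.multinomial(probs, 1).squeeze(-1)
    return z  # (B, l_h, l_w) grid of code indices, ready for the decoder
```

The sampled grid of code indices would then be passed through the VQ-VAE decoder to produce an image.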

bugrabaran commented 9 months ago

Thanks for the explanation !