deepgenerativemodels / notes

Course notes
MIT License

Missing reference in https://deepgenerativemodels.github.io/notes/vae/ #12

Closed AndreaPi closed 2 years ago

AndreaPi commented 5 years ago

In https://deepgenerativemodels.github.io/notes/vae/, paragraph

Learning Directed Latent Variable Models

states that

As we have seen previously, optimizing an empirical estimate of the KL divergence is equivalent to maximizing the marginal log-likelihood $\log p(\mathbf{x})$ over $\mathcal{D}$

This isn't mentioned anywhere in the rest of the course notes. It would be useful for the learner if the proof of this equivalence were added, or at least a reference to it.
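
For reference, here is a minimal sketch of the equivalence I have in mind (the notation $p_{\mathrm{data}}$ for the data distribution and $p_\theta$ for the model is my own assumption, not necessarily what the notes use):

```latex
% KL divergence between the data distribution and the model:
% D_KL(p_data || p_theta)
%   = E_{x ~ p_data}[ log p_data(x) ] - E_{x ~ p_data}[ log p_theta(x) ]
%
% The first term does not depend on theta, so minimizing the KL divergence
% over theta is the same as maximizing the expected log-likelihood:
\begin{align}
\arg\min_\theta D_{\mathrm{KL}}\!\left(p_{\mathrm{data}} \,\|\, p_\theta\right)
  &= \arg\max_\theta \; \mathbb{E}_{\mathbf{x} \sim p_{\mathrm{data}}}
     \left[ \log p_\theta(\mathbf{x}) \right] \\
% Replacing the expectation by its empirical (Monte Carlo) estimate over the
% dataset D gives the familiar maximum-likelihood objective:
  &\approx \arg\max_\theta \; \frac{1}{|\mathcal{D}|}
     \sum_{\mathbf{x} \in \mathcal{D}} \log p_\theta(\mathbf{x})
\end{align}
```

Even a short remark along these lines (or a pointer to the relevant lecture/section) would make the statement self-contained.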