Open · xiyan524 opened this issue 5 years ago
This formula is the closed-form KL divergence term of the ELBO objective. The model assumes that the topic proportion vectors are distributed via a multivariate Gaussian, and this closed-form term penalizes the VAE's posterior for straying too far from the standard normal prior. As for the factor of 2: in this code `log_sigma` is the log standard deviation rather than the log variance, so log σ² = 2 · log σ and σ² = exp(log σ)², which accounts exactly for the difference between the two formulas.
Reading this paper may help: https://arxiv.org/abs/1312.6114 (Kingma & Welling, Auto-Encoding Variational Bayes).
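For reference, the closed-form KL between the approximate posterior and the standard normal prior is D_KL(N(μ, σ²) ‖ N(0, I)) = -½ Σ_j (1 + log σ_j² − μ_j² − σ_j²). To make the equivalence concrete, here is a minimal sketch (not the repo's exact code; it assumes `log_sigma` holds the log standard deviation, which the `torch.exp(log_sigma) ** 2` term implies) checking that the repo's formula matches the textbook log-variance form once both are written in the same parameterization:

```python
import torch

torch.manual_seed(0)
mu = torch.randn(4)
log_sigma = torch.randn(4)  # assumed to be log(standard deviation)

# Repo's parameterization: log(sigma^2) = 2 * log_sigma,
# sigma^2 = exp(log_sigma) ** 2
kl_repo = -0.5 * torch.sum(
    torch.ones_like(mu) + 2 * log_sigma - mu ** 2 - torch.exp(log_sigma) ** 2
)

# Textbook form written in terms of the log variance, log_var = 2 * log_sigma
log_var = 2 * log_sigma
kl_standard = -0.5 * torch.sum(1 + log_var - mu ** 2 - torch.exp(log_var))

print(torch.allclose(kl_repo, kl_standard))  # True
```

So the two formulas compute the same quantity; they differ only in whether the network is taken to output log σ or log σ².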
Thanks a lot~
Hi, thanks for your excellent work.
I have a question about the formula for the KL divergence. In the code, the formula is:

kl_divergence = torch.ones_like(mu) + 2 * log_sigma - (mu ** 2) - (torch.exp(log_sigma) ** 2)

while I think the standard formula is:

kl_divergence = torch.ones_like(mu) + log_sigma - (mu ** 2) - torch.exp(log_sigma)
Therefore, I'm curious about this discrepancy. Could anyone help explain it?