yilunzhao opened this issue 4 years ago
It's cross entropy (i.e., the negative log likelihood of a multinomial whose categories are words), where `preds` are already log-transformed in the `decode` function. It's given implicitly in the first part of Equation 7, but not explicitly. It is also hidden in the "Estimate the ELBO and its gradient (backprop.)" step of Algorithm 1.
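A minimal numpy sketch of that equivalence (all names and values here are illustrative, not the repo's actual code): when `preds` holds per-document log-probabilities, `-(preds * bows).sum(1)` is exactly the multinomial negative log likelihood, up to the count-dependent normalizing constant, which carries no gradient.

```python
import numpy as np

# Toy vocabulary of 4 words; two documents as bag-of-words counts.
bows = np.array([[2.0, 0.0, 1.0, 1.0],
                 [0.0, 3.0, 0.0, 1.0]])

# Per-document word distributions (rows sum to 1), as a decoder might emit.
probs = np.array([[0.4, 0.1, 0.3, 0.2],
                  [0.1, 0.6, 0.1, 0.2]])
preds = np.log(probs)  # decode() is said to return log-probabilities

# The expression from forward(): per-document cross entropy.
recon_loss = -(preds * bows).sum(1)

# The same quantity written as an explicit sum over word types.
manual = np.array([-sum(c * np.log(p) for c, p in zip(doc, dist))
                   for doc, dist in zip(bows, probs)])
assert np.allclose(recon_loss, manual)
```

Each repeated occurrence of a word simply multiplies its log-probability by the count, which is why the bag-of-words vector appears as a plain elementwise factor.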
What puzzles me a bit is that at the end of Algorithm 1 the variational parameters and the model parameters are updated separately, but in the implementation they are updated jointly via regular backprop.
Yeah, thanks for your help! I hadn't noticed that before. What is the difference between updating jointly and updating separately via regular backprop?
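For intuition, here is a toy numpy sketch (everything below is illustrative, not the repo's code) contrasting the two schemes: a joint update steps all parameters using gradients evaluated at the same point, which is what a single `loss.backward(); optimizer.step()` does, while a separate (alternating) update steps one group first and re-evaluates the gradient before stepping the other.

```python
import numpy as np

# Toy loss L(a, b) = (a - 1)^2 + (a*b - 2)^2 with two "parameter groups".
def grads(a, b):
    da = 2 * (a - 1) + 2 * (a * b - 2) * b
    db = 2 * (a * b - 2) * a
    return da, db

lr = 0.05
a, b = 0.0, 0.0

# Joint update: both groups use gradients computed at the SAME point.
da, db = grads(a, b)
a_joint, b_joint = a - lr * da, b - lr * db

# Alternating update: step a first, then recompute the gradient of b
# at the NEW a before stepping b.
da, _ = grads(a, b)
a_alt = a - lr * da
_, db = grads(a_alt, b)
b_alt = b - lr * db

print(a_joint, b_joint, a_alt, b_alt)
```

After one step the two schemes generally land at slightly different points (here `b` moves only under the alternating scheme). With small learning rates both converge to the same kind of stationary point, which is presumably why the implementation can get away with a single joint backprop step.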
Hi, I cannot understand the expression `recon_loss = -(preds * bows).sum(1)` in the `forward()` function of etm.py. Could you help explain it? The loss function seems to be different from the equation defined in the paper. Thanks!
`recon_loss` is the expectation of log p(d | parameters), the first term in Eq. 7. But I'm curious about how the second term is computed. Why does the KL equal `-0.5 * torch.sum(1 + logsigma_theta - mu_theta.pow(2) - logsigma_theta.exp(), dim=-1).mean()`?
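That expression is the standard closed-form KL divergence between a diagonal Gaussian q = N(mu, diag(sigma^2)) and the standard-normal prior N(0, I), with `logsigma_theta` interpreted as the log *variance* (hence the `.exp()`). A quick numpy check (illustrative values) that the code's formula matches a direct Monte Carlo estimate of KL(q || p) = E_q[log q(x) - log p(x)]:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_theta = np.array([0.5, -1.0, 0.2])
logsigma_theta = np.array([-0.3, 0.1, -1.2])  # log of the variances
sigma2 = np.exp(logsigma_theta)

# Closed form from the code, rewritten in numpy (single sample, no .mean()).
kl_code = -0.5 * np.sum(1 + logsigma_theta - mu_theta**2 - sigma2)

# Monte Carlo estimate: draw x ~ q, average log q(x) - log p(x).
x = mu_theta + np.sqrt(sigma2) * rng.standard_normal((200000, 3))
log_q = -0.5 * np.sum((x - mu_theta)**2 / sigma2
                      + np.log(2 * np.pi * sigma2), axis=1)
log_p = -0.5 * np.sum(x**2 + np.log(2 * np.pi), axis=1)
kl_mc = np.mean(log_q - log_p)

assert abs(kl_code - kl_mc) < 0.05
```

Per dimension the closed form is 0.5 * (mu^2 + sigma^2 - log sigma^2 - 1), which is just the code's expression with the sign pulled inside. The `.mean()` in the code then averages the KL over the documents in the batch.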
I've met the same problem. Have you solved it yet?