adjidieng / DETM


What exactly is NELBO and why do we optimize it? #8

Open legurp opened 3 years ago

legurp commented 3 years ago

Can someone tell me why we optimize the NELBO? The paper only says "We optimize the ELBO with respect to the variational parameters." As far as I understand it, D-ETM uses three neural networks to parameterize the variational distributions for theta, eta, and alpha, and then computes a KL divergence for each of them. Are those KL values then simply added together and optimized jointly? And why is the NLL added on top? Also, I thought "Solving this optimization problem is equivalent to maximizing the evidence lower bound (ELBO)" meant that we maximize the ELBO, yet the model seems to minimize it as a loss.

Sorry, I am pretty confused (I am rather new to Bayesian statistics and variational inference)

legurp commented 3 years ago

In detm.py, the forward() function computes:

```python
nelbo = nll + kl_alpha + kl_eta + kl_theta
return nelbo, nll, kl_alpha, kl_eta, kl_theta
```
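(For context: nll is the reconstruction term, the negative log-likelihood of the observed words, and the kl_* terms are the KL divergences for alpha, eta, and theta.) A minimal sketch of how a loss with this shape is typically assembled in PyTorch, using toy tensors and placeholder KL values rather than the repo's actual inputs:

```python
import torch
import torch.nn.functional as F

# Toy stand-ins: 4 documents over a 10-word vocabulary.
bows = torch.randint(0, 5, (4, 10)).float()   # bag-of-words counts
logits = torch.randn(4, 10)                   # per-document word logits

# NLL: negative log-likelihood of the observed counts under the
# predicted word distribution (a multinomial-style reconstruction term).
log_probs = F.log_softmax(logits, dim=-1)
nll = -(bows * log_probs).sum(1).mean()

# Placeholder KL values; in D-ETM these come from the Gaussian
# variational posteriors over alpha, eta, and theta.
kl_alpha, kl_eta, kl_theta = torch.rand(3)

# The negative ELBO is the training loss.
nelbo = nll + kl_alpha + kl_eta + kl_theta
```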

In main.py, the training loop then minimizes it:

```python
loss, nll, kl_alpha, kl_eta, kl_theta = model(
    data_batch, normalized_data_batch, times_batch,
    train_rnn_inp, args.num_docs_train)
loss.backward()
optimizer.step()
```
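So the loss here is the NELBO returned by forward(). To see that minimizing it is the same as maximizing the ELBO, here is a tiny self-contained toy (a one-dimensional Gaussian model, nothing to do with D-ETM's architecture) where the quantity passed to backward() is exactly NLL + KL; the zero_grad() call is standard PyTorch housekeeping even though it isn't in the snippet quoted above:

```python
import torch

# Fit q(z) = N(mu, sigma^2) to the model z ~ N(0, 1), x ~ N(z, 1).
mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
optimizer = torch.optim.Adam([mu, log_sigma], lr=0.05)

x = torch.tensor([1.2])  # a single observation

for step in range(200):
    optimizer.zero_grad()
    # Reparameterized sample z ~ q(z).
    z = mu + torch.exp(log_sigma) * torch.randn(1)
    # NLL: -log p(x | z) for a unit-variance Gaussian, up to a constant.
    nll = 0.5 * (x - z) ** 2
    # KL(q || p) against the N(0, 1) prior, in closed form.
    kl = 0.5 * (mu**2 + torch.exp(2 * log_sigma) - 2 * log_sigma - 1)
    nelbo = nll + kl   # the negative ELBO, used as the loss ...
    nelbo.backward()   # ... so minimizing it maximizes the ELBO
    optimizer.step()
```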

mona-timmermann commented 3 years ago

The following paper might be helpful: https://arxiv.org/abs/2002.07514

jfcann commented 3 years ago

Hi @legurp, NELBO is the "negative ELBO", and NLL stands for "negative log-likelihood". It's true that papers usually say they maximise the ELBO, but since log-probabilities are <= 0, it is often more convenient to multiply the ELBO by -1 (so that it becomes positive) and then minimise this new quantity as a loss. The two formulations are equivalent: the parameters that minimise the NELBO are exactly the ones that maximise the ELBO.
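Schematically, with a generic latent variable z (this is the standard decomposition, not the paper's exact notation):

```latex
\mathrm{ELBO} = \mathbb{E}_{q(z)}\big[\log p(x \mid z)\big] - \mathrm{KL}\big(q(z)\,\|\,p(z)\big),
\qquad
\mathrm{NELBO} = -\mathrm{ELBO}
= \underbrace{-\,\mathbb{E}_{q(z)}\big[\log p(x \mid z)\big]}_{\text{NLL}}
+ \mathrm{KL}\big(q(z)\,\|\,p(z)\big).
```

In D-ETM the variational posterior factorises over theta, eta, and alpha, so the single KL term splits into kl_theta + kl_eta + kl_alpha, which is exactly the sum computed in forward(). Minimising the NELBO by gradient descent is therefore the same optimisation as maximising the ELBO.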