tensorflow / probability

Probabilistic reasoning and statistical analysis in TensorFlow
https://www.tensorflow.org/probability/
Apache License 2.0

Score Function Estimator for Black Box Variational Inference #622

Closed dionman closed 4 years ago

dionman commented 4 years ago

I was wondering how a Monte Carlo estimate of the ELBO should be written so that automatic differentiation correctly derives its gradient via the score function estimator, as in equation (3) of the BBVI paper https://arxiv.org/pdf/1401.0118.pdf. Should I explicitly include the expression for the joint_log_prob of the variational approximation and apply automatic differentiation only to that term, or is this already handled when computing the gradient of the ELBO (as I understand is the case in the autograd implementation https://github.com/HIPS/autograd/blob/master/examples/black_box_svi.py)?
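
For concreteness, equation (3) is the estimator `(1/S) * sum_s grad_lambda log q(z_s | lambda) * (log p(x, z_s) - log q(z_s | lambda))` with `z_s ~ q(z | lambda)`. Below is a minimal sketch of one way to write a surrogate loss whose automatic gradient matches this estimator; the Gamma-Poisson model, the data, and the LogNormal variational family are made up purely for illustration:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

x_observed = tf.constant([3., 4., 2.])  # made-up data

def joint_log_prob(z):
  # log p(x, z) for a hypothetical Gamma-Poisson model; z has shape [num_samples].
  prior = tfd.Gamma(concentration=2., rate=2.)
  likelihood = tfd.Poisson(rate=z)
  return (prior.log_prob(z)
          + tf.reduce_sum(likelihood.log_prob(x_observed[:, tf.newaxis]), axis=0))

# Variational parameters of q(z | lambda) = LogNormal(loc, exp(log_scale)).
loc = tf.Variable(0.)
log_scale = tf.Variable(0.)

def surrogate_loss(num_samples=64):
  q = tfd.LogNormal(loc=loc, scale=tf.exp(log_scale))
  # Block pathwise gradients: samples are treated as constants, as the
  # score function estimator requires.
  z = tf.stop_gradient(q.sample(num_samples))
  log_q = q.log_prob(z)
  learning_signal = joint_log_prob(z) - log_q   # log p(x, z) - log q(z)
  # Gradient of this surrogate w.r.t. (loc, log_scale) is the Monte Carlo
  # estimate E_q[grad log q(z) * (log p(x, z) - log q(z))], i.e. equation (3).
  surrogate = tf.reduce_mean(tf.stop_gradient(learning_signal) * log_q)
  elbo = tf.reduce_mean(learning_signal)        # for monitoring only
  return -surrogate, elbo

optimizer = tf.keras.optimizers.Adam(learning_rate=0.05)
for step in range(200):
  with tf.GradientTape() as tape:
    loss, elbo = surrogate_loss()
  grads = tape.gradient(loss, [loc, log_scale])
  optimizer.apply_gradients(zip(grads, [loc, log_scale]))
```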

brianwa84 commented 4 years ago

I assume you figured this out? Many of the samplers in TFP are reparameterized explicitly (e.g. Normal) or implicitly (e.g. Gamma), so TF autodiff just works. The score function estimator can have quite high variance, so I think you would want to look at relaxations for discrete variables and reparameterized samplers for continuous ones.
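
A small sketch of what this buys you (the losses here are arbitrary and only meant to show that gradients flow through the samples themselves, with no score-function term needed):

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

loc = tf.Variable(0.)
concentration = tf.Variable(2.)

with tf.GradientTape() as tape:
  # Explicitly reparameterized: sample = loc + scale * eps, eps ~ N(0, 1).
  z_normal = tfd.Normal(loc=loc, scale=1.).sample(100)
  # Implicitly reparameterized: gradients via implicit differentiation.
  z_gamma = tfd.Gamma(concentration=concentration, rate=1.).sample(100)
  loss = tf.reduce_mean(tf.square(z_normal)) + tf.reduce_mean(z_gamma)

# Both gradients are defined because the samples carry pathwise gradients.
print(tape.gradient(loss, [loc, concentration]))

# For discrete latents, a continuous relaxation keeps samples differentiable
# in the logits:
logits = tf.Variable([0.2, 0.5, -0.1])
with tf.GradientTape() as tape:
  relaxed = tfd.RelaxedOneHotCategorical(temperature=0.5, logits=logits)
  sample = relaxed.sample(10)
  loss = tf.reduce_mean(sample[..., 0])
print(tape.gradient(loss, logits))
```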

dionman commented 4 years ago

Thanks. I implemented black-box VI for my model by defining an appropriate tfp.monte_carlo.expectation under a non-reparameterized distribution.
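
For readers landing here, a sketch of that kind of usage (the Categorical variational family and the target_log_prob below are hypothetical stand-ins, not the model from this thread): with use_reparameterization=False, tfp.monte_carlo.expectation builds a score-function surrogate internally, so differentiating the returned value gives the estimator discussed above.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Hypothetical non-reparameterized variational distribution over a discrete latent.
logits = tf.Variable(tf.zeros(3))

def target_log_prob(z):
  # Stand-in for the model's joint_log_prob evaluated at the discrete latent z.
  return tf.math.log(tf.gather(tf.constant([0.6, 0.3, 0.1]), z))

with tf.GradientTape() as tape:
  q = tfd.Categorical(logits=logits)
  samples = q.sample(1000)
  elbo = tfp.monte_carlo.expectation(
      f=lambda z: target_log_prob(z) - q.log_prob(z),
      samples=samples,
      log_prob=q.log_prob,            # needed for the score-function surrogate
      use_reparameterization=False)   # q.sample is not reparameterized
grads = tape.gradient(-elbo, [logits])
```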