blei-lab / edward

A probabilistic programming language in TensorFlow. Deep generative models, variational inference.
http://edwardlib.org

mean field for Markov Random Fields: performance? #71

Open weiliu620 opened 8 years ago

weiliu620 commented 8 years ago

This is more of a mailing-list question, but it looks like there is no user group yet, so let me ask it here:

Suppose the observed and hidden variables live in an image domain, where I assume a Markov random field with simple pairwise connectivity. The number of hidden variables is then the number of pixels in the image. Will the mean-field inference algorithm work for such a large number of variables? I'm mostly interested in the performance.

I've seen some work from the deep learning community casting mean-field updates as convolutional layers, and I'm curious how TensorFlow is used for the mean-field update in this library.
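
For concreteness, here is a minimal sketch (not Edward's API) of what I have in mind: mean-field updates for a pairwise Ising-style MRF on an image grid, where the neighbor sum in each update is computed as a convolution over the whole image at once. The `coupling` strength, `unary` term, and 4-neighbor kernel are illustrative assumptions.

```python
import tensorflow as tf

H, W = 64, 64
coupling = 0.5                                    # pairwise strength (assumed)
unary = tf.random.normal([1, H, W, 1])            # per-pixel evidence (assumed)

# 4-neighborhood kernel: sums the means of the north/south/east/west neighbors.
kernel = tf.constant([[0., 1., 0.],
                      [1., 0., 1.],
                      [0., 1., 0.]], dtype=tf.float32)
kernel = tf.reshape(kernel, [3, 3, 1, 1])

m = tf.zeros([1, H, W, 1])                        # mean-field means in [-1, 1]
for _ in range(50):                               # fixed-point iterations
    neighbor_sum = tf.nn.conv2d(m, kernel, strides=1, padding="SAME")
    m = tf.tanh(coupling * neighbor_sum + unary)  # Ising mean-field update

q_prob = (m + 1.0) / 2.0                          # q(z_i = +1) per pixel
```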

I haven't done any experiments yet. I just saw someone tweet about this project, and it looks great!

dustinvtran commented 8 years ago

hi @weiliu620: Mean-field using black box inference will scale if you take advantage of Rao-Blackwellization, that is, the update of a latent variable z_i only depends on its Markov blanket; see Ranganath et al. (2014). This will do well so long as the Markov random field is not too densely connected, e.g., a sparse graph or one with blockwise/convolutional structure.
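
As a rough sketch of the Markov-blanket idea (not Edward code; the helper names are hypothetical), the update for one site only touches its neighbors, so the per-update cost is O(|blanket|) rather than O(number of pixels):

```python
import numpy as np

def neighbors(i, j, H, W):
    """4-connected Markov blanket of pixel (i, j) on an H x W grid."""
    cands = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return [(a, b) for a, b in cands if 0 <= a < H and 0 <= b < W]

def update_site(m, unary, i, j, coupling=0.5):
    """Mean-field update for a single site, using only its Markov blanket."""
    H, W = m.shape
    s = sum(m[a, b] for a, b in neighbors(i, j, H, W))
    return np.tanh(coupling * s + unary[i, j])
```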

In Edward, we haven't implemented Rao-Blackwellization yet because it requires a graphical modeling language with information about the model's graph structure. If you have a suggestion for this, it would be an excellent step forward. (We may have to write this language ourselves.)