The semantics of the current backpropagation is to take the gradient of the total score with respect to the parameters, treating any stochastic nodes that were not scored as constants that don't depend on the parameters (even if the distributions of those stochastic nodes do depend on the parameters).
The semantics of the stochastic extension to backpropagation is to estimate the gradient of the expected total score function with respect to parameters, where the expectation is taken with respect to stochastic nodes that were not scored but may depend on the parameters.
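Concretely (my notation, not Gen.jl's: write $\theta$ for the parameters, $x$ for the unscored stochastic choices with density $p_\theta(x)$, and $s(x;\theta)$ for the total score), the two semantics are:

```latex
% Current semantics: differentiate the score with the unscored choices held fixed
\nabla_\theta\, s(x;\theta) \quad \text{with } x \text{ treated as a constant}

% Extended semantics: differentiate the expected score, estimated via the
% likelihood-ratio (score-function / REINFORCE) identity
\nabla_\theta\, \mathbb{E}_{x \sim p_\theta}\!\left[ s(x;\theta) \right]
  = \mathbb{E}_{x \sim p_\theta}\!\left[ s(x;\theta)\, \nabla_\theta \log p_\theta(x)
      + \nabla_\theta s(x;\theta) \right]
```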
It will also be necessary to work out the case where the stochastic nodes are themselves scored (i.e., when proposing).
Skimmed the paper. Looks like they may be doing the same thing that Venture does for this circumstance, in what I've been calling "the fixing randomness trick", though I didn't try to follow the math carefully enough to be sure.
The black box variational inference example (https://github.com/probcomp/gen-examples/blob/df2e16a5c5280e029a19036754fc41a36898b5cb/black-box-variational-inference/linreg_bbvi.jl) should simplify further when this feature is added: I think the gradient-times-score product may become unnecessary, because it will be performed automatically since the output nodes are stochastic (see the sketch below).
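For reference, a minimal sketch of the manual product this feature would absorb. The names `sample_q`, `logpdf_model`, `logpdf_q`, and `grad_logpdf_q` are hypothetical stand-ins, not the actual functions in linreg_bbvi.jl:

```julia
# Score-function (REINFORCE) estimate of the gradient of the ELBO with
# respect to the variational parameters.
function elbo_gradient_estimate(params::Vector{Float64}, data; num_samples::Int=100)
    total = zeros(length(params))
    for _ in 1:num_samples
        z = sample_q(params)  # draw from the variational family q(z; params)
        learning_signal = logpdf_model(data, z) - logpdf_q(z, params)
        # The manual gradient-times-score product: the gradient of log q,
        # scaled by the learning signal. The proposed feature would perform
        # this product automatically during backpropagation.
        total .+= learning_signal .* grad_logpdf_q(z, params)
    end
    total ./ num_samples
end
```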
Implement the technique in this paper from Abbeel's group: https://arxiv.org/pdf/1506.05254.pdf. This should allow us to express variational autoencoders very concisely. It's a very cool technique, and ideally suited to implementation in Gen.jl.
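For context, the central construction in that paper, as I read it, is a surrogate objective whose ordinary gradient is an unbiased estimator of the gradient of the expected total cost (my summary and notation, not the paper's exact statement):

```latex
% Surrogate loss over a stochastic computation graph: \mathcal{S} = stochastic
% nodes, \mathcal{C} = cost nodes, \hat{Q}_i = sampled sum of costs downstream
% of node z_i, treated as a constant with respect to \theta.
L(\theta) = \sum_{i \in \mathcal{S}} \log p\big(z_i \mid \mathrm{pa}(z_i); \theta\big)\, \hat{Q}_i
          + \sum_{c \in \mathcal{C}} c(\theta)
\qquad\Longrightarrow\qquad
\mathbb{E}\big[\nabla_\theta L(\theta)\big]
  = \nabla_\theta\, \mathbb{E}\Big[\textstyle\sum_{c \in \mathcal{C}} c\Big]
```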