tweag / monad-bayes

A library for probabilistic programming in Haskell.

Partial exact inference, inspired by "Delayed Sampling and Automatic Rao–Blackwellization of Probabilistic Programs" and ProbZelus #224

Open · turion opened this issue 2 years ago

turion commented 2 years ago

While reading through the ProbZelus paper, I became aware that there are techniques for partial exact inference in Bayesian networks using conjugate priors. This is described e.g. in https://arxiv.org/pdf/1708.07787.pdf, and the algorithm is improved in the ProbZelus paper. I believe that after #177, something similar can be implemented here. Basically, we would need a way to specify a Bayesian network formally, through a monad interface. All the probability distribution functions like normal etc. would then not output a sample of that distribution, but a formal variable that can later be used for conditioning. This cannot quite be implemented as a MonadCond; a more specialised version (one that equates two expressions in the monad instead of conditioning on an arbitrary boolean) would have to be implemented instead. There is some discussion and links to some initial work here:

- https://github.com/tweag/monad-bayes/issues/144
- https://github.com/tweag/monad-bayes/issues/144#issuecomment-1228708170
- https://github.com/tweag/monad-bayes/issues/144#issuecomment-1228786940

Some pseudocode sketch:

-- A transformer that records a Bayesian network symbolically
-- instead of drawing concrete samples.
data BayesianNetworkT m a = ...

type VarIndex = Int

-- Formal (symbolic) random variables, in place of sampled values.
data FormalVar
  = Constant Double
  | UniformDist VarIndex
  | NormalDist VarIndex FormalVar FormalVar  -- mean and standard deviation as formal expressions
  | FormalVar :+: FormalVar
  | ...

-- Allocate a fresh formal variable in the network.
newVar :: BayesianNetworkT m FormalVar

instance MonadSample (BayesianNetworkT m) where
  type Real (BayesianNetworkT m) = FormalVar  -- assumes the associated real type from #177

  random = newVar

-- Condition the network by equating a formal variable with an observed value.
observe :: BayesianNetworkT m FormalVar -> FormalVar -> BayesianNetworkT m ()

The observe function would have to do the conditioning analytically via conjugate priors and output a simplified network (see the Normal-Normal sketch below for what such an analytic update looks like).
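To make the conjugacy step concrete, here is a minimal, self-contained sketch of the analytic update such an observe could perform for the case of a Normal prior on a mean with known observation noise. None of this is existing monad-bayes API; the names Gaussian and posteriorMean are made up for illustration.

```haskell
-- Normal-Normal conjugate update with known observation noise.
-- Illustrative only; not part of monad-bayes.

-- | A Gaussian, parametrised by mean and variance.
data Gaussian = Gaussian { mean :: Double, variance :: Double }
  deriving Show

-- | Given a Gaussian prior on a latent mean, the observation noise variance,
-- and one observed value, return the exact Gaussian posterior on the mean
-- (a precision-weighted average of prior mean and observation).
posteriorMean :: Gaussian -> Double -> Double -> Gaussian
posteriorMean (Gaussian mu0 v0) obsVar x = Gaussian mu1 v1
  where
    precision = 1 / v0 + 1 / obsVar
    v1 = 1 / precision
    mu1 = v1 * (mu0 / v0 + x / obsVar)

-- >>> posteriorMean (Gaussian 0 1) 0.5 2.0
-- Gaussian {mean = 1.3333333333333333, variance = 0.3333333333333333}
```

A delayed-sampling observe would apply closed-form updates like this to the parents of the observed node instead of weighting samples, which is the Rao-Blackwellization the paper refers to.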

reubenharry commented 2 years ago

Ah, I share your interest in investigating this. I'll have a read of the paper too. I've also thought about doing what I suggest above, but have never thought through the details.

I have four very general thoughts that are not totally related, but this seems like a good place to put them:

  1. Have you seen https://arxiv.org/pdf/2112.13251.pdf? It seems like something you might find interesting that is also in a similar space (i.e. partial exact inference, reactive). I'm not disinterested in implementing it in Haskell, especially given the recent progress on reactive stuff.

  2. I wonder if your proposal here is related to how Gen (another PPL) handles probabilistic programs. In general, my sense is that the Gen people know what they're doing, and I know that some Gen-related techniques are used in the Haskell ProbZelus implementation, so I've been meaning to inspect it carefully.

  3. There's a talk here, https://www.youtube.com/watch?v=xLgqx4DK49k&t=4s&ab_channel=ACMSIGPLAN, about a different way to do observations. Probably mostly unrelated, but I'm noting it for good measure.

  4. I know that Hakaru, another PPL implemented in Haskell, handles conjugacy and various other things by having a static representation of the probabilistic program; a rough sketch of that idea is below.
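As a rough illustration of what a static representation makes possible (this is not Hakaru's actual encoding; Dist, Prog and example are made-up names for the sketch), the probabilistic program can be reified as a data structure that an analysis pass can pattern-match on to find conjugate structure:

```haskell
{-# LANGUAGE GADTs #-}

-- A minimal free-monad-style sketch of a static probabilistic program.
-- Illustrative only; this is not Hakaru's actual encoding.
data Dist a where
  Normal :: Double -> Double -> Dist Double  -- mean, standard deviation
  Beta   :: Double -> Double -> Dist Double

data Prog a where
  Pure    :: a -> Prog a
  Sample  :: Dist x -> (x -> Prog a) -> Prog a           -- latent draw; continuation receives the value
  Observe :: Dist Double -> Double -> Prog a -> Prog a   -- condition on an observed datum

-- Because the program is plain data, a rewrite pass can inspect it, e.g. to
-- recognise a Normal prior whose draw parametrises a Normal observation and
-- replace the pair with the exact posterior.
example :: Prog Double
example =
  Sample (Normal 0 1) $ \mu ->
    Observe (Normal mu 0.5) 2.0 (Pure mu)
```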

turion commented 1 year ago

CC @idontgetoutmuch

turion commented 1 year ago

For some WIP, see https://github.com/turion/monad-bayes/tree/dev_delayed_sampling