Open · nalourie opened this issue 9 years ago
Because the MH implementation updates the random variables one at a time, the resulting chain is not ergodic for certain models and thus doesn't actually sample from the posterior distribution. Example:

I ran this code on the Probabilistic Models of Cognition book's website.

My quick suggestion for a fix: do a full state transition (resampling every dimension) regularly at the beginning; once you have an estimate of how many "components" the Markov chain has, you can decide how often to do a full state transition after that.

A full state transition requires some annoying book-keeping (e.g., what if some random variables disappear or have new types / supports?). A simpler solution: change the proposal distribution to sample directly from the prior with some small probability. Thoughts, @ngoodman?

You could also try periodically allowing the chain to transition to a state that doesn't satisfy the condition, and not recording that state as a sample when you do.

(Edit: accidentally closed the issue, sorry about that.)
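To make the failure mode concrete, here is a minimal sketch (hypothetical model, not the original code from the book site) of single-site MH on a conditioned model where it is non-ergodic, together with the prior-mixing fix: `x, y ~ Bernoulli(0.5)` conditioned on `x == y`. The posterior puts mass 1/2 on each of `(0, 0)` and `(1, 1)`, but flipping one variable at a time always breaks the condition, so the single-site sampler never leaves its initial state.

```python
import random

def single_site_step(state):
    """Propose flipping one randomly chosen variable; reject if the
    condition x == y is violated (likelihood 0 => acceptance prob 0)."""
    i = random.randrange(2)
    proposal = list(state)
    proposal[i] = 1 - proposal[i]
    if proposal[0] == proposal[1]:   # condition satisfied
        return tuple(proposal)       # symmetric proposal: accept
    return state                     # zero-probability proposal: reject

def mixed_step(state, prior_prob=0.1):
    """With small probability, propose a full resample from the prior
    (restoring ergodicity); otherwise do a single-site update. For an
    independence proposal from the prior under a hard condition, the
    MH acceptance ratio reduces to 1 iff the proposal satisfies it."""
    if random.random() < prior_prob:
        proposal = (random.randrange(2), random.randrange(2))
        if proposal[0] == proposal[1]:
            return proposal          # condition holds: accept
        return state                 # condition violated: reject
    return single_site_step(state)

def run(step, n=10000, start=(0, 0)):
    state, counts = start, {(0, 0): 0, (1, 1): 0}
    for _ in range(n):
        state = step(state)
        counts[state] += 1
    return counts

random.seed(0)
print(run(single_site_step))  # stuck: all mass stays on the (0, 0) start state
print(run(mixed_step))        # visits both modes, roughly half the mass each
```

Since each kernel leaves the posterior invariant, so does the mixture, and the occasional full resample from the prior lets the chain jump between modes that single-site updates cannot connect.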