TuringLang / GeneralisedFilters.jl

Filtering and smoothing algorithms for state space models with analytic (or approximately/partially analytic) solutions
MIT License

Likelihood computations with conditional resampling #16

Open THargreaves opened 3 days ago

THargreaves commented 3 days ago

I've realised that I had a misconception about how to compute the incremental marginal likelihoods at the end of each update in the presence of conditional resampling.

Chopin's book gives this formula,

[image: incremental likelihood formula from Chopin and Papaspiliopoulos]
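Transcribing from the surrounding discussion (so treat this as a sketch rather than a verbatim quote of the book), with $w_t^n$ the unnormalised weight of particle $n$ at time $t$ and $L_t^N$ the running likelihood estimate, the formula is roughly

$$
\ell_t^N =
\begin{cases}
\dfrac{1}{N} \sum_{n=1}^{N} w_t^n & \text{if resampling occurred at the previous step}, \\
\dfrac{\sum_{n=1}^{N} w_t^n}{\sum_{n=1}^{N} w_{t-1}^n} & \text{otherwise},
\end{cases}
\qquad
L_t^N = L_{t-1}^N \, \ell_t^N .
$$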

but it's a bit clunky because it requires us to record whether resampling took place. Charles had a nice solution, noting that these formulas coincide if we reset the weights to unity after resampling (the denominator of the second case then becomes $N$, recovering the first case).

Since this works, I've reverted to this approach in 731d206.

I'm still a bit uncomfortable with this since, really, the weights after resampling should be 1/N. I'm not sure whether there is any use case where this distinction would actually matter, but it's worth thinking about. It's also worth noting that we require an additional logsumexp, which it would be nice to avoid.
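For concreteness, here is a minimal sketch of the bookkeeping this implies; all names here are hypothetical (this is not GeneralisedFilters' actual API), and `logG` stands for the log-potential of the new observation:

```julia
using Random
using LogExpFunctions: logsumexp

# Hypothetical helpers for this sketch only.
normalise(logw) = logw .- logsumexp(logw)
ess(logw) = exp(-logsumexp(2 .* normalise(logw)))  # ESS = 1 / sum(W²)

function multinomial_resample(rng, logw)
    cdf = cumsum(exp.(normalise(logw)))
    return [min(searchsortedfirst(cdf, rand(rng)), length(cdf)) for _ in eachindex(cdf)]
end

# One update with conditional resampling, resetting log-weights to zero
# (i.e. w = 1) after resampling so that the incremental likelihood is
# always the same ratio of two logsumexp terms, with no resampling flag.
function update!(rng, particles, logw, logG; ess_threshold = 0.5)
    if ess(logw) < ess_threshold * length(logw)
        idx = multinomial_resample(rng, logw)
        particles .= particles[idx]
        fill!(logw, 0.0)                   # reset to unity, not 1/N
    end
    log_prev = logsumexp(logw)             # the extra logsumexp in question
    logw .+= logG.(particles)
    return logsumexp(logw) - log_prev      # incremental marginal log-likelihood
end
```

Note that when resampling has just occurred, `log_prev = log(N)`, so the returned quantity reduces to the log of the mean weight, matching the first case of the formula above.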

charlesknipp commented 3 days ago

I cleaned up some of the code earlier today. I agree that the additional logsumexp is pretty annoying, but from what I can tell this is the only proper way to get consistent likelihoods.

I'm still a bit uncomfortable with this since really the weights after resampling should be 1/N

I mean, this is technically true since we softmax a set of equal log weights. Chopin even suggests setting logw = 1 in Algorithm 10.3, which yields identical results since we're operating in log space.
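For concreteness (a toy check, not package code), the choice of constant doesn't matter because both softmax and differences of logsumexp terms are invariant to a constant shift of the log-weights:

```julia
using LogExpFunctions: softmax, logsumexp

N = 4
softmax(zeros(N)) ≈ softmax(fill(-log(N), N))  # true: both give fill(1/N, N)

# Likewise, the incremental likelihood is a difference of logsumexp terms,
# so a constant shift in the log-weights cancels:
logw, c = randn(N), -log(N)
logsumexp(logw .+ c) - c ≈ logsumexp(logw)     # true
```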

Regardless, the newest commits in 7b1007145f398f6ffb5317d61601432f7ece7c86 are now passing almost all of the unit tests and the variations discussed on Slack (not counting the GPU filter).