The Cauchy prior on the log-normal variance (likelihood) might not be the best choice here: its heavy tails can create poor posterior geometry with no gain on the modelling side. Would a simpler standard half-normal, or even a Gamma(1, 1), prior work OK here?
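To make the geometry concern concrete, here is a quick stdlib comparison of the tail mass of the candidate priors (density formulas only; the distribution names match the suggestion above, the evaluation points are just for illustration):

```python
import math

def half_cauchy_pdf(x, scale=1.0):
    """Density of a Half-Cauchy(scale) at x >= 0."""
    return 2.0 / (math.pi * scale * (1.0 + (x / scale) ** 2))

def half_normal_pdf(x, scale=1.0):
    """Density of a Half-Normal(scale) at x >= 0."""
    return math.sqrt(2.0 / math.pi) / scale * math.exp(-0.5 * (x / scale) ** 2)

def gamma11_pdf(x):
    """Density of a Gamma(1, 1), i.e. Exponential(1), at x >= 0."""
    return math.exp(-x)

# Far in the tail the Half-Cauchy still carries appreciable mass,
# which is what lets the sampler wander into badly-scaled regions.
for x in (1.0, 5.0, 10.0):
    print(x, half_cauchy_pdf(x), half_normal_pdf(x), gamma11_pdf(x))
```

At x = 10 the half-Cauchy density is many orders of magnitude above the half-normal one, which is the tail behaviour driving the geometry concern.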
The PyMC3 HMC sampler is struggling to lock onto and maintain its target acceptance probability. We need to check why that is.
Would it be possible to mark the generating ("true") parameter values in the posterior plots so we have a way to gauge parameter recovery?
It would be nice to implement predictions early on, so we can compare the posterior predictive distribution to the observed data.
We can explore other priors for the log-normal variances, but I think we can do that in another PR.
The issue with the acceptance probability may be resolved by increasing the number of burn-in samples; I forced down the number of samples to get some quick-and-dirty first estimates.
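As a sketch of what I mean, assuming the PyMC3 `pm.sample()` keyword names (the numbers below are placeholders, not the values used in this PR):

```python
# Hypothetical settings for a longer adaptation phase; the keyword
# names follow PyMC3's pm.sample(), the values are illustrative.
sampler_kwargs = dict(
    draws=2000,         # keep draws modest for a quick first pass
    tune=2000,          # longer burn-in / step-size adaptation window
    target_accept=0.9,  # raise above the 0.8 default if divergences persist
)
# trace = pm.sample(**sampler_kwargs)  # run inside the model context
```

Raising `tune` gives the step-size adaptation more time to settle, which is usually the first thing to try when the acceptance rate won't stabilise.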
Yes, it is possible to improve the posterior plots; I'll do that next.
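For marking the generating values, something like ArviZ's `ref_val` argument should do it; `true_params` here is a hypothetical dict of the simulation parameters, not the actual values used:

```python
# Hypothetical generating ("true") values from the simulation
# that produced the data.
true_params = {"beta": 0.3, "gamma": 0.1}

# ArviZ can overlay reference lines on the marginals via ref_val:
# import arviz as az
# az.plot_posterior(trace, var_names=list(true_params),
#                   ref_val=list(true_params.values()))
```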
Agreed. I'd also like to always compare the posterior predictive curves with the data; it's not hard to do.
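One cheap version of that comparison, sketched with plain Python on made-up draws (in the PR the draws would come from `pm.sample_posterior_predictive`):

```python
def interval_coverage(observed, predictive_draws, lo=0.05, hi=0.95):
    """Fraction of observations falling inside the pointwise
    (lo, hi) quantile band of the posterior predictive draws.

    predictive_draws: list of draws, each a list with one value
    per observation.
    """
    covered = 0
    for i, obs in enumerate(observed):
        samples = sorted(draw[i] for draw in predictive_draws)
        lo_q = samples[int(lo * (len(samples) - 1))]
        hi_q = samples[int(hi * (len(samples) - 1))]
        if lo_q <= obs <= hi_q:
            covered += 1
    return covered / len(observed)

# Toy check: predictive draws that bracket the data give full coverage.
obs = [1.0, 2.0, 3.0]
draws = [[o + d for o in obs] for d in (-0.5, -0.25, 0.0, 0.25, 0.5)]
print(interval_coverage(obs, draws))  # → 1.0
```

A coverage far below the nominal 90% would flag a misspecified likelihood before we even look at the curves.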
Added two new inference runs: one with the full SIR model and another with the 1D model.
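For reference, a minimal stdlib sketch of the SIR forward model those runs condition on (forward-Euler steps; β, γ, and the initial state below are illustrative values, not the ones used in the runs):

```python
def simulate_sir(beta, gamma, s0, i0, dt=0.1, steps=1000):
    """Integrate the classic SIR ODEs with forward-Euler steps.

    Returns the (S, I, R) trajectory as a list of tuples; the
    population is normalised so S + I + R = 1.
    """
    s, i, r = s0, i0, 1.0 - s0 - i0
    traj = [(s, i, r)]
    for _ in range(steps):
        new_inf = beta * s * i * dt   # S -> I flow
        new_rec = gamma * i * dt      # I -> R flow
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        traj.append((s, i, r))
    return traj

# Illustrative parameters only (R0 = beta / gamma = 3 here).
traj = simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01)
peak_i = max(i for _, i, _ in traj)
print(peak_i)
```

The Euler update conserves S + I + R exactly (up to float error), which is a quick sanity check on any forward simulation used in the inference runs.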