joshspeagle / dynesty

Dynamic Nested Sampling package for computing Bayesian posteriors and evidences
https://dynesty.readthedocs.io/
MIT License

Best way of using MAP/ML point when fitting #437

Open segasai opened 1 year ago

segasai commented 1 year ago

This is an open-ended question.

Often there are situations where the maximum-likelihood (ML) or MAP point is already known, but one is still interested in sampling the posterior around it. With samplers like emcee this is trivial: you initialize the walkers in a small ball around that point and start sampling from there. The question is whether there is a way of doing something similar with dynesty that doesn't necessarily involve running a full nested sampling. Obviously such runs won't be very useful for evidence calculations.
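For concreteness, the emcee-style initialization described above can be sketched like this (pure NumPy; the 3-parameter MAP point and the ball scale are made-up values for illustration):

```python
import numpy as np

def init_walkers_around_map(map_point, nwalkers, scale=1e-3, rng=None):
    """Scatter walkers in a small Gaussian ball around a known MAP/ML point."""
    rng = np.random.default_rng(rng)
    map_point = np.asarray(map_point, dtype=float)
    # each walker = MAP point + small Gaussian perturbation
    return map_point + scale * rng.standard_normal((nwalkers, map_point.size))

# hypothetical MAP estimate for a 3-parameter model
p0 = init_walkers_around_map([1.0, -2.0, 0.5], nwalkers=32, rng=0)
# p0 would then be passed to emcee.EnsembleSampler.run_mcmc as the initial state
```

Nested sampling has no analogous "initial state": dynesty draws its live points from the prior, which is why the question is non-trivial.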

Possible ideas.

lalit-pathak commented 1 year ago

So if we define a ball/ellipsoid/cube in the posterior space around the ML point, how exactly do we set the boundaries? With boundaries that are too small, we could get railing in the posteriors, right? In that case, we would either need to repeat the exercise for various boundaries or build an effective Fisher matrix covering some given volume of the posterior.

segasai commented 1 year ago

My thinking was that we could define an ellipsoid around the MAP value and then sample from a prior that places, say, 99% of its mass inside the ellipsoid and 1% outside. That way the majority of the sampling will be focused on the ellipsoid, but if there is substantial posterior volume outside, it will likely still be captured. This is a vague idea, though; I am not sure it's implementable.

Specifically, if $x$ is a parameter within the unit cube, the posterior is just $\frac{1}{Z} L(x)$. If we now adopt a prior $\pi(x)$ satisfying the mass requirement above, we would instead sample a posterior of the form $\pi(x)\left(\frac{1}{Z}\frac{L(x)}{\pi(x)}\right)$, i.e. the likelihood is divided by $\pi(x)$ to compensate.
This is technically the same posterior as before, but the sampling will mostly avoid low-$L$ regions.

The problem is I'm not sure there is a parameter transformation implementing this kind of prior.
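In one dimension at least, such a prior transform does exist: numerically invert the CDF of a mixture that puts most of its mass in a bump at the MAP value and the rest in a uniform background, then subtract its log-density from the log-likelihood as in the expressions above. This is only a sketch, not an existing dynesty API; the MAP location `mu=0.4`, width `sigma=0.05`, and weight `w=0.99` are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def mixture_cdf(x, mu, sigma, w=0.99):
    # w * (Gaussian truncated to [0, 1]) CDF + (1 - w) * uniform CDF on [0, 1]
    z = norm.cdf(1.0, mu, sigma) - norm.cdf(0.0, mu, sigma)
    g = (norm.cdf(x, mu, sigma) - norm.cdf(0.0, mu, sigma)) / z
    return w * g + (1.0 - w) * x

def prior_transform_1d(u, mu, sigma, w=0.99):
    # invert the mixture CDF: find x in [0, 1] with F(x) = u
    return brentq(lambda x: mixture_cdf(x, mu, sigma, w) - u, 0.0, 1.0)

def log_prior_density(x, mu, sigma, w=0.99):
    # density matching mixture_cdf; needed to divide pi(x) back out of L(x)
    z = norm.cdf(1.0, mu, sigma) - norm.cdf(0.0, mu, sigma)
    return np.log(w * norm.pdf(x, mu, sigma) / z + (1.0 - w))

# The sampler would then be given log L(x) - log_prior_density(x), so the
# implied posterior is unchanged while sampling concentrates near the MAP.
```

Extending this beyond a product of independent 1-D mixtures to a genuine ellipsoid is the part that is less obvious.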

mvsoom commented 7 months ago

One way of using the MAP approximation is to fit a multivariate normal (MVN) to it (a Laplace approximation) and use that as a proposal distribution, to be incorporated into the prior, much like the expressions in your previous reply. Here is a fine short paper exploring this idea: https://arxiv.org/pdf/2212.01760.pdf.
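For reference, the Laplace approximation itself is straightforward to construct: minimize the negative log-posterior and invert its Hessian at the optimum to get the MVN covariance. A minimal sketch, using a toy quadratic log-posterior as a stand-in (the target and step size are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_post(x):
    # toy 2-D correlated Gaussian log-posterior (up to an additive constant)
    prec = np.array([[2.0, 0.6], [0.6, 1.0]])
    return 0.5 * x @ prec @ x

# step 1: find the MAP point
res = minimize(neg_log_post, x0=np.array([1.0, -1.0]))
map_point = res.x

def numerical_hessian(f, x, eps=1e-5):
    # central finite differences for the Hessian of f at x
    n = x.size
    h = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = eps
            e_j = np.zeros(n); e_j[j] = eps
            h[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps**2)
    return h

# step 2: MVN covariance = inverse Hessian of the negative log-posterior
cov = np.linalg.inv(numerical_hessian(neg_log_post, map_point))
```

The resulting `(map_point, cov)` pair defines the MVN proposal that the paper folds into the prior, in the spirit of the reweighting discussed earlier in the thread.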

lalit-pathak commented 7 months ago

@mvsoom Thanks for posting this nice paper here. I will look into it.