Closed: JohannesBuchner closed this issue 5 years ago
> get NaNs and Infs in the status
This isn't a bug, but a result of (1) the approximation I'm using for the nested sampling error breaking down and (2) just setting up the log-likelihood bounds to be consistent with the DynamicNestedSampler. I haven't managed to add this to the documentation yet. I've been thinking about implementing a slightly more robust algorithm for the evidence error to prevent this, but that's still TBD.
> but that didn't terminate
I just want to check if you're using the pip install version or the version on GitHub. It sounds like you might be encountering some problems with the ellipsoid decomposition and/or bootstrapping.
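For what it's worth, a minimal way to check which version is installed (assuming a standard install; the GitHub URL below is the main dynesty repo):

```python
import dynesty

# Print the installed version to see whether it matches the pip release
# or a checkout of the GitHub repo, e.g. one installed via
#   pip install git+https://github.com/joshspeagle/dynesty.git
print(dynesty.__version__)
```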
I use the version from GitHub.
Okay, so I've looked into this and can replicate the multimodal solution without a problem. Part of the reason for the convergence issues is that a bunch of the solutions actually sit right at the edge of the prior, which really messes with the bounding ellipsoids I'm using since all the points pile up next to the boundary. This drastically reduces the sampling efficiency, since a large fraction of proposed points end up exceeding the boundary of the unit cube and get rejected. This is compounded by the fact that the likelihood in several other parameters is extremely broad (spanning a good chunk of the unit cube), which also gives ellipsoids that extend past the unit cube bounds. So the ellipsoids just don't end up being very good.
Another issue is that during intermediate stages of sampling the distribution takes a while to split into the final modes. This causes some trouble for my default choice of ellipsoid decompositions (which is much more conservative than MultiNest's), which leads to low efficiencies.
I fixed both issues by (1) shifting the prior so the modes no longer sit at the boundary and (2) using slice sampling, which appears to give the correct results. A sketch of that setup is below.
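A minimal sketch of this kind of setup. The original Fourier-analysis likelihood isn't reproduced in this thread, so a hypothetical bimodal Gaussian stands in for it, and the prior range ([-3, 3]), nlive, and dlogz values are illustrative assumptions:

```python
import numpy as np
from dynesty import NestedSampler

# Hypothetical stand-in for the Fourier-analysis likelihood: a bimodal
# Gaussian in the first parameter, flat in the second.
def loglike(x):
    return np.logaddexp(-0.5 * ((x[0] - 1.0) / 0.1) ** 2,
                        -0.5 * ((x[0] + 1.0) / 0.1) ** 2)

# (1) Shift/widen the prior so the modes sit well inside the unit cube
# rather than piling up against its edges.
def prior_transform(u):
    return 6.0 * u - 3.0  # maps [0, 1] -> [-3, 3] in each dimension

ndim = 2

# (2) Use slice sampling instead of uniform sampling within the
# bounding ellipsoids.
sampler = NestedSampler(loglike, prior_transform, ndim,
                        nlive=500, bound='multi', sample='slice')
sampler.run_nested(dlogz=0.1)
res = sampler.results
res.summary()  # prints niter, ncall, logz, etc.
```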
This was an interesting toy problem, so I'll plan to add a modified version of it to the set of examples. Thanks!
I am having some issues with the following toy problem, basically a Bayesian Fourier analysis.
Firstly, I get NaNs and Infs in the status output, which I am sure are not coming from the likelihood:
iter: 22178 | bound: 507 | nc: 545 | ncall: 938419 | eff(%): 2.363 | loglstar: -inf < -55.450 < inf | logz: -76.877 +/- nan | dlogz: 5.547 > 1.009
Secondly, PyMultiNest gives two peaks, as expected, while dynesty only finds one. I suspect this is because MultiNest implements multimodal nested sampling, i.e., it doubles the number of live points when modes split. So I increased the number of live points to 1000, but that run didn't terminate. This makes me wonder if it would make sense to add a dynamic nested sampling policy that adds points near where a mode is lost.
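For reference, dynesty's DynamicNestedSampler already allows something in this spirit by allocating new batches of live points where they are most useful. A hedged sketch, reusing the hypothetical loglike/prior_transform from the sketch above (the pfrac weighting and batch sizes are illustrative assumptions, not a recommendation from this thread):

```python
from dynesty import DynamicNestedSampler

# Dynamic sampler with the same bounding/sampling choices as above.
dsampler = DynamicNestedSampler(loglike, prior_transform, ndim,
                                bound='multi', sample='slice')

# Allocate new batches of live points weighted entirely toward the
# posterior (pfrac=1.0), rather than using a fixed nlive throughout.
dsampler.run_nested(nlive_init=500, nlive_batch=100,
                    wt_kwargs={'pfrac': 1.0})
dres = dsampler.results
dres.summary()
```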