Open sumitaghosh opened 1 month ago
Don't forget that Bayesian inference samples the posterior rather than optimizing the likelihood. If such a peak occupies only a tiny fraction of the prior volume, and other large regions of the prior have moderate likelihood, then the small peak may not be important to include, because it contributes almost nothing to the evidence integral.
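This point can be illustrated with a 1-D toy problem (entirely made up, not from this issue): a very high but very narrow likelihood peak versus a moderate, broad plateau, integrated against a uniform prior on [0, 1].

```python
import numpy as np

# Toy likelihood on a uniform prior over [0, 1]:
# a narrow, very high Gaussian peak plus a broad moderate plateau.
x = np.linspace(0.0, 1.0, 2_000_001)
dx = x[1] - x[0]

peak_height = 1e4   # very high likelihood...
peak_width = 1e-6   # ...in a very narrow region
narrow_peak = peak_height * np.exp(-0.5 * ((x - 0.5) / peak_width) ** 2)
broad_plateau = np.full_like(x, 1.0)  # moderate likelihood everywhere else

L = narrow_peak + broad_plateau

# Evidence Z = ∫ L(x) p(x) dx with p(x) = 1 on [0, 1] (simple Riemann sum)
Z_total = L.sum() * dx
Z_peak = narrow_peak.sum() * dx

# Despite being 10,000x higher, the narrow peak carries only a few
# percent of the total posterior mass.
print("fraction of mass in the peak:", Z_peak / Z_total)
```

If the sampler misses that peak entirely, the evidence and most posterior summaries barely change, which is why nested sampling is not obligated to chase it.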
What you can do:
Thank you!
I'm not sure if I understood the last bullet point you made, but I did change the prior slightly and was able to get a higher likelihood. The first and second bullets can probably be summarized in this plot:
I'm also not sure how the sampler decides where to linger. Here's the likelihood at every step of the chain (in sequence) from another run where I imposed more constraints: it stays smooth for a long stretch (why?), then suddenly starts exploring locally, but only for a short period. How does it choose where to stop and explore?
I have a log-likelihood that is linear in 39 parameters and is computed by building an 89x40x3 array and summing its elements. Back when the array was only 89x40, the best fit worked fine; now it converges on a wrong value. There is a certain combination of parameters (inside the prior space, not at its edge) that gives a log-likelihood of -38565.34122985974, but PyMultiNest never seems to find any samples with a log-likelihood above -40000. I have tried pymultinest.solve with n_live_points=2000 and sampling_efficiency=1.0, but that still never reaches a higher likelihood and still reports "Parameter 1 of mode 1 is converging towards the edge of the prior." What else can I try? I can't give a minimal working example because it only seems to break with the entire setup, so is there any other information that would be helpful here?
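One cheap sanity check before blaming the sampler: since PyMultiNest samples the unit cube and pushes it through the prior transform, it's worth confirming where the known high-likelihood point sits in cube coordinates, and how close it is to a boundary (which would be consistent with the "converging towards the edge" warning). A minimal sketch, assuming uniform priors; the bounds and the test point below are placeholders, not values from this issue:

```python
import numpy as np

n_params = 39
lo = np.full(n_params, -10.0)  # assumed uniform-prior lower bounds
hi = np.full(n_params, 10.0)   # assumed uniform-prior upper bounds

def prior_transform(cube):
    """Map the MultiNest unit cube to physical parameters (uniform priors)."""
    return lo + cube * (hi - lo)

# Stand-in for the parameter combination known to give the high likelihood.
theta_best = np.linspace(-9.9, 9.0, n_params)

# Invert the transform: where does theta_best live in cube coordinates?
cube_best = (theta_best - lo) / (hi - lo)
print("strictly inside prior:", np.all((cube_best > 0) & (cube_best < 1)))
print("distance to nearest edge (cube units):",
      np.minimum(cube_best, 1 - cube_best).min())
```

If any coordinate of `cube_best` is very close to 0 or 1, the peak is effectively pinned against the prior boundary, and widening the prior on that parameter (or re-parameterizing) may help more than raising n_live_points further.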