Closed: MrBudgens closed this issue 3 years ago
> Am I right in thinking that `dynesty.utils.merge_runs` can be used to perform Bayesian model averaging?
I had not considered this possibility. My first thought is I do not believe this will work: the procedure used to merge the runs together assumes that they trace the same underlying distribution (i.e. that tracking the relative log-likelihoods between the corresponding runs can be used to determine their combined positions with respect to the prior volume), so if this assumption is violated I don't know if the same results would hold. Have you run any simple tests with, e.g., mixing various Gaussian likelihoods that would at least make this seem plausible? I don't know if I've seen this result published anywhere, so if this is true in at least some ideal cases it'd be worth following up more rigorously.
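The Gaussian sanity check suggested above can be sketched without running dynesty at all: if model averaging by evidence-weighted resampling is sound, drawing from each toy posterior in proportion to its (assumed) evidence should reproduce the analytic mixture. Everything below is a made-up illustration; the means, widths, and log-evidences are invented toy values, not dynesty output:

```python
import numpy as np

rng = np.random.default_rng(0)

mus = np.array([-1.0, 2.0])        # posterior means of two toy "models"
sigmas = np.array([0.5, 1.0])      # posterior widths
logz = np.array([-3.0, -4.0])      # assumed log-evidences (toy values)

# Model weights proportional to exp(logZ) under a uniform model prior;
# subtract the max before exponentiating for numerical stability.
w = np.exp(logz - logz.max())
w /= w.sum()

# Pick a model per sample, then draw from that model's posterior.
n = 200_000
counts = rng.multinomial(n, w)
samples = np.concatenate([rng.normal(m, s, c)
                          for m, s, c in zip(mus, sigmas, counts)])

# If the averaging is behaving, the pooled mean should match the
# weighted average of the per-model means.
expected_mean = np.dot(w, mus)
```

With this many draws the pooled sample mean agrees with `expected_mean` to a few parts in a thousand, which is at least the kind of plausibility check that could precede a more rigorous comparison against merged dynesty runs.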
> Also, when I try to use `merge_runs` I can't use `.summary` to see the results (using `.quantile` still works):
Ahhh, this is because after `merge_runs` is applied I change all runs to be considered "dynamic" internally. I don't think I ever quite got around to getting the `results.summary()` call to function appropriately in that case, but it's a small enough feature request that I should just add that functionality anyway, for consistency. I've flagged it and added it to my to-do list. Thanks!
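In the meantime, a possible workaround is to print the same sort of fields by hand, skipping `nlive` (which merged runs don't carry). `merged_summary` below is a hypothetical helper, not part of dynesty, and it assumes the merged `Results` object still exposes the usual `niter`, `ncall`, `logz`, and `logzerr` fields:

```python
import numpy as np

def merged_summary(res):
    """Rough stand-in for Results.summary() on a merged (dynamic) run.

    Skips 'nlive', which merge_runs does not preserve; assumes the
    usual niter / ncall / logz / logzerr fields are present (an
    assumption, not guaranteed by the dynesty API).
    """
    return ("niter: {:d}\n"
            "ncall: {:d}\n"
            "logz: {:6.3f} +/- {:6.3f}"
            .format(res.niter, int(np.sum(res.ncall)),
                    res.logz[-1], res.logzerr[-1]))
```

Usage would then be `print(merged_summary(merged))` in place of `merged.summary()`.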
Based on exactly one attempt, the posteriors I merged produced a believable-looking result, but testing with Gaussians seems a better starting point for a real test - I'll investigate. I'm very much a beginner here, but as an alternative, it seems to me that samples generated from each model run using `resample_equal` could be combined to achieve my aim?
> samples generated from each model run using `resample_equal` could be combined to achieve my aim?
Believe so -- it'd be very straightforward to just reweight and sum the points there.
Great, I will try that. It also makes it easy to give more weight to one model if need be.
Hey @MrBudgens, I know this is many months late, but just in case: to do Bayesian model averaging, the only things you need are each model's evidence, the posterior parameter distributions, and a model prior. Assuming all models are equally probable a priori, the averaging is as easy as taking the weighted sum of each model's posterior parameter distribution, where the weights are the relative model probabilities computed from the Bayesian evidences.
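Spelling that recipe out (and the `resample_equal` route discussed earlier) with plain numpy: given each run's final log-evidence and a set of equal-weight posterior samples per model, a weighted resampling does the averaging. `bma_combine` is a hypothetical helper, not part of dynesty, and pulling the final evidence from `results.logz[-1]` is an assumption based on the usual shape of dynesty output:

```python
import numpy as np

def bma_combine(samples_list, logzs, model_priors=None, seed=None):
    """Bayesian model averaging by weighted resampling.

    samples_list : list of equal-weight posterior sample arrays, one
                   per model (e.g. from dynesty's resample_equal).
    logzs        : log-evidence of each model (e.g. results.logz[-1]).
    model_priors : prior model probabilities; uniform if None.

    Hypothetical helper -- not part of the dynesty API.
    """
    rng = np.random.default_rng(seed)
    logzs = np.asarray(logzs, dtype=float)
    if model_priors is None:
        model_priors = np.full(len(logzs), 1.0 / len(logzs))
    # p(M_k | D) is proportional to Z_k * p(M_k); stay in log space
    # and subtract the max so the evidences can't under/overflow.
    logw = logzs + np.log(model_priors)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Draw each output sample from model k with probability w[k].
    n_total = sum(len(s) for s in samples_list)
    counts = rng.multinomial(n_total, w)
    parts = [np.asarray(s)[rng.choice(len(s), size=c)]
             for s, c in zip(samples_list, counts)]
    return np.concatenate(parts), w
```

For the unequal model weights mentioned earlier in the thread, one could pass e.g. `model_priors=[0.8, 0.2]` instead of leaving the prior uniform.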
Am I right in thinking that `dynesty.utils.merge_runs` can be used to perform Bayesian model averaging? Specifically, if I use two or more different models to infer the same parameters from the same data using the same priors, can their dynamic sampling runs be trivially merged to marginalise over the models? If so, may I suggest a word or two be added to the documentation to make this explicit, with advice on how to assign non-equal weights to models? If not, what have I missed?
Also, when I try to use `merge_runs` I can't use `.summary` to see the results (using `.quantile` still works):

    Traceback (most recent call last):
      File "...dynesty\results.py", line 181, in __getattr__
        return self[name]
    KeyError: 'nlive'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
        merged.summary()
      File "...\dynesty\results.py", line 205, in summary
        .format(self.nlive, self.niter, sum(self.ncall),
      File "...\dynesty\results.py", line 183, in __getattr__
        raise AttributeError(name)
    AttributeError: nlive