Sorry about the difficulties with the prior transform — I haven’t been able to update the documentation in quite a while and it’s not as helpful as it should be. It is straightforward to set up different priors for each parameter. I’ve linked to an example I’ve been using for a different project here.
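In the meantime, here's a minimal sketch of the kind of prior_transform that does this, assuming five independent normal priors (the means and standard deviations below are just placeholders):

```python
import numpy as np
from scipy import stats

# Placeholder means and standard deviations for the five normal priors.
prior_means = np.array([0.0, 1.0, -1.0, 0.5, 2.0])
prior_sds = np.array([1.0, 2.0, 0.5, 1.0, 3.0])

def prior_transform(u):
    """Map a sample u from the 5-D unit cube to the parameter space by
    pushing each coordinate through its prior's inverse CDF."""
    return stats.norm.ppf(u, loc=prior_means, scale=prior_sds)
```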
As for the iterative updates, it seems like something that would be much more suited to Sequential Monte Carlo-type methods rather than nested sampling. However, if you plan to run dynesty to completion during each iteration, then I could show you how to make that work.
Thank you for the link about setting priors; I will have a look at it. After I posted this issue I saw this example in the demos folder and was trying to replicate the parameter passing in those functions in a similar way.
I do plan to run the dynesty sampler during each iteration, so it would be really helpful if you could provide an example or a link that explains how to do that.
Also, since my model is iterative, i.e., the posterior for the current iteration becomes the prior for the next, is there a way to define a distribution from the sampler's results, given that they are a set of discrete points?
I do want to point out that this problem is very ill-matched for nested sampling -- since nested sampling only figures out what the posterior is by integrating over it (sampling a number of intermediate distributions) rather than sampling from it directly, you are much more limited in the context of iterative models compared to an SMC sampler (which naturally has this style of analysis built in).
That said, assuming you're interested in running dynesty to convergence at each "iteration" and want prior_transform to somehow sample from the posterior estimated at the last iteration, you're really looking at using the dynesty results to construct a parametric estimate of the posterior, e.g., a mixture of Gaussians, which you can then plug back into the code for the next iteration. There are a number of packages that let you fit models like this, and if they don't allow for weights you can always use the built-in resample_equal function to grab a set of equally-weighted samples.
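As a rough sketch of that workflow (assuming results is the results object from the previous dynesty run; the mixture fit and the per-marginal quantile mapping below are just one possible choice, and the latter ignores correlations between parameters):

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from dynesty.utils import resample_equal

# Convert the nested-sampling importance weights into equal-weight samples.
weights = np.exp(results.logwt - results.logz[-1])
samples = resample_equal(results.samples, weights)

# Option 1: fit a parametric estimate (e.g., a mixture of Gaussians) that
# can be sampled/evaluated when building the next iteration's prior.
gmm = GaussianMixture(n_components=3).fit(samples)

# Option 2: a crude prior_transform for the next run that maps each
# unit-cube coordinate through the corresponding marginal's empirical
# inverse CDF. Note this treats the parameters as independent.
def next_prior_transform(u):
    return np.array([np.quantile(samples[:, i], u[i])
                     for i in range(samples.shape[1])])
```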
If you only want to consider the previous set of samples that dynesty has collected, then there's no need to repeat sampling at all -- just use importance reweighting to update the distribution.
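A sketch of what that reweighting could look like, assuming samples and logwt come from the previous run and new_loglike(theta) is a hypothetical function returning the log-likelihood contribution of the newly arrived data point:

```python
import numpy as np

# Fold the new datum's log-likelihood into the old importance weights
# instead of re-running nested sampling from scratch.
logwt_new = logwt + np.array([new_loglike(theta) for theta in samples])

# Normalize, subtracting the max first for numerical stability.
weights = np.exp(logwt_new - logwt_new.max())
weights /= weights.sum()
```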
Closing this for now since it appears to have been resolved.
Hi, I am new to the concept of nested sampling and found your repository. I am facing a few issues that I need help with, and it would be really helpful if you could provide some insight.
I have a model with 5 parameters that I want to implement using the Nested sampler. Those parameters are used to model 2 covariates, and I want to define a different prior for each parameter. Additionally, I have a log-likelihood function which uses those covariates to fit the data. I had a look at the documentation but could not figure out a way to define separate priors for each parameter. Is there a way to do that in the prior_transform function? Also, since my model is iterative, i.e., the posterior for the current iteration becomes the prior for the next, is there a way to define a distribution from the sampler's results, given that they are a set of discrete points?
To make it clearer, A and B are the 2 covariates, which enter the model as follows:

mu = log(param1 + param2*A + param3*B)
s = 1.0 / (param4*A + param5*B)
x ~ Logistic(mu, s)

The data x arrive one point at a time, as in online learning. The params have univariate normal priors with different means and standard deviations.
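In code, the model described above would look something like this (A, B, and x are assumed to be the observed covariates and data, already in scope):

```python
import numpy as np
from scipy import stats

def loglike(params):
    """Logistic log-likelihood with mu and s built from covariates A and B."""
    p1, p2, p3, p4, p5 = params
    mu = np.log(p1 + p2 * A + p3 * B)
    s = 1.0 / (p4 * A + p5 * B)
    return np.sum(stats.logistic.logpdf(x, loc=mu, scale=s))
```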