Open segasai opened 2 years ago
Thinking more about the design of this, the main complexity is the parallel sampling. Ideally one wants a sampler object like RWalk() that keeps track of tuning and does the sampling, but then we'd have to send this big object to the pool workers when sampling (potentially expensive), and an even more difficult problem is how to combine the results afterwards, i.e. make sure we update the scale factors.
I think I have a sketch of a possible new interface.
Again, the point is to have samplers as a distinct class that is quite independent from the rest and can be easily replaced, cf. #366
The idea is to have two steps in the sampler: one runs on the main thread and prepares everything, and the other does the actual sampling.
The class itself will do the internal book-keeping and tuning (i.e. what's currently done by update_rwalk etc.). The actual sampling will be a staticmethod, to avoid sending the whole object over to the workers.
```python
class UniformSampler(InnerSampler):
    # unitcheck, sample_unif and SamplerArgument come from
    # dynesty's internals
    def __init__(self):
        pass

    def prepare_sample(self, prior_transform, log_like, live_logl, live_u,
                       live_v, nsamp, bound, rstate, loglstar,
                       nonbounded_mask):
        res = []
        for _ in range(nsamp):
            while True:
                # need a different approach for rad/supfriends bounds
                u, q = bound.sample(rstate=rstate, return_q=True)
                # Check if our sample is within the unit cube.
                if unitcheck(u, nonbounded_mask):
                    # Accept the point with probability 1/q to account
                    # for overlapping balls.
                    if q == 1 or rstate.random() < 1.0 / q:
                        break  # if successful, we're done!
            res.append(u)
        return (prior_transform, log_like), res

    @staticmethod
    def sample(shared_opt, per_samp_opt):
        prior_transform, log_like = shared_opt
        # (the remaining SamplerArgument fields are elided in this sketch)
        return sample_unif(SamplerArgument(u=per_samp_opt))

    def tune(self):
        # nothing to tune for uniform sampling
        pass
```
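To see how a tuned proposal fits the same prepare/sample split, here is a rough sketch of an RWalk-style inner sampler; everything in it (the class name, the scale handling, the accept/reject blob, the crude tuning rule) is illustrative rather than existing dynesty code:

```python
import numpy as np


class RWalkSampler(InnerSampler):
    """Sketch: the tuned scale lives in the object on the main thread;
    only small tuples are shipped to the pool workers."""

    def __init__(self, walks=25):
        self.walks = walks
        self.scale = 1.0  # proposal scale, retuned between batches

    def prepare_sample(self, prior_transform, log_like, live_logl, live_u,
                       live_v, nsamp, bound, rstate, loglstar,
                       nonbounded_mask):
        # starting points must strictly satisfy logl > loglstar
        good = np.nonzero(live_logl > loglstar)[0]
        idx = rstate.choice(good, size=nsamp)
        shared = (prior_transform, log_like, loglstar, self.scale,
                  self.walks)
        # a starting point and an independent seed per proposal
        per_samp = [(live_u[i], rstate.integers(2**32)) for i in idx]
        return shared, per_samp

    @staticmethod
    def sample(shared_opt, per_samp_opt):
        # runs inside a worker; the sampler object is never pickled
        prior_transform, log_like, loglstar, scale, walks = shared_opt
        u, seed = per_samp_opt
        rstate = np.random.default_rng(seed)
        naccept = 0
        v, logl = None, None  # a real implementation needs a fallback
        for _ in range(walks):
            u_prop = u + scale * rstate.normal(size=u.size)
            if not np.all((u_prop > 0) & (u_prop < 1)):
                continue
            v_prop = prior_transform(u_prop)
            logl_prop = log_like(v_prop)
            if logl_prop > loglstar:
                u, v, logl = u_prop, v_prop, logl_prop
                naccept += 1
        # return the point plus the statistics needed for tuning
        return u, v, logl, {'accept': naccept,
                            'reject': walks - naccept}

    def tune(self, blobs):
        # combine the workers' accept/reject counts on the main thread
        # and update the scale (what update_rwalk does now); this rule
        # is deliberately crude, targeting ~50% acceptance
        naccept = sum(b['accept'] for b in blobs)
        ntotal = sum(b['accept'] + b['reject'] for b in blobs)
        self.scale *= np.exp(naccept / ntotal - 0.5)
```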
The actual `_fill_queue` will then be something like this:
```python
def _fill_queue(self, loglstar):
    """Sequentially add new live point proposals to the queue."""
    # All the samplers should have a starting point satisfying a
    # strict logl > loglstar criterion; the slice sampler will just
    # fail if that's not the case, therefore we provide those
    # subsets of points to choose from.
    one_opt, per_samp_opts = self.inner_sampler.prepare_sample(
        self.prior_transform, self.loglikelihood, self.live_logl,
        self.live_u, self.live_v, self.nqueue, self.bound,
        self.rstate, loglstar, self.nonbounded)
    if self.use_pool_sample:
        mapper = self.M
    else:
        mapper = map
    # the shared options go to every worker unchanged; each worker
    # additionally gets its own per-sample options
    self.queue = list(
        mapper(self.inner_sampler.sample, [one_opt] * self.nqueue,
               per_samp_opts))
```
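On the consumer side, the blobs coming back through the queue would then be fed into tune() on the main thread, which is how the scale factors get combined after parallel sampling. A rough sketch, loosely modeled on the existing `_get_point_value` (the 4-tuple queue entries match the sketches above, not current dynesty):

```python
def _get_point_value(self, loglstar):
    """Grab the first live point proposal in the queue."""
    if self.nqueue <= 0:
        self._fill_queue(loglstar)
        self.nqueue = len(self.queue)
        # all tuning happens here on the main thread, from the
        # accept/reject blobs returned by the workers
        self.inner_sampler.tune([blob for _, _, _, blob in self.queue])
    u, v, logl, blob = self.queue.pop(0)
    self.nqueue -= 1
    return u, v, logl
```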
If I understand correctly, that will make most of the nestedsamplers obsolete, as those classes basically implement uniform sampling, so the code will just become part of the uniform sampler, with small differences for different bound types.
The idea is then to provide the samplers to dynesty directly, i.e.

```python
dynesty.NestedSampler(sampler=dynesty.samplers.Uniform(expand=1.2))
```
In the new scheme there is no need to split into propose_point/evolve, as that separation is not necessary for some ensemble sampling schemes. Also, in principle the new scheme makes it easy to implement a parallel version of uniform sampling: currently propose_point always runs in the main thread, which could be a limiting factor, but here it could be done inside sample(), at the cost of sending pickled bounds to the workers.
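A sketch of that parallel uniform variant, assuming `unitcheck` from dynesty.utils as in the class above (the name `ParallelUniformSampler` and the per-sample seeds are invented for illustration):

```python
import numpy as np

from dynesty.utils import unitcheck


class ParallelUniformSampler(InnerSampler):
    """Sketch: draw from the bound inside the workers, so the bound is
    pickled once per batch instead of sampling on the main thread."""

    def prepare_sample(self, prior_transform, log_like, live_logl, live_u,
                       live_v, nsamp, bound, rstate, loglstar,
                       nonbounded_mask):
        shared = (prior_transform, log_like, bound, nonbounded_mask)
        # only an independent seed needs to go with each sample
        per_samp = [(rstate.integers(2**32),) for _ in range(nsamp)]
        return shared, per_samp

    @staticmethod
    def sample(shared_opt, per_samp_opt):
        prior_transform, log_like, bound, nonbounded_mask = shared_opt
        (seed,) = per_samp_opt
        rstate = np.random.default_rng(seed)
        while True:
            # the expensive bound sampling now runs in the worker
            u, q = bound.sample(rstate=rstate, return_q=True)
            if unitcheck(u, nonbounded_mask) and (
                    q == 1 or rstate.random() < 1.0 / q):
                break
        v = prior_transform(u)
        return u, v, log_like(v)
```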
The downside of the proposal is the change in interface for people who used the update_user etc. functionality, but I think the upsides are worth it.
Yea, I agree that the upsides of a much better internal structure outweigh the change in interface, and it would warrant a jump in version number if things work out.
Thanks. I think the stumbling block on this is that I couldn't find a way to make the changes gradually while keeping the tests working, so all the changes have to be done in one go, which is a bit harder to pull off. I'll see if I have time to do that in Feb. But I'll aim to release the version with plateaus (2.1.0) before that, to make sure it's being tested.
While considering PRs like #365 and #366, I have come to think that the best architecture is to have a class interface that is responsible for providing new samples, i.e. RSlice, RWalk and so forth. The goal of this would be to simplify adding new samplers, like ensemble sampling. In theory it would also enable passing samplers directly to NestedSampler() at construction, i.e.

```python
NestedSampler(sampler=RSlice(slices=4))
```
This encapsulation would also potentially move the tuning inside those classes. Currently the tuning of proposals is done at a high level, e.g. here: https://github.com/joshspeagle/dynesty/blob/2236388ac60e9cf9a85dc57636c1b5a9c322c92a/py/dynesty/nestedsamplers.py#L140
Another possible benefit is to enable combining samplers.
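A purely hypothetical illustration of what that could look like (the `Cycle` wrapper is invented here, not an existing or proposed API):

```python
# hypothetical: alternate between random-walk and random-slice proposals
NestedSampler(sampler=Cycle([RWalk(walks=25), RSlice(slices=4)]))
```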
The key question is what exactly the interface of those samplers should be. For example, it is not clear to me that it is useful to keep the separation of 'propose live-point' vs 'evolve live-point' methods, or whether each sampler should just do the job of both.
Also, it is not quite clear how much state should be preserved in those samplers, whether that can be a problem for multi-core runs if we have to send/pickle those samplers, and what should be done with the sampler states on different CPUs.
ATM this is not a very concrete proposal, more of an RFC. In the simplest form, the proposal would just bundle sample_rwalk(), sample_rslice() etc. into sample() methods of a class, but that's the easy part.
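For concreteness, that simplest form could be a thin wrapper over the existing module-level functions in dynesty.sampling (a sketch; wiring the stored options into the argument structure is elided):

```python
from dynesty import sampling


class RWalk:
    """Thin wrapper bundling the existing sample_rwalk() function."""

    def __init__(self, walks=25):
        self.walks = walks

    def sample(self, args):
        # args is the same argument structure sample_rwalk() takes now
        return sampling.sample_rwalk(args)


class RSlice:
    """Thin wrapper bundling the existing sample_rslice() function."""

    def __init__(self, slices=5):
        self.slices = slices

    def sample(self, args):
        return sampling.sample_rslice(args)
```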