Closed segasai closed 3 years ago
This is a very detailed set of results -- thanks for running these tests and doing such a careful job tracking things down.
I think this line is counter-productive...So I think we should get rid of that exception. If anything, we should put a warning or exception if there are too many expansions.
Agreed. This has now been resolved by using the loglstar threshold, which guarantees success. Including a warning to flag possible issues is a good addition.
Possible solutions:
- Limit the step size of the slice sampler to be < sqrt(Ndim)/2
Yes, setting a sensible upper limit makes a lot of sense. I support the larger value of sqrt(Ndim)/2 based on the length of the diagonal of the Ndim-hypercube.
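For concreteness, the cap could look like the following sketch (`capped_axlen` is an illustrative name, not dynesty code; the bound sqrt(Ndim)/2 is half the unit-cube diagonal):

```python
import numpy as np

def capped_axlen(axlen, ndim):
    """Illustrative cap (not dynesty's implementation): never use a slice
    step longer than sqrt(ndim) / 2, half the diagonal of the unit
    ndim-cube, since all live points live inside that cube."""
    return min(axlen, np.sqrt(ndim) / 2.0)
```

So a pathological axlen of ~700 in 4 dimensions would be truncated to sqrt(4)/2 = 1.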
- Changing the bounding_ellipsoids code. The current code does not give the smallest-volume ellipsoid.
Yes, that seems sensible. A slower but more robust method is a reasonable thing to try, especially as a fallback option when fmax is large.
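One standard "slower but more robust" candidate is Khachiyan's algorithm for the minimum-volume enclosing ellipsoid. A minimal sketch (the `mvee` name and tolerance are illustrative, not dynesty API):

```python
import numpy as np

def mvee(points, tol=1e-4):
    """Khachiyan's algorithm for the minimum-volume enclosing ellipsoid.

    Returns (c, A) describing the ellipsoid {x : (x - c)^T A (x - c) <= 1}.
    Iterative and slower than a covariance-based fit, but it does not
    rely on an fmax-style expansion step.
    """
    npts, ndim = points.shape
    # Lift points to homogeneous coordinates: Q is (ndim+1, npts).
    Q = np.vstack([points.T, np.ones(npts)])
    u = np.full(npts, 1.0 / npts)  # weights over the points
    err = tol + 1.0
    while err > tol:
        X = Q @ np.diag(u) @ Q.T                      # (ndim+1, ndim+1)
        M = np.einsum('ij,ji->i', Q.T @ np.linalg.inv(X), Q)
        j = np.argmax(M)                              # currently worst-covered point
        step = (M[j] - ndim - 1.0) / ((ndim + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        err = np.linalg.norm(new_u - u)
        u = new_u
    c = points.T @ u
    A = np.linalg.inv(points.T @ np.diag(u) @ points - np.outer(c, c)) / ndim
    return c, A
```

The result is a (1 + eps)-approximation controlled by `tol`, so it is a natural fallback to invoke only when the fast covariance fit reports a large fmax.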
the root cause of the problem is the volume scaling in bounding_ellipsoid() ... this basically blows the ellipsoid to be 700 times the cube size.
The best fix might be to constrain the maximum size of a given axlen to be on the order of sqrt(Ndim), thereby limiting the amount to which an ellipsoid can blow up this way.
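One way to realize that constraint is to clip the semi-axis lengths obtained from the covariance eigendecomposition; a sketch (function name and exact cap are illustrative, not dynesty's implementation):

```python
import numpy as np

def clip_ellipsoid_axes(cov, max_axlen=None):
    """Illustrative sketch (not dynesty code): clip the semi-axis
    lengths of the ellipsoid described by `cov` to at most sqrt(ndim),
    the length of the unit-cube diagonal, so no single axis can blow
    up to ~700."""
    ndim = cov.shape[0]
    if max_axlen is None:
        max_axlen = np.sqrt(ndim)
    evals, evecs = np.linalg.eigh(cov)
    axlens = np.sqrt(np.clip(evals, 0.0, None))   # semi-axis lengths
    axlens = np.minimum(axlens, max_axlen)        # apply the cap
    return (evecs * axlens ** 2) @ evecs.T        # rebuild the covariance
```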
Should be resolved as of #269.
I don't think this is fully closed, and some of the ideas mentioned here still need to be implemented.
Ah, no, you're right. I was overeager. The most recent PR did not include an upper bound on the slice proposals or the ellipsoid volumes. Reopening this.
PR #271 now includes additional checks on the slice proposals. It doesn't quite deal with the ellipsoid volume issue AFAIK, but I think it's enough that I can consider this closed. (The bounding problem can probably be opened as its own separate issue.)
While continuing to narrow down the rslice sampling issues leading to these kinds of errors, I have a few comments (no patch yet).

I think this line is counter-productive: https://github.com/joshspeagle/dynesty/blob/a55832be419e4e019de7142f9c9651d0acc31326/py/dynesty/sampling.py#L752 Slice sampling is guaranteed to succeed if the starting point satisfies the likelihood criterion (see Neal 2003), and that now seems to be the case. So I think we should get rid of that exception. If anything, we should raise a warning or exception if there are too many expansions.
Another point is the axlen value -- the step size for slice sampling.
First, it's a question whether axlen > 1 or axlen > sqrt(ndim)/2 makes sense. Possibly not, and we may need to truncate it (we just need to think about how that affects the scaling of slice sampling).
Now the question is why axlen is so large, i.e. why the axis length of an ellipsoid can be that large. Tests using the function from test_pathology.py show what can happen for some distributions of points, like the one shown here:
The ellipsoid parameters chosen from the covariance matrix can be such that the fmax value https://github.com/joshspeagle/dynesty/blob/a55832be419e4e019de7142f9c9651d0acc31326/py/dynesty/bounding.py#L1440 is > 50. That means the ellipsoid is expanded by that factor, which in turn (I think) can lead to a large axlen of ~700, and that can easily produce the "sampler stuck" message.
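If I understand the mechanism correctly, it can be sketched like this (`expand_to_cover` is my own illustrative name; dynesty computes fmax inside bounding.py rather than with this helper):

```python
import numpy as np

def expand_to_cover(ctr, cov, points):
    """Illustrative sketch (not dynesty's code) of the expansion step:
    fmax is the largest squared Mahalanobis distance of any point from
    the ellipsoid centre; multiplying the covariance by fmax scales
    every semi-axis by sqrt(fmax), so fmax > 50 stretches each axis by
    a factor of more than 7 in a single step."""
    prec = np.linalg.inv(cov)
    diff = points - ctr
    fmax = np.max(np.einsum('ij,jk,ik->i', diff, prec, diff))
    return cov * fmax, fmax
```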
Possible solutions:
And finally, while I was writing this, I continued the investigation. It turns out that the root cause of the problem is the volume scaling in bounding_ellipsoid(). Basically we have an extremely narrow ellipsoid, and at a certain point bounding_ellipsoid() is called with a volume of 0.001996 (I don't know where this comes from), and this basically blows the ellipsoid up to 700 times the cube size.
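A sketch of the kind of volume rescaling I suspect is at work (illustrative names, not the actual bounding_ellipsoid() code):

```python
import math
import numpy as np

def scale_to_volume(cov, vol_target):
    """Illustrative sketch (not bounding_ellipsoid() itself) of volume
    rescaling.  If the ellipsoid is smaller than vol_target, every
    semi-axis is multiplied by (vol_target / vol)**(1/ndim).  For a
    pencil-thin ellipsoid even a small target like 0.001996 forces a
    huge factor, which is how the long axis can blow up far past the
    unit cube."""
    ndim = cov.shape[0]
    unit_ball = math.pi ** (ndim / 2.0) / math.gamma(ndim / 2.0 + 1.0)
    vol = unit_ball * math.sqrt(np.linalg.det(cov))
    if vol >= vol_target:
        return cov
    # Scaling cov by f multiplies each semi-axis by sqrt(f).
    f = (vol_target / vol) ** (2.0 / ndim)
    return cov * f
```

This suggests the fix may need to either avoid inflating only the already-long axis, or cap the inflation factor, rather than scaling all axes uniformly.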
It's unclear what the best fix for that is...
The problem is demonstrated with the code