Closed by bilby-bot 2 weeks ago
In GitLab by @git.ligo:alexandresebastien.goettel on Feb 23, 2024, 11:20
Hi @git.ligo:colm.talbot, @git.ligo:michael.williams and I have been looking at this issue. The problem is that when the number of points (`u_list`) is too small compared to the autocorrelation length, the number `n_found` of points remaining after thinning is zero, leading to `ZeroDivisionError`s down the line. This is in principle independent of `nact`, though of course it happens more quickly when `nact` is large.
It seems like an easy fix: we could add a clause to simply return the current points in that case (the same thing that happens when the estimated act is infinite). However, given that there are several ways to go about this, depending on the details of the implementation, we thought that discussing it with you would be the best way forward. What do you think?
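For illustration, here is a minimal sketch of the failure mode and the proposed guard. The function name `thin_chain` and its signature are hypothetical (this is not bilby's actual implementation); only the idea of thinning by `nact` times the estimated act, and falling back to the current point when nothing survives, follows the discussion above:

```python
import math

def thin_chain(u_list, act, nact, current_point):
    """Hypothetical sketch (not bilby's actual code) of thinning a chain
    by nact * act and guarding against an empty result."""
    if not math.isfinite(act):
        # Existing behaviour described above: an infinite estimated act
        # falls back to returning the current point
        return [current_point]
    thin = max(1, int(nact * act))
    thinned = u_list[thin - 1::thin]  # keep every thin-th point
    n_found = len(thinned)
    if n_found == 0:
        # Proposed guard: the chain is shorter than one thinning interval,
        # so return the current point instead of letting downstream code
        # divide by n_found == 0
        return [current_point]
    return thinned

# A 20-point chain with nact * act = 150: no points survive thinning,
# so the guard returns the current point instead of an empty list
chain = [[0.1 * i, 0.2 * i] for i in range(20)]
out = thin_chain(chain, act=5.0, nact=30, current_point=chain[-1])
```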
Thanks!
In GitLab by @git.ligo:michael.williams on Oct 3, 2024, 17:55
unassigned @git.ligo:alexandresebastien.goettel
In GitLab by @git.ligo:noah.wolfe on Feb 10, 2023, 20:07
When running PE on a BBH injection with `dynesty`, using the new `act-walk` sampling method, if the number of autocorrelation lengths `nact` is large (probably > 10; in the example below, I used `nact = 30`), eventually a division-by-zero error gets thrown. This occurs because we thin the MCMC chain used to propose the next dynesty step by `nact`, but these chains will be closer to ~a few autocorrelation lengths with the improvements to the computation of the autocorrelation length in the `act-walk` method. (Thanks for the help in understanding this issue @git.ligo:colm.talbot, please feel free to edit my remarks here.)

Code version: 15cb27e0 on the dynesty-differential branch
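The arithmetic behind the failure can be made concrete with a hedged sketch (the specific numbers are illustrative, not from the run above): with `act-walk`, chains are only ~a few autocorrelation lengths long, so a large `nact` makes the thinning interval longer than the chain itself.

```python
act = 10              # illustrative estimated autocorrelation length
chain_length = 3 * act  # ~a few act, as described above for act-walk
nact = 30             # large nact, as in the reported run

thin = nact * act                # thinning interval: 300
n_found = chain_length // thin   # points surviving thinning: 0
# Any later expression dividing by n_found then raises ZeroDivisionError
```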
Log snippet with dynesty settings:
Error trace: