Closed segasai closed 2 years ago
I have found that the documentation already covers this issue: https://dynesty.readthedocs.io/en/stable/quickstart.html?highlight=queue_size#parallelization oops : ) And the curve shown there is exactly log(1 + queue_size/nlive)/(queue_size/nlive).
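For reference, a quick check (mine, not from the docs) that this analytic curve reproduces the numbers quoted in the original post below:

```python
import math

def parallel_efficiency(ratio):
    """Sampling efficiency as a function of ratio = queue_size / nlive,
    following the curve quoted above: log(1 + ratio) / ratio."""
    return math.log1p(ratio) / ratio

print(round(parallel_efficiency(1.0), 2))  # 0.69 -> the "~0.7" at queue_size = nlive
print(round(parallel_efficiency(2.0), 2))  # 0.55 at queue_size = 2 * nlive
```

Note the ratio -> 0 limit is 1, consistent with efficiency being 1 for a queue size of 1.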
I think the docs just need to be clarified a bit on that, as the distinction between queue_size and the number of parallel processes is not clearly explained.
addressed in 2236388ac60e9cf9a85dc57636c1b5a9c322c92a Closing
This is an attempt to provide justification for parallelization by dynesty.
Specifically, how to set queue_size and possibly the number of threads. The key question is understanding how queue_size affects efficiency, since the larger the queue size, the more samples are wasted.
Using the simplistic code below, which mimics nested sampling of an N-dimensional sphere (relying on the fact that for uniform samples within an N-dimensional ball, r^N is uniformly distributed), it is easy to show how efficiency depends on the queue size. It turns out that efficiency is essentially a pure function of queue_size/nlive: it is 1 for a queue size of 1 (obviously), drops to ~0.7 at queue_size/nlive = 1, and to ~0.55 at queue_size/nlive = 2. Here's the figure
And the code
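A minimal sketch of such a toy simulation (my reconstruction, not necessarily the original snippet): instead of sampling in N dimensions it tracks u = r^N directly, which is Uniform(0, 1) for points uniform in an N-ball, and counts proposals wasted when the shrinking bound overtakes queued draws.

```python
import numpy as np

def simulated_efficiency(nlive, queue_size, niter=20000, seed=0):
    """Toy nested sampling of a uniform N-ball, tracked via u = r^N."""
    rng = np.random.default_rng(seed)
    live = rng.uniform(size=nlive)  # u-values of the live points
    proposed = accepted = 0
    while accepted < niter:
        # Fill the queue: every proposal is drawn inside the bound that
        # was current when the batch was dispatched to the workers.
        bound = live.max()
        queue = rng.uniform(0.0, bound, size=queue_size)
        proposed += queue_size
        for u in queue:
            worst = live.argmax()
            if u < live[worst]:
                live[worst] = u     # replace the worst live point
                accepted += 1
            # else: wasted -- the bound has shrunk past this proposal
    return accepted / proposed
```

With queue_size = nlive this comes out near log(2) ~ 0.69, and near log(3)/2 ~ 0.55 at queue_size = 2 * nlive, matching the curve above.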
I don't yet have firm conclusions here, other than that it is good to have queue_size = Nthreads and queue_size < nlive. But it also seems that if Nthreads is large, it is not unreasonable to have a queue size comparable to, or even bigger than, nlive.
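To make the queue-size vs. worker-count distinction concrete, here is a hypothetical sketch (names and numbers are mine, not dynesty's API): the queue size controls how many proposals are batched per dispatch, while the worker count controls how many run concurrently, and the two are set independently.

```python
from concurrent.futures import ThreadPoolExecutor

def fill_queue(propose, queue_size, nworkers):
    # queue_size: how many proposals are generated per batch.
    # nworkers:   how many of them execute concurrently.
    # They are independent; with queue_size > nworkers the pool
    # simply works through the backlog, preserving input order.
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        return list(pool.map(propose, range(queue_size)))

# A queue of 8 proposals served by 4 workers:
batch = fill_queue(lambda i: i * i, queue_size=8, nworkers=4)
print(batch)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Setting queue_size = Nthreads, as suggested above, just makes each dispatched batch exactly fill the worker pool.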