FengqiuAdamDong closed this issue 8 months ago
Yes, I would say so.
At least I did it by basically assuming a continuous parameter with some prior between 0 and K, which you can then truncate to an integer. This is implemented here: https://github.com/segasai/mdf_modeling_paper (for this paper: https://ui.adsabs.harvard.edu/abs/2023MNRAS.520.6091D/abstract )
At least for me it worked fine, but I wouldn't rule out a possibility of some convergence difficulties, depending on the problem.
Specifically, in my case I used a prior transform like this:
$n = 10^{-1 + (\mathrm{MAXLOGN} + 1)\,x}$
where $x$ is the U(0,1) parameter, and then truncated $n$ to an integer in the likelihood.
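A minimal sketch of that transform in dynesty's prior-transform convention (the value of `MAXLOGN` here is a made-up example, and `truncate` just illustrates the integer cast done inside the likelihood):

```python
import numpy as np

MAXLOGN = 4  # hypothetical example: log10 of the largest n considered

def prior_transform(u):
    # map u ~ U(0,1) onto n between 10^-1 and 10^MAXLOGN, log-uniformly
    x = u[0]
    n = 10 ** (-1 + (MAXLOGN + 1) * x)
    return np.array([n])

def truncate(theta):
    # inside the likelihood, n is truncated to an integer
    return int(theta[0])
```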
Thank you, I read the paper, and it seems you added a small amount of deterministic noise to the likelihood. But in the footnote you say this is no longer needed in the new dynesty version. So if I were to recreate what you did with the truncation, I no longer need to add the noise, is that correct?
Yes, the 2.1 version does not require any artificial noise, as it supports likelihoods with plateaus.
I implemented a discrete prior transform, specifically this one: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.randint.html and everything seems to work. Is there any reason to prefer one approach over the other?
If the PPF of the distribution maps U(0,1) into discrete numbers, that is probably fine as well. I personally wasn't sure whether the discreteness after the prior transform is an issue or not (maybe not).
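The PPF-based discrete prior transform discussed above might look like this (a sketch; the bound `K` is a made-up example value):

```python
import numpy as np
from scipy.stats import randint

K = 100  # hypothetical upper bound on the integer parameter

def prior_transform(u):
    # randint.ppf maps U(0,1) directly onto integers in [1, K],
    # so the parameter is already discrete after the transform
    return np.array([randint.ppf(u[0], 1, K + 1)])
```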
I assume this is solved. Feel free to reopen.
Dynesty version
2.1.1, installed via pip

Your question
Hello, my question centers around discretised prior distributions. In my case, I have a binomial likelihood and I want to sample over N, the number of trials. Is there any way to do this in dynesty?
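For the original question, a binomial likelihood with N as the free parameter, the ingredients discussed in this thread could be combined as below. This is only a sketch: `k_obs`, `p`, and `N_MAX` are made-up example values, and the dynesty call is left commented since it is just the standard `NestedSampler` interface.

```python
import numpy as np
from scipy.stats import binom, randint

k_obs = 7    # hypothetical observed number of successes
p = 0.3      # hypothetical known per-trial success probability
N_MAX = 50   # hypothetical upper bound on the number of trials N

def prior_transform(u):
    # discrete uniform prior on N over [k_obs, N_MAX]
    # (N cannot be smaller than the observed number of successes)
    return np.array([randint.ppf(u[0], k_obs, N_MAX + 1)])

def log_likelihood(theta):
    # truncate to an integer inside the likelihood
    n = int(theta[0])
    return binom.logpmf(k_obs, n, p)

# With dynesty >= 2.1 (which handles the likelihood plateaus that
# discreteness creates), this can then be sampled as:
# import dynesty
# sampler = dynesty.NestedSampler(log_likelihood, prior_transform, ndim=1)
# sampler.run_nested()
```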