Closed by kazewong 1 month ago
The `nan` values are usually observed in the output list of log-likelihoods when the code tries to generate a set of reference parameters for the heterodyned likelihood. When we search for the minimum log-likelihood using `argmin()` on a list containing `nan` values, the result is a set of `nan` reference parameters, which breaks the program downstream. I think it would be better to replace `argmin()` with `nanargmin()`, which finds the minimum log-likelihood while ignoring `nan` values. This lets the code proceed, but it should still emit a warning to notify the user that `nan` values were present in the log-likelihood list.
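A minimal sketch of the proposed fix: fall back to `nanargmin()` and warn the user when `nan` values are present (the helper name `safe_argmin` is illustrative, not an existing function in the codebase):

```python
import warnings
import numpy as np

def safe_argmin(values):
    """Index of the minimum; skip NaNs and warn the user if any are present."""
    values = np.asarray(values)
    if np.isnan(values).any():
        warnings.warn("NaN values present in the log-likelihood list; ignoring them.")
        return int(np.nanargmin(values))
    return int(np.argmin(values))

log_likelihoods = np.array([3.2, np.nan, 1.5, np.nan, 2.0])
best = safe_argmin(log_likelihoods)  # -> 2 (plain argmin would land on a NaN entry)
```

Note that plain `np.argmin` on this array returns the index of the first `nan` (NaN propagates through comparisons), which is exactly how the `nan` reference parameters arise.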
In the long run, as Kaze mentioned, it would be a good idea to get rid of sharp-cut boundaries, or to make sure we always transform the samples to unconstrained space first.
@xuyuon verified that the problem is the sharp boundaries. We need to change every prior with a sharp boundary to sample from the unconstrained uniform prior instead. Once the priors are properly implemented, the `nanargmin` should be changed back to `argmin`.
Continue in #98
Currently, the optimization routine uses a population Adam optimizer to compute the reference parameters. In some cases this optimizer can return `nan`, usually due to a boundary issue in the prior. Without modifying the code, using the `unconstrained_uniform` prior should alleviate, if not solve, the problem. As a first patch, it would be good to let the user decide what kind of optimizer they want to use. More importantly, I think we should reconsider setting hard boundaries in the prior at all: because the sampler relies heavily on gradients, we should probably discourage such behavior in general.