Closed zairving closed 1 year ago
Can you try sampling 100000 points from the prior, and evaluating their likelihood?
for i in range(1000):
    p = [prior_transform(np.random.uniform(size=ndim)) for _ in range(100)]
    L = [log_likelihood(pi) for pi in p]
    print(max(L))
see https://johannesbuchner.github.io/UltraNest/debugging.html#Finding-model-bugs
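A self-contained version of this sanity check, with a toy Gaussian likelihood and a simple uniform prior standing in for the real (tinygp) model — the prior range and likelihood here are illustrative assumptions, not from the issue:

```python
import numpy as np

ndim = 2

def prior_transform(u):
    # toy prior: map the unit cube to [-5, 5] in each dimension
    return 10.0 * u - 5.0

def log_likelihood(theta):
    # toy standard-Gaussian log-likelihood standing in for the real model
    return -0.5 * np.sum(theta**2)

# sample 100000 points from the prior and track the best log-likelihood;
# a NaN/inf here (or steadily growing memory) points at a model bug
best = -np.inf
for _ in range(1000):
    points = [prior_transform(np.random.uniform(size=ndim)) for _ in range(100)]
    logls = [log_likelihood(p) for p in points]
    best = max(best, max(logls))
print(best)
```

With this toy model the printed maximum should be finite and close to 0 (the peak of the Gaussian); with the real model, any non-finite value flags a likelihood bug.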
Hi Johannes,
Thanks for the quick, and helpful, reply!
Doing as you suggested, it seems the problem is with tinygp. Without jitting loglike, I got the same runaway memory usage, but jitting seems to plug the leak. I'll open an issue on the tinygp repository instead.
Description
I'm trying to use UltraNest to perform model selection with tinygp models, but when I run my script it slowly eats all the available memory on my machine (unless it converges before my system runs out of memory). I've found two stopgap solutions, but both are flawed:
1) JIT-compiling the likelihood function (i.e., with a @jax.jit decorator) seems to solve the issue, but this restricts me to jittable likelihood functions, which I can't use in all cases.
2) I've written a bash script which runs my script and checks how much memory it's occupying every 30 seconds; if the memory usage gets too high, it kills the process and restarts the script (resuming from log_dir). This is also problematic because it requires the points in log_dir to be updated before the process is killed, which is not guaranteed when the sampling is slow and/or the efficiency is low.
I've used both tinygp and UltraNest independently without issue, but together something seems to be going wrong.
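As a rough sketch of workaround (2), a watchdog of this kind could also be written in Python; the memory limit, the /proc-based RSS reading (Linux only), and the command are illustrative assumptions, not the actual bash script from the issue:

```python
import subprocess
import time

MEM_LIMIT_KB = 8 * 1024 * 1024  # assumed limit: 8 GiB

def rss_kb(pid):
    """Read a process's resident set size in kB from /proc (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

def run_with_watchdog(cmd, poll_seconds=30):
    """Run cmd, polling its memory; kill and restart it if usage gets too high.

    Relies on the sampler resuming from log_dir when restarted.
    """
    while True:
        proc = subprocess.Popen(cmd)
        while proc.poll() is None:
            time.sleep(poll_seconds)
            if rss_kb(proc.pid) > MEM_LIMIT_KB:
                proc.kill()  # restart; the script resumes from log_dir
                proc.wait()
                break
        else:
            return proc.returncode  # sampler finished normally
```

As the issue notes, this only works if log_dir has been updated since the last accepted points; killing the process mid-iteration can otherwise lose progress.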
What I Did
Here's an example script I adapted from a tutorial in tinygp's documentation which illustrates my problem: