JohannesBuchner / PyMultiNest

Pythonic Bayesian inference and visualization for the MultiNest Nested Sampling Algorithm and PyCuba's cubature algorithms.
http://johannesbuchner.github.io/PyMultiNest/

PyMultiNest crashes with Python 3.11 #238

Closed sahiljhawar closed 1 year ago

sahiljhawar commented 1 year ago

PyMultiNest is implemented in bilby, which we use for our Bayesian inference in NMMA. While working with Python 3.8, 3.9, and 3.10, all the unit tests pass; however, they fail with Python 3.11. Find below the error we received:

```
Traceback (most recent call last):
  File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/pymultinest/run.py", line 221, in loglike
    return LogLikelihood(cube, ndim, nparams)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: solve.<locals>.SafeLoglikelihood() missing 1 required positional argument: 'lnew'
```
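For context, the TypeError is a plain arity mismatch: the callback defined inside `solve()` takes four arguments but is invoked with three. A hypothetical stand-alone reproduction (the function body here is illustrative, not the real wrapper):

```python
# Hypothetical stand-in for the wrapper defined inside solve():
# a four-argument callback invoked with only three arguments
# raises the same TypeError as in the traceback above.
def SafeLoglikelihood(cube, ndim, nparams, lnew):
    return -0.5 * sum(x * x for x in cube[:ndim])

SafeLoglikelihood([0.5], 1, 1)
# TypeError: SafeLoglikelihood() missing 1 required positional argument: 'lnew'
```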

In case you want to know more about the implementation and want to have a quick look at the failure, here's the commit in our repo.

JohannesBuchner commented 1 year ago

The error you copied is different from the one in the CI log, which ran MultiNest successfully but just did not like the output.

See also https://github.com/brinckmann/montepython_public/issues/144

sahiljhawar commented 1 year ago

If by the CI logs you mean the error ValueError: could not convert string ...{some extremely small number}, then this is a usual error when sampling fails, as far as I know. But the error mentioned above only happens with Python 3.11; until now we haven't faced this issue with any of the other Python versions.

thjsal commented 1 year ago

I remember getting the same (or at least a very similar) error when I was having the problem in #237. So maybe you are using the latest PyPI version of PyMultiNest, and using the latest GitHub version would solve the problem? I could be wrong, though.

sahiljhawar commented 1 year ago

Sorry, I didn't understand: "...using the latest GitHub version would solve the problem?"

JohannesBuchner commented 1 year ago

The sampling has not failed, but Fortran writes the result out in a weird format that is not readable. This happens when the likelihood is < -1e300.
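For what it's worth, here is a minimal sketch of how such output could be recovered, assuming the unreadable value is the classic Fortran quirk of dropping the 'E' when the exponent needs three digits (parse_fortran_float is an illustrative helper, not part of PyMultiNest):

```python
import re

def parse_fortran_float(token):
    # Fortran may write e.g. '0.123456789-305' instead of
    # '0.123456789E-305' when the exponent needs three digits.
    try:
        return float(token)
    except ValueError:
        # Re-insert the missing exponent marker and retry.
        fixed = re.sub(r'(\d)([+-]\d{3,})$', r'\1E\2', token.strip())
        return float(fixed)

print(parse_fortran_float('0.123456789-305'))  # 1.23456789e-306
```

Alternatively, the problem can be sidestepped on the user side by clamping the log-likelihood callback's return value to something like max(logl, -1e300).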

sahiljhawar commented 1 year ago

Umm okay, thanks for the clarification on that. But the issue still persists: I saw that there's a merge and the tests run with Python 3.11 without failing, whereas they fail with our tests.

JohannesBuchner commented 1 year ago

Please post a minimal test case here.

sahiljhawar commented 1 year ago

I tried with Python 3.11 and things are not working out. While testing our package's scripts, a "sampler not found" error occurs with MPI, and the SafeLoglikelihood error occurs without MPI. But if I perform the tests with the examples given in your repo, they pass. Also, pymultinest from conda doesn't seem to work; I installed it with pip and built MultiNest from source.

sahiljhawar commented 1 year ago

Therefore I can't provide a test case that fails the way ours does.

JohannesBuchner commented 1 year ago

OK, it sounds like the issue lies somewhere in your pipeline, so I am closing this. For the conda install, if you have an actionable error message beyond "does not seem to work", please open a new issue.

sahiljhawar commented 1 year ago

Reopening this issue since I found the reason why it was not working. I again tried everything from scratch: first I installed pymultinest from conda, in order not to build MultiNest from source. This version still had the same SafeLoglikelihood problem. Then I removed the conda-installed pymultinest, cloned your repo, and installed pymultinest from there. And it worked. So maybe updating the conda package with the latest version available on GitHub will work.

In my opinion, the absence of these lines, https://github.com/JohannesBuchner/PyMultiNest/blob/c8eba95ecc9583c6662e868e74bbd9992b71a787/pymultinest/run.py#L211-L215, in the PyPI and conda packages is what breaks this, as previously mentioned in #237 as well.
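For reference, those lines amount to checking the arity of the user's callback before invoking it. A minimal sketch of that idea (not the exact upstream code; call_loglike and its parameters are illustrative):

```python
import inspect

def call_loglike(LogLikelihood, cube, ndim, nparams, lnew=0.0):
    # Dispatch on the callback's arity so that both
    # (cube, ndim, nparams) and (cube, ndim, nparams, lnew)
    # signatures are accepted.
    if len(inspect.signature(LogLikelihood).parameters) == 4:
        return LogLikelihood(cube, ndim, nparams, lnew)
    return LogLikelihood(cube, ndim, nparams)
```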

sahiljhawar commented 1 year ago

@thjsal And now your comment makes more sense 😅