Closed. stephengreen closed this issue 1 year ago.
In fact, the function is simple enough that we could easily reimplement it for torch tensors within Dingo. That might be the best approach for now, since it would let us stay flexible with respect to the Bilby version.
Thanks for opening the issue. I will take a look at it.
Hmm, I'm running into this as well when running `run_sampler`. If anyone is looking for a temp fix, `pip install bilby==1.2.0` works.
I was trying to Cythonize the batching function, but it sounds like this is under control?
This was addressed in #126. I believe @max-dax has identified one other bug that needs fixing here.
This bug is fixed in commit 0b2fa924f4afb45888193dfc64981bf6fde77909.
A recent update of Bilby (https://git.ligo.org/lscsoft/bilby/-/commit/0241560a732bfa3457f4222a31120f0ec48f4a8f) has broken our `GetDetectorTimes` transform when used for inference. During inference, `GetDetectorTimes` calls a Bilby method on batched tensors `ra` and `dec`. This used to work in the previous version of Bilby, but the recent change replaced this method with a version using `bilby.cython` in the back end. The new version appears to be an order of magnitude faster, but it does not support batching. To fix this, we should pin
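The failure mode can be reproduced without Bilby: any back end typed for scalar doubles (as compiled Cython kernels typically are) raises as soon as it receives a batched array. A hypothetical stand-in:

```python
import math
import numpy as np

def scalar_only_time_delay(ra, dec):
    # Hypothetical stand-in for a compiled routine that only accepts
    # scalar doubles, analogous to the new bilby.cython back end
    return math.cos(ra) * math.sin(dec)

print(scalar_only_time_delay(0.1, 0.2))  # scalar input: works

try:
    scalar_only_time_delay(np.zeros(8), np.zeros(8))  # batched input
except TypeError as err:
    print(f"batched call fails: {err}")
```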
`setup.py` to a Bilby version number that does not include the `bilby.cython` calculation.
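Concretely, the pin could look like this in `setup.py` (an illustrative excerpt, not Dingo's full setup script; `bilby==1.2.0` is the version reported to work in the temp fix above, and any release predating the `bilby.cython` change would do):

```python
from setuptools import setup

setup(
    name="dingo",  # illustrative excerpt only
    install_requires=[
        # Pin below the release that introduced the bilby.cython back end
        "bilby<=1.2.0",
    ],
)
```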