Open stephengreen opened 1 year ago
Could you point me to a directory containing an example which I can try to replicate?
I wonder if it's all the repeated calls to get_calibration_factor(), and if this could be better optimized.
But I also don't remember it being this slow. Maybe it's the machine I am running on (saraswati).
> Could you point me to a directory containing an example which I can try to replicate?

On saraswati, /data/sgreen/dingo-experiments/pipe/speed_gnpe/GW150194.ini.
Calculating 484 likelihoods.
Done. This took 16264.02 seconds.
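For scale, the quoted log works out to roughly 34 seconds per likelihood evaluation:

```python
# Per-likelihood cost implied by the log above.
n_likelihoods = 484
total_seconds = 16264.02
per_likelihood = total_seconds / n_likelihoods
print(f"{per_likelihood:.1f} s per likelihood")  # -> 33.6 s per likelihood
```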
Indeed I think it is repeated calls to get_calibration_factor. This is a function from Bilby to get the calibration curve from the calibration parameters. Maybe there is a way to vectorize this operation given a matrix of the calibration parameters.
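If the per-draw spline calls are the bottleneck, one option is to evaluate all draws in a single broadcast. A minimal sketch only, with hypothetical names: it uses piecewise-linear interpolation via searchsorted broadcasting and a simplified (1 + dA) * exp(i * dphi) factor in place of bilby's cubic spline and exact formula.

```python
import numpy as np

def calibration_factors_vectorized(frequencies, nodes, amp_draws, phase_draws):
    """Evaluate calibration factors for a matrix of parameter draws at once.

    frequencies : (F,) frequencies at which the factor is needed
    nodes       : (K,) spline-node frequencies
    amp_draws, phase_draws : (N, K), one row of node values per draw

    Sketch only: piecewise-linear interpolation stands in for bilby's
    cubic spline; the point is the (N, F) broadcast shape, avoiding
    one get_calibration_factor() call per draw.
    """
    # Interval index and interpolation weight for every target frequency.
    idx = np.clip(np.searchsorted(nodes, frequencies) - 1, 0, nodes.size - 2)
    t = (frequencies - nodes[idx]) / (nodes[idx + 1] - nodes[idx])  # (F,)
    # Broadcasting: (N, F) = (N, F) * (F,) + (N, F) * (F,)
    d_amp = amp_draws[:, idx] * (1 - t) + amp_draws[:, idx + 1] * t
    d_phase = phase_draws[:, idx] * (1 - t) + phase_draws[:, idx + 1] * t
    # Simplified calibration-factor form (not necessarily bilby's exact one).
    return (1.0 + d_amp) * np.exp(1j * d_phase)  # (N, F)
```

With N draws stacked as rows, this replaces N separate spline evaluations with one set of array operations over the full frequency grid.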
Even if this takes 40x the waveform generation time, the latter is quite fast for XPHM, and wouldn't explain 400s per likelihood evaluation.
I think it would be worth testing this in isolation: set up a CubicSpline object and repeatedly call get_calibration_factor() for the envelopes used in this run. I can't see anything that would be very slow in https://git.ligo.org/lscsoft/bilby/-/blob/master/bilby/gw/detector/calibration.py#L242.
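One quick way to time this in isolation, as a stand-in micro-benchmark: np.interp below is a placeholder for the spline evaluation, and the grid sizes mirror the settings quoted in this issue. For the real test, substitute the actual CubicSpline object and get_calibration_factor() call from this run.

```python
import timeit
import numpy as np

# Frequency grid matching the quoted settings
# (f_min=20.0, f_max=1024.0, delta_f=0.125 -> 8033 bins).
frequencies = np.arange(20.0, 1024.0 + 0.125, 0.125)
# A plausible spline setup: 10 nodes, small random amplitude offsets.
nodes = np.linspace(20.0, 1024.0, 10)
values = np.random.default_rng(0).normal(0.0, 0.05, 10)

def one_call():
    # Stand-in for a single get_calibration_factor() evaluation.
    return np.interp(frequencies, nodes, values)

n_calls = 1000
seconds = timeit.timeit(one_call, number=n_calls)
print(f"{seconds / n_calls * 1e6:.1f} us per stand-in call")
```

If the real per-call time multiplied by (number of calibration draws) x (number of likelihoods) falls far short of the observed runtime, the bottleneck is elsewhere.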
I am finding that when calibration marginalization is turned on, it significantly slows down likelihood evaluations. This is expected to some degree, but I would have thought it would be much faster.
For IMRPhenom waveforms (f_min=20.0, f_max=1024.0, delta_f=0.125) and the INI settings used here, ~500 likelihood evaluations take many minutes.
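For scale, those settings imply a frequency grid of roughly 8000 bins, which every calibration-factor evaluation has to cover:

```python
# Size of the frequency grid implied by the quoted settings.
f_min, f_max, delta_f = 20.0, 1024.0, 0.125
n_bins = int((f_max - f_min) / delta_f) + 1
print(n_bins)  # -> 8033
```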