dingo-gw / dingo

Dingo: Deep inference for gravitational-wave observations

Detector Calibration Uncertainty #117

Closed nihargupte-ph closed 1 year ago

nihargupte-ph commented 1 year ago

For PE review, including detector calibration uncertainty is a requirement. This could be implemented using a setup similar to RIFT's, which does importance sampling on the calibration-weighted data. The code is found at https://git.ligo.org/ethan.payne/generic-calibration-reweighting (LIGO credentials required).

The idea right now is to add an optional transform to dingo.gw.inference.injection.GWSignal._initialize_transform so that the multiplicative calibration envelope can be applied.
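
For illustration, here is a minimal sketch of what such a multiplicative-envelope transform could look like, assuming a dict-in/dict-out transform interface in the style of dingo's other waveform transforms; the class name, constructor, and sample-dict keys are all hypothetical:

```python
import numpy as np

class ApplyCalibrationEnvelope:
    """Hypothetical transform: multiply the frequency-domain strain in each
    detector by a calibration curve C(f). The sample layout
    {"waveform": {ifo_name: h(f)}} is an assumed interface, not dingo's
    actual API.
    """

    def __init__(self, calibration_curves):
        # calibration_curves: dict mapping detector name -> complex array C(f)
        self.calibration_curves = calibration_curves

    def __call__(self, input_sample):
        sample = input_sample.copy()
        sample["waveform"] = {
            ifo: h * self.calibration_curves[ifo]
            for ifo, h in sample["waveform"].items()
        }
        return sample
```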

nihargupte-ph commented 1 year ago

Update:

The code that was doing the reweighting of RIFT samples calls bilby on the backend. Looking into this more, there is a bilby function for this under bilby.gw.likelihood.base.GravitationalWaveTransient.log_likelihood_ratio. There the noise-weighted inner product $\langle d(f) | h(f) \rangle$ is calculated, where h is the waveform and d is the strain data. Except instead of just calculating

$$ \langle h(f), d(f)\rangle = 4 \Delta f \sum_f \frac{h^*(f)\, d(f)}{\mathrm{PSD}(f)} $$

We get an array

$$ \langle h(f), d(f)\rangle_i = 4 \Delta f \sum_f \frac{h^*(f)\, d(f)}{\mathrm{PSD}(f)}\, C_i(f) $$

Here $C_i(f)$ are the calibration draws; they can be determined on an event-by-event basis. Then we can marginalize over calibration uncertainty by computing, for each draw, the log-likelihood with calibration applied (bilby.gw.likelihood.base.GravitationalWaveTransient.calibration_marginalized_likelihood):
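
For concreteness, a sketch of computing the per-draw inner products in a vectorized way; all names here are placeholders, and since the draw multiplies the template, the self inner product $\langle h, h \rangle_i$ picks up $|C_i(f)|^2$:

```python
import numpy as np

def calibrated_inner_products(h, d, psd, C, delta_f):
    """Per-draw inner products against calibration draws C_i(f).

    h, d : complex frequency-domain template and data, shape (n_freq,)
    psd  : real PSD array, shape (n_freq,)
    C    : complex calibration draws, shape (n_draws, n_freq)

    Returns <d, h C_i> and <h C_i, h C_i>, each of shape (n_draws,).
    """
    hC = h * C  # calibrated templates, shape (n_draws, n_freq)
    d_inner_h = 4 * delta_f * np.sum(np.conj(d) * hC / psd, axis=-1)
    h_inner_h = 4 * delta_f * np.sum(np.abs(hC) ** 2 / psd, axis=-1)
    return d_inner_h, h_inner_h
```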

$$ \log \mathcal{L}_i(d \mid \theta, C_i) = \Re \langle d, h \rangle_i - \frac{1}{2} \langle h, h \rangle_i $$

Averaging the corresponding likelihoods (not the log-likelihoods) over $i$ (the calibration draws), i.e. $\frac{1}{N} \sum_i \mathcal{L}_i$, gives the likelihood marginalized over calibration; in log space this is a logsumexp over the per-draw log-likelihoods minus $\log N$.
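
A minimal sketch of that average, computed stably in log space (using the placeholder inner products from the sketch above):

```python
import numpy as np
from scipy.special import logsumexp

def log_likelihood_calibration_marginalized(d_inner_h, h_inner_h):
    """log[(1/N) sum_i L_i] with log L_i = Re<d, h C_i> - <h C_i, h C_i>/2."""
    log_l = np.real(d_inner_h) - 0.5 * np.real(h_inner_h)
    return logsumexp(log_l) - np.log(len(log_l))
```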

I am wondering whether, instead of creating a transform in dingo.gw.inference.injection.GWSignal._initialize_transform, it would be easier to write a function in dingo.gw.likelihood, similar to the time- and phase-marginalized likelihood functions, that would generate the $C_i(f)$ and marginalize over them. Though I'm not sure how this would play with the phase marginalization...

In either case, the first step would be to create a function that generates the $C_i(f)$, so I can work on that for now. Let me know, though, which method would fit better with dingo.
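
As one possible starting point: LIGO calibration envelopes are commonly parameterized as cubic splines in amplitude and phase at log-spaced frequency nodes, so draws could be generated along these lines. The node placement, Gaussian node priors, and all parameter names below are assumptions for illustration, not the actual envelope format:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def draw_calibration_curves(frequencies, f_min, f_max, amp_sigma, phase_sigma,
                            n_nodes=10, n_draws=100, rng=None):
    """Draw C_i(f) = (1 + dA_i(f)) * exp(1j * dphi_i(f)), where dA and dphi
    are cubic splines through Gaussian draws at log-spaced nodes. The
    per-node standard deviations (amp_sigma, phase_sigma) would come from
    the event's calibration envelope; here they are free parameters.

    Returns a complex array of shape (n_draws, len(frequencies)).
    """
    rng = np.random.default_rng() if rng is None else rng
    nodes = np.log(np.geomspace(f_min, f_max, n_nodes))  # log-spaced spline nodes
    log_f = np.log(frequencies)

    curves = np.empty((n_draws, len(frequencies)), dtype=complex)
    for i in range(n_draws):
        d_amp = rng.normal(0.0, amp_sigma, n_nodes)      # fractional amplitude offsets
        d_phase = rng.normal(0.0, phase_sigma, n_nodes)  # phase offsets [rad]
        amp = CubicSpline(nodes, d_amp)(log_f)
        phase = CubicSpline(nodes, d_phase)(log_f)
        curves[i] = (1.0 + amp) * np.exp(1j * phase)
    return curves
```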

nihargupte-ph commented 1 year ago

Update on this: I tried using the GW150914 Unconditional NDE and importance sampled with and without calibration uncertainty. Without calibration uncertainty I get ESS ~30%; with calibration uncertainty I get ESS ~10%. This is for 10_000 points. Is this OK? If I set it to 100_000 points this goes up to 16%.
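
Here ESS refers to the effective sample size of the importance weights as a fraction of the number of points, $\mathrm{ESS} = (\sum_i w_i)^2 / \sum_i w_i^2$; a minimal sketch of computing it from log weights:

```python
import numpy as np

def sample_efficiency(log_weights):
    """ESS / N with ESS = (sum_i w_i)^2 / sum_i w_i^2."""
    w = np.exp(log_weights - np.max(log_weights))  # stabilize before exponentiating
    return (np.sum(w) ** 2 / np.sum(w ** 2)) / len(w)
```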

nihargupte-ph commented 1 year ago

Update: with the latest PR, which marginalizes over 100 response curves, we get a sampling efficiency of 29.5% for GW150914, sacrificing almost no performance relative to the 30% without calibration marginalization. Included below is a plot of GW150914 showing how the marginals change with and without importance sampling (no KDE).

[plot: GW150914 posterior marginals with and without importance sampling]

Perhaps one last thing to check is to run this on a low-SNR event, but otherwise this is ready for merge.