SpeysideHEP / spey

Smooth inference for reinterpretation studies
https://spey.readthedocs.io
MIT License

division by zero #9

Closed WolfgangWaltenberger closed 1 year ago

WolfgangWaltenberger commented 1 year ago

Question

No response

WolfgangWaltenberger commented 1 year ago

In hypothesis_testing/test_statistics.py:219, when the numbers are negative you clip them to zero. However, you then divide by that zero. If you clip them anyway, perhaps it would be wiser to clip sqrt_qmuA to a very small positive number?
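
A minimal sketch of the pattern I mean, assuming NumPy; the variable names (`qmu`, `qmuA`, `sqrt_qmuA`, `eps`) are placeholders for illustration, not the actual spey code:

```python
import numpy as np

# Illustration of the concern: clipping a slightly negative value to zero
# and then dividing by it yields inf under IEEE/NumPy semantics.
qmu, qmuA = 2.3, -1e-8                                # qmuA negative from numerical noise
sqrt_qmuA = np.sqrt(np.clip(qmuA, 0.0, None))         # clipped to exactly 0.0

with np.errstate(divide="ignore", invalid="ignore"):
    ratio = np.true_divide(qmu - qmuA, 2.0 * sqrt_qmuA)   # -> inf

# Alternative suggested above: clip to a tiny positive floor instead of zero,
# so the division stays finite (the floor value here is purely illustrative).
eps = 1e-10
sqrt_qmuA_safe = np.sqrt(np.clip(qmuA, eps, None))
ratio_safe = np.true_divide(qmu - qmuA, 2.0 * sqrt_qmuA_safe)
```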

WolfgangWaltenberger commented 1 year ago

Ah sorry, I didn't read the next line in the code.

jackaraz commented 1 year ago

Hi @WolfgangWaltenberger, I'm not sure whether it could introduce a bias: the $\log\mathcal{L} - \log\mathcal{L}_{min}$ ratio can get very small, so I don't want to clip above that value. I'm also afraid that values that are too small lead to numerical instability; for instance, I could set the floor to 1e-100, but I'm not sure whether that would create problems within NumPy. I'm testing these options at the moment.
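
For what it's worth, a quick double-precision check of the two floor choices (purely illustrative, not the spey code):

```python
import numpy as np

# Dividing by an exact zero gives inf (plus a RuntimeWarning); dividing by a
# 1e-100 floor stays finite in float64 but becomes astronomically large,
# which is the numerical-instability worry above.
num = np.float64(1.0)

with np.errstate(divide="ignore"):
    print(np.true_divide(num, 0.0))                  # inf
print(np.true_divide(num, 1e-100))                   # 1e+100, finite but huge
print(np.isfinite(np.true_divide(num, 1e-100)))      # True
```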

WolfgangWaltenberger commented 1 year ago

Will be testing as well. My mindset would be: we want these pathological cases to rightly inform the bracketing and optimization algorithms. If the right end of the bracket is absurdly far to the right, the bracketing algorithm should be able to deduce this from the numbers; if the optimizer gets lost in an extreme part of parameter space, the gradient should point it back into a sensible region.

WolfgangWaltenberger commented 1 year ago

I cannot find the motivation for truedivide(qmu - qmuA, 2.0 * sqrt_qmuA); is this described in the CCGV paper? Anyway, if qmuA is zero, then we do not have to worry about the centrality parameter, since evidently everything is nicely centred, so we only need sqrt_qmu as the final test statistic, no?

jackaraz commented 1 year ago

Hi @WolfgangWaltenberger, this is eq. 66 in CCGV.
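
For reference, my reading of the CCGV asymptotics (arXiv:1007.1727); please double-check against the paper, this is only a sketch of where the $2\sqrt{q_{\mu,A}}$ denominator enters. With $q_{\mu,A} = \mu^2/\sigma^2$ from the Asimov data set, the cumulative distribution of $\tilde{q}_\mu$ under the background-only hypothesis is piecewise:

$$
F(\tilde{q}_\mu \mid 0) =
\begin{cases}
\Phi\!\left(\sqrt{\tilde{q}_\mu} - \sqrt{q_{\mu,A}}\right), & 0 < \tilde{q}_\mu \le q_{\mu,A},\\[6pt]
\Phi\!\left(\dfrac{\tilde{q}_\mu - q_{\mu,A}}{2\sqrt{q_{\mu,A}}}\right), & \tilde{q}_\mu > q_{\mu,A},
\end{cases}
$$

so the $(q_\mu - q_{\mu,A})/(2\sqrt{q_{\mu,A}})$ term only belongs to the branch with $\tilde{q}_\mu > q_{\mu,A}$, and that branch is the one that needs protecting when $q_{\mu,A} \to 0$.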