scikit-hep / pyhf

pure-Python HistFactory implementation with tensors and autodiff
https://pyhf.readthedocs.io/
Apache License 2.0

Additional documentation needed on difference between calculator type test statistic return type #1792

Open matthewfeickert opened 2 years ago

matthewfeickert commented 2 years ago

Summary

There is a large discrepancy in the value of the discovery test statistic depending on whether it is computed with an asymptotics-based or a toy-based calculator.

Related Issues:

OS / Environment

$ cat /etc/os-release 
NAME="Ubuntu"
VERSION="20.04.4 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.4 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

Steps to Reproduce

# issue_1792.py
import pyhf

def discovery_from_test_stat(model, data):
    init_pars = model.config.suggested_init()
    par_bounds = model.config.suggested_bounds()
    fixed_params = model.config.suggested_fixed()
    return pyhf.infer.test_statistics.q0(
        0.0,
        data,
        model,
        init_pars,
        par_bounds,
        fixed_params,
    )

def discovery_from_calculator(model, data, calc_type="asymptotics", **kwargs):
    calculator = pyhf.infer.utils.create_calculator(
        calc_type, data, model, test_stat="q0", **kwargs
    )
    return calculator.teststatistic(0.0)

def discovery_from_asymptotic_calculator(model, data):
    calculator = pyhf.infer.calculators.AsymptoticCalculator(
        data, model, test_stat="q0"
    )
    return calculator.teststatistic(0.0)

if __name__ == "__main__":
    # Single-bin model: signal of 25 events on a background of 2500 +/- 2.5
    model = pyhf.simplemodels.uncorrelated_background([25], [2500], [2.5])
    # Observed data: a 100 event excess over the background expectation
    data = [2600] + model.config.auxdata

    discovery_test_stat = discovery_from_test_stat(model, data)
    print(f"Discovery test stat from infer.test_statistics.q0: {discovery_test_stat}")

    discovery_test_stat = discovery_from_calculator(model, data, calc_type="toybased")
    print(
        f"Discovery test stat from toy based infer.utils.create_calculator: {discovery_test_stat}"
    )

    discovery_test_stat = discovery_from_calculator(model, data)
    print(
        f"Discovery test stat from asymptotics infer.utils.create_calculator: {discovery_test_stat}"
    )

    discovery_test_stat = discovery_from_asymptotic_calculator(model, data)
    print(
        f"Discovery test stat from infer.calculators.AsymptoticCalculator: {discovery_test_stat}"
    )


Expected Results

The discovery test statistic would be the same regardless of calculator type.

Actual Results

$ python issue_1792.py 
Discovery test stat from infer.test_statistics.q0: 3.9377335959507036
Discovery test stat from toy based infer.utils.create_calculator: 3.9377335959507036
Discovery test stat from asymptotics infer.utils.create_calculator: 1.485845489706193
Discovery test stat from infer.calculators.AsymptoticCalculator: 1.485845489706193

pyhf Version

9fd99be886349a90e927672e950cc233fad0916c on master

Code of Conduct

alexander-held commented 2 years ago

I don't understand how the first two results are exactly the same, but if this is a toy vs. non-toy issue with a 4 sigma effect (which, looking at the setup, would roughly make sense), are you sure you have enough toys to evaluate this (several 10k)?

matthewfeickert commented 2 years ago

are you sure you have enough toys to evaluate this (several 10k)?

Here I'm using the calculator API in a strange way as only 1 experiment is being evaluated, so there really isn't any pseudoexperiment generation happening. I should give a better example later.

https://github.com/scikit-hep/pyhf/blob/9fd99be886349a90e927672e950cc233fad0916c/src/pyhf/infer/calculators.py#L951-L960

alexander-held commented 2 years ago

In the previous comment I missed the sqrt required to go from q0 to the significance; this should of course be a 2 sigma effect: sqrt(2500) = 50, so a 100 event excess over background-only with negligible background uncertainty will be a 2 sigma effect. That agrees perfectly with the first two numbers.
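As a quick cross-check (a stdlib-only sketch; the q0 value is copied from the Actual Results above), the asymptotic relation Z = sqrt(q0) reproduces the naive s/sqrt(b) counting estimate:

```python
import math

# Discovery test statistic from infer.test_statistics.q0 (Actual Results above)
q0 = 3.9377335959507036

# Asymptotic approximation: discovery significance Z = sqrt(q0)
Z = math.sqrt(q0)

# Naive counting-experiment estimate: s / sqrt(b) with a 100 event excess
naive_Z = 100 / math.sqrt(2500)

print(f"Z = {Z:.3f}")            # approximately 2
print(f"s/sqrt(b) = {naive_Z}")  # 2.0
```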

alexander-held commented 2 years ago

The last two numbers scale with the number of signal events, so these are something else.

matthewfeickert commented 2 years ago

The last two numbers scale with the number of signal events, so these are something else.

Yeah. The fact that the two calculators move in opposite directions as the signal increases indicates there's a problem.

alexander-held commented 2 years ago

At the moment this doesn't run any toys as far as I can tell, so I'm assuming this is a question about the calculators, and not about asymptotic vs toy agreement in general?

matthewfeickert commented 2 years ago

At the moment this doesn't run any toys as far as I can tell, so I'm assuming this is a question about the calculators, and not about asymptotic vs toy agreement in general?

Yes, I didn't phrase the original text clearly as I was dumping things in for myself to clarify later.

matthewfeickert commented 2 years ago

Okay, @kratsg has pointed out that the behavior of the calculator APIs is (known to be) not consistent between the asymptotic and toy-based calculators, as the asymptotic calculator returns p-values (so values in the space of cumulative distributions of the test statistic)

https://github.com/scikit-hep/pyhf/blob/9fd99be886349a90e927672e950cc233fad0916c/src/pyhf/infer/__init__.py#L169-L178

while the toy-based calculator returns q test statistics. At the very least this needs to be made much clearer in the docs.
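To make the two spaces concrete: in the asymptotic approximation the discovery p-value corresponding to a q0-space test statistic is p0 = 1 - Phi(sqrt(q0)) (the standard asymptotic formula). A stdlib-only sketch of that mapping, using the q0 value from the script output above (illustrative, not pyhf's internal code path):

```python
import math

def standard_normal_cdf(x):
    """Phi(x), computed from the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# q0-space value, as returned by infer.test_statistics.q0 above
q0 = 3.9377335959507036

# Map the test statistic into p-value space: Z = sqrt(q0), p0 = 1 - Phi(Z)
Z = math.sqrt(q0)
p0 = 1.0 - standard_normal_cdf(Z)

print(f"Z = {Z:.3f}, p0 = {p0:.4f}")
```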

kratsg commented 2 years ago

APIs is (known to be) not consistent across the asymptotic and toy based as asymptotic is returning p-values

well, it's consistent here, it's that the calc.teststatistic(test_poi) has a different meaning which is translated by the respective calculator's calc.pvalues(teststat, ...) call if that makes sense. The API is the same, but there's a few hoops/hurdles/threading to make it work.

matthewfeickert commented 2 years ago

well, it's consistent here, it's that the calc.teststatistic(test_poi) has a different meaning which is translated by the respective calculator's calc.pvalues(teststat, ...) call if that makes sense. The API is the same, but there's a few hoops/hurdles/threading to make it work.

While all very true, a user should rightly complain that the public API is too confusing as is (given the current documentation).

matthewfeickert commented 2 years ago

My main complaint at the moment is that while we make it clear that the asymptotic test stats are in -μ̂/σ space

https://github.com/scikit-hep/pyhf/blob/9fd99be886349a90e927672e950cc233fad0916c/src/pyhf/infer/calculators.py#L85-L89

we don't make this clear again in the calculator teststatistic API for the asymptotics calculator

https://github.com/scikit-hep/pyhf/blob/9fd99be886349a90e927672e950cc233fad0916c/src/pyhf/infer/calculators.py#L330-L332

or in the toy calculator (that it is different)

https://github.com/scikit-hep/pyhf/blob/9fd99be886349a90e927672e950cc233fad0916c/src/pyhf/infer/calculators.py#L922-L924

Also we could mention in EmpiricalDistribution that the test stats are distributed in different spaces as well

https://github.com/scikit-hep/pyhf/blob/9fd99be886349a90e927672e950cc233fad0916c/src/pyhf/infer/calculators.py#L526-L532

So this really is a documentation issue, but a pretty big one in my mind.

I forgot this myself, and if I can make this mistake when tired, having written some of this code, then a user definitely will.

kratsg commented 2 years ago

My main complaint at the moment is that while we make it clear that the asymptotic test stats are in -μ̂/σ space

This part is what confuses me, I think, so if you have a better grasp of it, it would be great to clarify this for the upcoming release.

matthewfeickert commented 2 years ago

This all came up as I was trying to take a stab at Issue #1712 and was trying to figure out how to have things work with either calculator type.