matthewfeickert opened this issue 2 years ago
I don't understand how the first two results are the exact same, but if this is a toy vs non-toy issue with a 4 sigma effect (which, looking at the setup, would make sense roughly), are you sure you have enough toys to evaluate this (several 10k)?
> are you sure you have enough toys to evaluate this (several 10k)?
Here I'm using the calculator API in a strange way as only 1 experiment is being evaluated, so there really isn't any pseudoexperiment generation happening. I should give a better example later.
I missed the sqrt required to go from q0 to the significance in the previous comment; this should be a 2 sigma effect of course: sqrt(2500) = 50, so a 100 event excess over the background-only expectation with negligible background uncertainty is a 100/50 = 2 sigma effect. That agrees perfectly with the first two numbers.
The last two numbers scale with the number of signal events, so these are something else.
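For concreteness, a quick check of that arithmetic as a minimal Python sketch (the 2500 event background, 100 event excess, and q0 of roughly 4 are the numbers implied by the comments above, not new results):

```python
import math

# Numbers from the discussion above: a background-only expectation of 2500
# events with negligible uncertainty and an observed excess of 100 events.
bkg = 2500
excess = 100

# Approximate expected discovery significance for a counting experiment
z_approx = excess / math.sqrt(bkg)  # 100 / 50 = 2.0

# The asymptotic relation Z = sqrt(q0) gives the same answer for q0 ~ 4,
# which is why reading q0 ~ 4 as a "4 sigma" effect overstates it.
z_from_q0 = math.sqrt(4.0)  # 2.0

print(f"s/sqrt(b) = {z_approx}, sqrt(q0) = {z_from_q0}")
```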
> The last two numbers scale with the number of signal events, so these are something else.
yeah. The fact that the calculators are going in opposite directions as the signal increases is telling me there's a problem.
At the moment this doesn't run any toys as far as I can tell, so I'm assuming this is a question about the calculators, and not about asymptotic vs toy agreement in general?
> At the moment this doesn't run any toys as far as I can tell, so I'm assuming this is a question about the calculators, and not about asymptotic vs toy agreement in general?
Yes, I didn't phrase the original text clearly as I was dumping things in for myself to clarify later.
Okay, @kratsg has pointed out that the behavior of the calculator APIs is (known to be) not consistent across the asymptotic and toy based calculators, as the asymptotic calculator is returning p-values (so cumulative distributions of test statistics) while the toy based one is returning q test stats. At the very least this needs to be made a lot more clear in the docs.
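To illustrate the distinction, here is a minimal sketch assuming the public `pyhf.infer.calculators` API; the one-bin model and counts are made up for illustration (loosely matching the numbers discussed above), not the original reproduction:

```python
import pyhf
from pyhf.infer.calculators import AsymptoticCalculator, ToyCalculator

# Illustrative one-bin model: 100 signal events on a 2500 event background
# with a nearly negligible background uncertainty
model = pyhf.simplemodels.uncorrelated_background(
    signal=[100.0], bkg=[2500.0], bkg_uncertainty=[5.0]
)
data = [2600.0] + model.config.auxdata

asymp_calc = AsymptoticCalculator(data, model, test_stat="q0")
toy_calc = ToyCalculator(data, model, test_stat="q0", ntoys=500)

# Same method name, different meaning:
# - the asymptotic calculator's teststatistic is in the shifted -muhat/sigma
#   space used by the asymptotic formulae
# - the toy calculator's teststatistic is the raw q0 value
print(asymp_calc.teststatistic(0.0))
print(toy_calc.teststatistic(0.0))
```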
> APIs is (known to be) not consistent across the asymptotic and toy based as asymptotic is returning p-values
well, it's consistent here, it's that the `calc.teststatistic(test_poi)` has a different meaning which is translated by the respective calculator's `calc.pvalues(teststat, ...)` call, if that makes sense. The API is the same, but there's a few hoops/hurdles/threading to make it work.
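In other words (a sketch of that threading, under the same assumptions and made-up model as the snippet above): whichever space `teststatistic` reports in, passing its value back through the same calculator's `distributions` and `pvalues` gives p-values that can be compared across calculators.

```python
import pyhf
from pyhf.infer.calculators import AsymptoticCalculator, ToyCalculator

model = pyhf.simplemodels.uncorrelated_background(
    signal=[100.0], bkg=[2500.0], bkg_uncertainty=[5.0]
)
data = [2600.0] + model.config.auxdata

for calc in (
    AsymptoticCalculator(data, model, test_stat="q0"),
    ToyCalculator(data, model, test_stat="q0", ntoys=500),  # few toys, just for illustration
):
    teststat = calc.teststatistic(0.0)
    sig_plus_bkg_dist, bkg_only_dist = calc.distributions(0.0)
    # pvalues() does the calculator-specific translation of teststat, so the
    # resulting p-values are comparable across calculators even though the
    # raw teststat values live in different spaces
    CLsb, CLb, CLs = calc.pvalues(teststat, sig_plus_bkg_dist, bkg_only_dist)
    print(type(calc).__name__, teststat, CLsb, CLb, CLs)
```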
> well, it's consistent here, it's that the `calc.teststatistic(test_poi)` has a different meaning which is translated by the respective calculator's `calc.pvalues(teststat, ...)` call if that makes sense. The API is the same, but there's a few hoops/hurdles/threading to make it work.
While all very true, a user should rightly complain that the public API is too confusing as is (given the current documentation).
My main complaint at the moment is that while we make it clear that the asymptotic test stats are in $-\hat{\mu}/\sigma$ space, we don't make this clear again in the calculator `teststatistic` API for the asymptotics calculator, or in the toy calculator (that it is different).
Also we could mention in `EmpiricalDistribution` that the test stats are distributed in different spaces as well.
So this really is a documentation issue, but a pretty big one in my mind.
I forgot this, and if I've written some of this code and can make this mistake when tired then I think a user definitely will.
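For reference, a small sketch of where the two spaces surface in the distribution classes (assuming `AsymptoticTestStatDistribution` and `EmpiricalDistribution` from `pyhf.infer.calculators`; the toy samples below are made up):

```python
import pyhf
from pyhf.infer.calculators import AsymptoticTestStatDistribution, EmpiricalDistribution

# The asymptotic background-only distribution is a (shifted) standard normal
# in the -muhat/sigma space, so its pvalue() is a normal survival function
asymptotic_bkg_dist = AsymptoticTestStatDistribution(0.0)
print(asymptotic_bkg_dist.pvalue(2.0))  # ~0.023

# EmpiricalDistribution is instead built directly from toy values of the
# q test statistic itself (made-up samples here, not real toys)
fake_toy_q0_values = pyhf.tensorlib.astensor([0.0, 0.1, 0.5, 1.2, 2.7, 4.1])
empirical_bkg_dist = EmpiricalDistribution(fake_toy_q0_values)
print(empirical_bkg_dist.pvalue(2.0))
```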
> My main complaint at the moment is that while we make it clear that the asymptotic test stats are in $-\hat{\mu}/\sigma$ space
this part I think is what confuses me, so if you have a better grasp, it would be great to clarify this for the upcoming release.
This all came up as I was trying to take a stab at Issue #1712 and was trying to figure out how to have things work for either calculator type.
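As an illustration of what "work for either calculator type" could look like, a hypothetical helper (`discovery_pvalues` is not existing pyhf API) that stays calculator-agnostic by always routing the test statistic back through the calculator that produced it:

```python
def discovery_pvalues(calc):
    """Hypothetical helper: p-values for a discovery test (poi_test = 0),
    agnostic to whether calc is an AsymptoticCalculator or a ToyCalculator
    constructed with test_stat="q0".
    """
    teststat = calc.teststatistic(0.0)
    sig_plus_bkg_dist, bkg_only_dist = calc.distributions(0.0)
    # The calculator translates its own teststatistic convention, so callers
    # never need to know which space the value lives in
    return calc.pvalues(teststat, sig_plus_bkg_dist, bkg_only_dist)
```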
Summary
There is a large discrepancy in the value of the discovery test statistic depending on whether it is generated via an asymptotic based or a toy based calculator.
Related Issues:
OS / Environment
Steps to Reproduce
File Upload (optional)
No response
Expected Results
The discovery test statistic would be the same regardless of calculator type.
Actual Results
pyhf Version
Code of Conduct