Closed APMDSLHC closed 1 year ago
Hi @APMDSLHC, could you look into the source of the difference, e.g. $\hat{\mu}$, $\min(-\log\mathcal{L})$, etc.? If these are the same, the discrepancy might come from the `brentq` root finding.
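To illustrate the point about the root finder: a toy sketch in pure Python (this is not SModelS or MadStats code; the Gaussian-approximation $CL_s$ curve, the bracket, and the tolerances are all invented for illustration) showing that solving $CL_s(\mu) = 0.05$ with different convergence settings can shift $\mu_{UL}$ even when the $CL_s$ curves agree exactly:

```python
import math

def cls(mu, muhat=0.3, sigma=1.0):
    """Hypothetical Gaussian-approximation CLs curve, for illustration only."""
    # survival function of a unit Gaussian evaluated at (mu - muhat) / sigma
    return 0.5 * math.erfc((mu - muhat) / (sigma * math.sqrt(2.0)))

def upper_limit(alpha=0.05, lo=0.0, hi=10.0, rtol=1e-3):
    """Bisect CLs(mu) - alpha to find mu_UL; rtol plays the role of the
    root-finder's convergence tolerance (cf. brentq's xtol/rtol)."""
    while (hi - lo) > rtol * hi:
        mid = 0.5 * (lo + hi)
        if cls(mid) > alpha:   # still excluded less than alpha -> move lower edge up
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

tight = upper_limit(rtol=1e-8)
loose = upper_limit(rtol=1e-2)
# identical CLs curve, different tolerances -> slightly different mu_UL
print(tight, loose, abs(tight - loose) / tight)
```

The shift here is small, but with different bracketing intervals, stopping criteria, or interpolation schemes between the two codes, differences at the level of a few percent are plausible without any disagreement in the underlying likelihoods.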
Hi @APMDSLHC, please note that with the latest updates you will get different values for $\mu_{UL}$ between SModelS and MadStats. The legacy code used the wrong $CL_s$ values for expected statistical testing. We now have three distinct expectation types, unified across all backends, each with well-defined differences.
System Settings
Fedora Linux 35, Python 3.9.12
Describe the bug
The SModelS `ulcomputer.getUpperLimitOnMu()` and MadStats `stat_model.computeUpperLimitOnMu()` functions do not give exactly the same results. So far the maximum difference I have found between the two outputs is around 3%, but it could be larger for other models. (The $CL_s$ values, on the other hand, seem to always be the same.)
To Reproduce
Expected behaviour
The two functions should return the same results.
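As a hedged sketch of the agreement check (the numeric values below are placeholders sized to the ~3% difference reported above, not the actual SModelS or MadStats outputs), one can flag $\mu_{UL}$ pairs whose relative difference exceeds a chosen tolerance:

```python
def relative_difference(a: float, b: float) -> float:
    """Relative difference of two values, normalised by the larger magnitude."""
    return abs(a - b) / max(abs(a), abs(b))

mu_ul_smodels = 1.000   # placeholder value
mu_ul_madstats = 1.030  # placeholder value, ~3% away as reported above

diff = relative_difference(mu_ul_smodels, mu_ul_madstats)
print(f"relative difference: {diff:.3%}")
assert diff < 0.05, "mu_UL values disagree beyond tolerance"
```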
Additional information
The patch and the signal yields were computed with SModelS (using the TChiWH_350_180_350_180.slha file), but I cannot attach them here due to the file type. The same holds for the background-only statistical model of the analysis (ATLAS-SUSY-2019-08).