Closed APMDSLHC closed 1 year ago
Hi @APMDSLHC, I modified the code but left the warnings on for now, so you will still see the warnings but should also get a result. If you like, you can change the default backend-specific settings in the combiner, e.g.
```python
import madstats

kwargs = {str(madstats.AvailableBackends.pyhf): {"iteration_threshold": 20}}
muhat_apri, nll_min_apri = combiner.maximize_likelihood(
    return_nll=True,
    allow_negative_signal=True,
    expected=madstats.ExpectationType.apriori,
    **kwargs,
)
```
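For intuition, the per-backend `kwargs` dict acts like a dispatch table: the combiner looks up each backend's name and forwards only the matching options. Here is a toy sketch of that pattern; the `maximize_likelihood` and `backends` objects below are hypothetical stand-ins, not madstats's actual internals:

```python
# Hypothetical sketch of per-backend keyword dispatch; madstats's real
# combiner internals may differ.
def maximize_likelihood(backends, **kwargs):
    results = {}
    for name, fit in backends.items():
        # Pick out the option dict registered under this backend's name,
        # falling back to the backend's defaults when none was supplied.
        backend_opts = kwargs.get(name, {})
        results[name] = fit(**backend_opts)
    return results

backends = {
    "pyhf": lambda iteration_threshold=10: f"fit with {iteration_threshold} iterations",
}
out = maximize_likelihood(backends, **{"pyhf": {"iteration_threshold": 20}})
print(out["pyhf"])  # fit with 20 iterations
```

Backends whose name does not appear in the dict simply keep their defaults.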
You can check the appropriate function's keywords to see what you can modify via `kwargs`, e.g.
```python
madstats.backends.pyhf_backend.interface.PyhfInterface.likelihood?
# Out:
# Signature:
# madstats.backends.pyhf_backend.interface.PyhfInterface.likelihood(
#     self,
#     mu: Optional[float] = 1.0,
#     expected: Optional[madstats.utils.ExpectationType] = observed,
#     allow_negative_signal: bool = True,
#     return_nll: Optional[bool] = False,
#     isAsimov: Optional[bool] = False,
#     iteration_threshold: Optional[int] = 10,
# ) -> float
# Docstring:
# Compute the likelihood of the given statistical model
# :param mu: POI (signal strength)
# :param expected: observed, expected (true, apriori) or aposteriori
# :param allow_negative_signal: if true, POI can get negative values
# :param return_nll: if true returns negative log-likelihood value
# :param isAsimov: if true, computes likelihood for Asimov data
# :param mu_lim: boundaries for mu
# :param iteration_threshold: number of iterations to be held for convergence of the fit.
# :return: (float) likelihood
# Type: function
```
Thanks, but I now have infinities for some values of mu.
When I do:

```python
with open("./madstats/test/signal_test.json", "r") as f:
    signal = json.load(f)
with open("./madstats/test/background_test.json", "r") as f:
    background = json.load(f)

stat_model = madstats.get_multi_region_statistical_model(
    analysis="atlas_susy_2018_31",
    signal=signal,
    background=background,
    xsection=0.000207244,
)

mu = np.linspace(-3, 2, 25)
nll_apriori = np.array([
    stat_model.likelihood(
        mu=m, return_nll=True, allow_negative_signal=True,
        expected=madstats.ExpectationType.apriori,
    )
    for m in mu
])
nll_observed = np.array([
    stat_model.likelihood(
        mu=m, return_nll=True, allow_negative_signal=True,
        expected=madstats.ExpectationType.observed,
    )
    for m in mu
])
muhat_obs, nll_min_obs = stat_model.maximize_likelihood(
    return_nll=True, allow_negative_signal=True,
    expected=madstats.ExpectationType.observed,
)
muhat_apri, nll_min_apri = stat_model.maximize_likelihood(
    return_nll=True, allow_negative_signal=True,
    expected=madstats.ExpectationType.apriori,
)
print(nll_apriori)
print(nll_observed)
```
I get:
```
[           inf  358.90796301            inf 2007.40776346
            inf 8467.43623774            inf 22949.41643341
            inf 37471.86283972           inf 22277.897611
            inf 3250.62924497            inf            inf
            inf            inf           inf            inf
            inf            inf           inf            inf
            inf]
[8.48776799e+02            inf 5.87005090e+03            inf
 3.03793615e+04            inf 1.12910734e+05            inf
 2.32123477e+05            inf 2.39378418e+05            inf
 8.47638456e+04            inf 1.91178743e+03 4.84127450e+01
 4.84682588e+01 4.85294076e+01 4.85958162e+01 4.86671141e+01
 4.87429484e+01 4.88229917e+01 4.89069402e+01 4.89945167e+01
 4.90854675e+01]
```
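As an aside, if you just want to inspect or plot the finite part of such a scan, you can mask the infinities with NumPy. This is a generic sketch on made-up values, not tied to madstats:

```python
import numpy as np

# Made-up NLL scan with non-converged points marked as inf.
nll = np.array([np.inf, 358.9, np.inf, 2007.4, np.inf, 8467.4])
mu = np.linspace(-3, 2, len(nll))

# Keep only the entries where the fit converged to a finite NLL.
mask = np.isfinite(nll)
mu_ok, nll_ok = mu[mask], nll[mask]
print(mu_ok.size)  # 3 finite points out of 6
```

Plotting `mu_ok` against `nll_ok` then skips the failed fits instead of breaking the curve.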
It gives the attached plot.
Hi @APMDSLHC, that is because I set the likelihood to `inf` when the fit cannot converge to a proper value within the given number of iterations. Could you try increasing `iteration_threshold`? If that doesn't fix it, we need to think of a better solution.
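The failure mode can be illustrated with a toy version of that logic; this is a hypothetical sketch, not madstats's actual optimizer. The fit returns `inf` once it exhausts its iteration budget, so a larger `iteration_threshold` gives slow-to-converge points a chance to finish:

```python
import math

def toy_fit(start, iteration_threshold=10, tol=1e-6):
    """Toy fixed-point iteration x -> cos(x); returns inf on non-convergence."""
    x = start
    for _ in range(iteration_threshold):
        x_new = math.cos(x)
        if abs(x_new - x) < tol:
            return x_new  # converged within the budget
        x = x_new
    return math.inf  # did not converge within the budget

print(toy_fit(0.5, iteration_threshold=10))   # inf: budget too small
print(toy_fit(0.5, iteration_threshold=100))  # finite: converges near 0.739
```

The same scan point can thus flip between `inf` and a finite value purely as a function of the iteration budget, which matches the alternating `inf` pattern in the output above.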
Actually, I think I found the problem, Tim; please try with the latest push.
Yep, seems to work fine, thanks.
Thanks Tim, closing the issue.
System Settings
- Fedora Linux 35
- Python 3.9.12
Describe the bug
The optimizer in combiner.maximize_likelihood() cannot reach the required precision. Sometimes it crashes, sometimes it doesn't.
To Reproduce
Expected behaviour
Finding the parameters that maximise the likelihood.
Additional information
- numpy version 1.21.6
- scipy version 1.8.0