scikit-hep / pyhf

pure-Python HistFactory implementation with tensors and autodiff
https://pyhf.readthedocs.io/
Apache License 2.0

toms748 failed with FailedMinimization or ValueError #2232

Open · takanesano opened this issue 1 year ago

takanesano commented 1 year ago

Summary

When I try to compute an upper limit, toms748 fails with either FailedMinimization: Inequality constraints incompatible or ValueError: Invalid function value: f(9.500000) -> nan.

OS / Environment

NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="http://cern.ch/linux/"
BUG_REPORT_URL="http://cern.ch/linux/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

Steps to Reproduce

import numpy as np

import pyhf
from pyhf.contrib.viz import brazil

bkg = [11153054.0, 5122485.5, 1612950.8, 623655.0, 288350.78, 133780.98, 68429.08, 29384.553, 16960.21, 7732.061]
bkg_uncertainty = [2230610.8, 1024497.1, 322590.16, 124731.0, 57670.156, 26756.197, 13685.815, 5876.9106, 3392.0422, 1546.4122]

sig1 = [1868621.6, 524542.7, 147379.1, 47125.387, 17630.66, 6192.0015, 2552.428, 1039.878, 567.20624, 94.53437] 
sig2 = [231463.62, 190822.11, 72307.73, 24744.96, 9013.512, 3147.5251, 1226.3401, 441.045, 175.23001, 120.825005]

pyhf.set_backend("numpy")

model = pyhf.simplemodels.uncorrelated_background(
    signal=sig1, bkg=bkg, bkg_uncertainty=bkg_uncertainty # To reproduce FailedMinimization
    #signal=sig2, bkg=bkg, bkg_uncertainty=bkg_uncertainty # To reproduce ValueError
)

par_bounds = model.config.suggested_bounds()
par_bounds[model.config.poi_index] = (0,500000)
init_pars = model.config.suggested_init()
init_pars[model.config.poi_index] = 0.

data = pyhf.tensorlib.astensor(bkg + model.config.auxdata)

obs_limit, exp_limit_and_bands = pyhf.infer.intervals.upper_limits.upper_limit( data, model, par_bounds=par_bounds, init_pars=init_pars )

File Upload (optional)

No response

Expected Results

Expected the call to complete and return obs_limit.

Actual Results

# In case of FailedMinimization: Inequality constraints incompatible

message: Inequality constraints incompatible
 success: False
  status: 4
     fun: 3270213.8682882776
       x: [ 9.500e+00  9.725e-03  9.725e-03  1.000e-10  1.000e-10
            1.000e-10  1.000e-10  1.000e-10  1.000e-10  1.000e-10
            1.000e-10]
     nit: 3
     jac: [ 1.244e+06  8.372e+06 -1.873e+05 -1.681e+10 -1.681e+10
           -1.681e+10 -1.681e+10 -1.681e+10 -1.681e+10 -1.681e+10
           -1.681e+10]
    nfev: 46
    njev: 3
Traceback (most recent call last):
  File "/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/pyhf/optimize/mixins.py", line 63, in _internal_minimize
    assert result.success
AssertionError
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/pyhf/optimize/mixins.py in _internal_minimize(self, func, x0, do_grad, bounds, fixed_vals, options, par_names)
     62         try:
---> 63             assert result.success
     64         except AssertionError:

AssertionError: 

During handling of the above exception, another exception occurred:

FailedMinimization                        Traceback (most recent call last)
/tmp/ipykernel_64510/3787508766.py in <module>
     12 data = pyhf.tensorlib.astensor(bkg + model.config.auxdata)
     13 
---> 14 obs_limit, exp_limit_and_bands = pyhf.infer.intervals.upper_limits.upper_limit( data, model, par_bounds=par_bounds, init_pars=init_pars )

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/pyhf/infer/intervals/upper_limits.py in upper_limit(data, model, scan, level, return_results, **hypotest_kwargs)
    257         model.config.par_slice(model.config.poi_name).start
    258     ]
--> 259     obs_limit, exp_limit, results = toms748_scan(
    260         data,
    261         model,

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/pyhf/infer/intervals/upper_limits.py in toms748_scan(data, model, bounds_low, bounds_up, level, atol, rtol, from_upper_limit_fn, **hypotest_kwargs)
    128     tb, _ = get_backend()
    129     obs = tb.astensor(
--> 130         toms748(f, bounds_low, bounds_up, args=(level, 0), k=2, xtol=atol, rtol=rtol)
    131     )
    132     exp = [

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/scipy/optimize/_zeros_py.py in toms748(f, a, b, args, k, xtol, rtol, maxiter, full_output, disp)
   1372         args = (args,)
   1373     solver = TOMS748Solver()
-> 1374     result = solver.solve(f, a, b, args=args, k=k, xtol=xtol, rtol=rtol,
   1375                           maxiter=maxiter, disp=disp)
   1376     x, function_calls, iterations, flag = result

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/scipy/optimize/_zeros_py.py in solve(self, f, a, b, args, xtol, rtol, k, maxiter, disp)
   1227         if not self.ab[0] < c < self.ab[1]:
   1228             c = sum(self.ab) / 2.0
-> 1229         fc = self._callf(c)
   1230         if fc == 0:
   1231             return self.get_result(c)

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/scipy/optimize/_zeros_py.py in _callf(self, x, error)
   1081     def _callf(self, x, error=True):
   1082         """Call the user-supplied function, update book-keeping"""
-> 1083         fx = self.f(x, *self.args)
   1084         self.function_calls += 1
   1085         if not np.isfinite(fx) and error:

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/pyhf/infer/intervals/upper_limits.py in f(poi, level, limit)
     93         # else: expected
     94         return (
---> 95             f_cached(poi)[0] - level
     96             if limit == 0
     97             else f_cached(poi)[1][limit - 1] - level

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/pyhf/infer/intervals/upper_limits.py in f_cached(poi)
     79     def f_cached(poi):
     80         if poi not in cache:
---> 81             cache[poi] = hypotest(
     82                 poi,
     83                 data,

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/pyhf/infer/__init__.py in hypotest(poi_test, data, pdf, init_pars, par_bounds, fixed_params, calctype, return_tail_probs, return_expected, return_expected_set, return_calculator, **kwargs)
    167     )
    168 
--> 169     teststat = calc.teststatistic(poi_test)
    170     sig_plus_bkg_distribution, bkg_only_distribution = calc.distributions(poi_test)
    171 

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/pyhf/infer/calculators.py in teststatistic(self, poi_test)
    365         teststat_func = utils.get_test_stat(self.test_stat)
    366 
--> 367         qmu_v, (mubhathat, muhatbhat) = teststat_func(
    368             poi_test,
    369             self.data,

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/pyhf/infer/test_statistics.py in qmu_tilde(mu, data, pdf, init_pars, par_bounds, fixed_params, return_fitted_pars)
    232             + 'Use the qmu test statistic (pyhf.infer.test_statistics.qmu) instead.'
    233         )
--> 234     return _qmu_like(
    235         mu,
    236         data,

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/pyhf/infer/test_statistics.py in _qmu_like(mu, data, pdf, init_pars, par_bounds, fixed_params, return_fitted_pars)
     25     """
     26     tensorlib, optimizer = get_backend()
---> 27     tmu_like_stat, (mubhathat, muhatbhat) = _tmu_like(
     28         mu, data, pdf, init_pars, par_bounds, fixed_params, return_fitted_pars=True
     29     )

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/pyhf/infer/test_statistics.py in _tmu_like(mu, data, pdf, init_pars, par_bounds, fixed_params, return_fitted_pars)
     46     """
     47     tensorlib, optimizer = get_backend()
---> 48     mubhathat, fixed_poi_fit_lhood_val = fixed_poi_fit(
     49         mu, data, pdf, init_pars, par_bounds, fixed_params, return_fitted_val=True
     50     )

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/pyhf/infer/mle.py in fixed_poi_fit(poi_val, data, pdf, init_pars, par_bounds, fixed_params, **kwargs)
    200     fixed_params[pdf.config.poi_index] = True
    201 
--> 202     return fit(data, pdf, init_pars, par_bounds, fixed_params, **kwargs)

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/pyhf/infer/mle.py in fit(data, pdf, init_pars, par_bounds, fixed_params, **kwargs)
    129     ]
    130 
--> 131     return opt.minimize(
    132         twice_nll, data, pdf, init_pars, par_bounds, fixed_vals, **kwargs
    133     )

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/pyhf/optimize/mixins.py in minimize(self, objective, data, pdf, init_pars, par_bounds, fixed_vals, return_fitted_val, return_result_obj, return_uncertainties, return_correlations, do_grad, do_stitch, **kwargs)
    191             par_names = [name for name in par_names if name]
    192 
--> 193         result = self._internal_minimize(
    194             **minimizer_kwargs, options=kwargs, par_names=par_names
    195         )

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/pyhf/optimize/mixins.py in _internal_minimize(self, func, x0, do_grad, bounds, fixed_vals, options, par_names)
     64         except AssertionError:
     65             log.error(result, exc_info=True)
---> 66             raise exceptions.FailedMinimization(result)
     67         return result
     68 

FailedMinimization: Inequality constraints incompatible

# In case of ValueError

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/tmp/ipykernel_64510/2529070479.py in <module>
     12 data = pyhf.tensorlib.astensor(bkg + model.config.auxdata)
     13 
---> 14 obs_limit, exp_limit_and_bands = pyhf.infer.intervals.upper_limits.upper_limit( data, model, par_bounds=par_bounds, init_pars=init_pars )

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/pyhf/infer/intervals/upper_limits.py in upper_limit(data, model, scan, level, return_results, **hypotest_kwargs)
    257         model.config.par_slice(model.config.poi_name).start
    258     ]
--> 259     obs_limit, exp_limit, results = toms748_scan(
    260         data,
    261         model,

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/pyhf/infer/intervals/upper_limits.py in toms748_scan(data, model, bounds_low, bounds_up, level, atol, rtol, from_upper_limit_fn, **hypotest_kwargs)
    128     tb, _ = get_backend()
    129     obs = tb.astensor(
--> 130         toms748(f, bounds_low, bounds_up, args=(level, 0), k=2, xtol=atol, rtol=rtol)
    131     )
    132     exp = [

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/scipy/optimize/_zeros_py.py in toms748(f, a, b, args, k, xtol, rtol, maxiter, full_output, disp)
   1372         args = (args,)
   1373     solver = TOMS748Solver()
-> 1374     result = solver.solve(f, a, b, args=args, k=k, xtol=xtol, rtol=rtol,
   1375                           maxiter=maxiter, disp=disp)
   1376     x, function_calls, iterations, flag = result

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/scipy/optimize/_zeros_py.py in solve(self, f, a, b, args, xtol, rtol, k, maxiter, disp)
   1227         if not self.ab[0] < c < self.ab[1]:
   1228             c = sum(self.ab) / 2.0
-> 1229         fc = self._callf(c)
   1230         if fc == 0:
   1231             return self.get_result(c)

/cvmfs/sft-nightlies.cern.ch/lcg/views/dev4cuda/Thu/x86_64-centos7-gcc11-opt/lib/python3.9/site-packages/scipy/optimize/_zeros_py.py in _callf(self, x, error)
   1084         self.function_calls += 1
   1085         if not np.isfinite(fx) and error:
-> 1086             raise ValueError("Invalid function value: f(%f) -> %s " % (x, fx))
   1087         return fx
   1088 

ValueError: Invalid function value: f(9.500000) -> nan

pyhf Version

pyhf, version 0.7.2

kratsg commented 1 year ago

par_bounds[model.config.poi_index] = (0,500000)

You might want to rescale your signal and background so that your POI bounds don't need to be as large as they are. These sorts of fits break down when the parameters span too many orders of magnitude.
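
For concreteness, here is a minimal sketch of the rescaling idea (illustrative only, not from this thread): shrink the signal template by a common factor so the POI and its bounds stay within a few orders of magnitude of one, then convert the limit back to the original normalization. The factor 1e-2 and the (0, 100) bounds below are assumptions that may still need tuning for this model.

import numpy as np
import pyhf

pyhf.set_backend("numpy")

# same yields as in the reproducer above
bkg = [11153054.0, 5122485.5, 1612950.8, 623655.0, 288350.78, 133780.98, 68429.08, 29384.553, 16960.21, 7732.061]
bkg_uncertainty = [2230610.8, 1024497.1, 322590.16, 124731.0, 57670.156, 26756.197, 13685.815, 5876.9106, 3392.0422, 1546.4122]
sig1 = [1868621.6, 524542.7, 147379.1, 47125.387, 17630.66, 6192.0015, 2552.428, 1039.878, 567.20624, 94.53437]

rescale = 1e-2  # example choice: scale the signal template down
sig_scaled = list(np.asarray(sig1) * rescale)

model = pyhf.simplemodels.uncorrelated_background(
    signal=sig_scaled, bkg=bkg, bkg_uncertainty=bkg_uncertainty
)

par_bounds = model.config.suggested_bounds()
par_bounds[model.config.poi_index] = (0, 100)  # bounds for the rescaled POI (assumed)
init_pars = model.config.suggested_init()
init_pars[model.config.poi_index] = 0.0

data = pyhf.tensorlib.astensor(bkg + model.config.auxdata)

obs_limit, exp_limits = pyhf.infer.intervals.upper_limits.upper_limit(
    data, model, par_bounds=par_bounds, init_pars=init_pars
)
# mu_scaled * (rescale * sig1) == mu_original * sig1, so convert back:
obs_limit_original = obs_limit * rescale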

takanesano commented 1 year ago

Thank you for the suggestion. Could you explain in more detail how to rescale sig/bkg to avoid this? Does it simply mean multiplying all of sig, bkg, and data by some small number?

matthewfeickert commented 1 year ago

Could you explain in more detail how to rescale sig/bkg to avoid this?

@takanesano I haven't had time to look at this Issue in full yet, but to address this last question now: while I think(?) we have a GitHub Discussion on this somewhere, here's at least an old example from Stack Overflow (from before we transitioned questions to GitHub Discussions): Fit convergence failure in pyhf for small signal model (which was also Issue #762). The version of pyhf used in that example is old, so the APIs differ from v0.7.x, but I think it demonstrates what you need.

takanesano commented 1 year ago

@matthewfeickert Thank you so much for the information! Rescaling the signal by a factor of 1e-2 worked for the case of sig1. My next problem is that, when trying to find a scale factor that makes it work, the window of viable factors is very narrow.

The code below is an example.

import numpy as np

import pyhf
from pyhf.contrib.viz import brazil

bkg = [11153054.0, 5122485.5, 1612950.8, 623655.0, 288350.78, 133780.98, 68429.08, 29384.553, 16960.21, 7732.061]
bkg_uncertainty = [2230610.8, 1024497.1, 322590.16, 124731.0, 57670.156, 26756.197, 13685.815, 5876.9106, 3392.0422, 1546.4122]

sig3 = [887.22363, 4129.3247, 9386.207, 6953.6787, 2835.5503, 1102.8623, 437.77795, 196.58675, 81.46747, 30.449577] 

pyhf.set_backend("numpy")

rescale = 5.0
sig = list(np.array(sig3) * rescale)  # rescale the signal template

model = pyhf.simplemodels.uncorrelated_background(
    signal=sig, bkg=bkg, bkg_uncertainty=bkg_uncertainty
)

par_bounds = model.config.suggested_bounds()
par_bounds[model.config.poi_index] = (0,100)
init_pars = model.config.suggested_init()
init_pars[model.config.poi_index] = 0.

data = pyhf.tensorlib.astensor(bkg + model.config.auxdata)

obs_limit, exp_limit_and_bands = pyhf.infer.intervals.upper_limits.upper_limit( data, model, par_bounds=par_bounds, init_pars=init_pars )

This returns

ValueError: Invalid function value: f(10.000000) -> nan

As Fit convergence failure in pyhf for small signal model suggests, I've checked CLs for µ=1 with the scale factor I'm using here.

Observed CLs for µ=1: 0.5915001216
-----
Expected (-2 σ) CLs for µ=1: 0.246
Expected (-1 σ) CLs for µ=1: 0.392
Expected        CLs for µ=1: 0.592
Expected (1 σ) CLs for µ=1: 0.806
Expected (2 σ) CLs for µ=1: 0.950

I think this is a reasonable value, but I couldn't find any working scale factor while scanning rescale from 1 to 10 in steps of 1.
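
One way to map out which factors converge is to loop over candidates and catch the two failure modes seen above. A rough sketch (the helper name, the factor grid, and the (0, 100) bounds are illustrative assumptions, not part of the pyhf API):

import numpy as np
import pyhf

def scan_rescale_factors(sig, bkg, bkg_uncertainty, factors, poi_bounds=(0, 100)):
    """Try the upper-limit computation for several signal rescale factors and
    record which ones converge (numpy backend assumed, setup as in the snippets above)."""
    results = {}
    for factor in factors:
        model = pyhf.simplemodels.uncorrelated_background(
            signal=list(np.asarray(sig) * factor),
            bkg=bkg,
            bkg_uncertainty=bkg_uncertainty,
        )
        par_bounds = model.config.suggested_bounds()
        par_bounds[model.config.poi_index] = poi_bounds
        init_pars = model.config.suggested_init()
        init_pars[model.config.poi_index] = 0.0
        data = pyhf.tensorlib.astensor(bkg + model.config.auxdata)
        try:
            obs_limit, _ = pyhf.infer.intervals.upper_limits.upper_limit(
                data, model, par_bounds=par_bounds, init_pars=init_pars
            )
            # convert the limit back to the original signal normalization
            results[factor] = float(obs_limit) * factor
        except (pyhf.exceptions.FailedMinimization, ValueError) as err:
            results[factor] = f"failed: {err}"
    return results

# e.g. scan_rescale_factors(sig3, bkg, bkg_uncertainty, np.linspace(0.01, 10.0, 20))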

alexander-held commented 1 year ago

This is an interesting example where minimization can fail for what looks like a rather simple model. I simplified it a bit further, down to two bins:

import numpy as np
import pyhf

bkg = [5000000.0, 1612950.8]
bkg_uncertainty = [1000000.1, 322590.16]
sig = [4000.0, 9386.207]

model = pyhf.simplemodels.uncorrelated_background(
    signal=sig, bkg=bkg, bkg_uncertainty=bkg_uncertainty
)

par_bounds = model.config.suggested_bounds()
par_bounds[model.config.poi_index] = (0, 100)

data = bkg + model.config.auxdata

# hypotest
res = pyhf.infer.hypotest(11.843640283180768, data, model, par_bounds=par_bounds)
print(np.isnan(res))  # NaN with scipy, Minuit gives non-NaN

res = pyhf.infer.hypotest(11.843640, data, model, par_bounds=par_bounds)
print(np.isnan(res))  # not NaN with scipy

# maximum likelihood estimates
pyhf.set_backend("numpy", "minuit")
# default Minuit is far off from the minimum (POI=0 is the MLE here)
print(pyhf.infer.mle.fit(data, model, par_bounds=par_bounds))
# tuning helps, but cannot lower tolerance further (cannot reach EDM threshold)
print(pyhf.infer.mle.fit(data, model, par_bounds=par_bounds, tolerance=1e-5, strategy=2))

output:

[...]/pyhf/src/pyhf/infer/calculators.py:467: RuntimeWarning: invalid value encountered in divide
  CLs = tensorlib.astensor(CLsb / CLb)
True
False
[0.62423955 0.99950048 0.99636748]
[0.03494499 0.99997209 0.99979678]

Some observations:

- Different initial parameter settings might help here; I imagine the difficulty might be related to the mu=1 initial setting already providing a very good fit to begin with.
- Another issue is that the best-fit value for mu is at a boundary; relaxing the boundary also helps a bit, but does not seem sufficient either.
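
A minimal sketch of trying those two tweaks on the two-bin model above (the specific starting value and relaxed lower bound are illustrative assumptions, not values tested in this thread):

import pyhf

bkg = [5000000.0, 1612950.8]
bkg_uncertainty = [1000000.1, 322590.16]
sig = [4000.0, 9386.207]

model = pyhf.simplemodels.uncorrelated_background(
    signal=sig, bkg=bkg, bkg_uncertainty=bkg_uncertainty
)
data = bkg + model.config.auxdata

par_bounds = model.config.suggested_bounds()
par_bounds[model.config.poi_index] = (-10, 100)  # relax the lower boundary at 0 (assumed range)

init_pars = model.config.suggested_init()
init_pars[model.config.poi_index] = 0.0  # start away from the mu=1 default

pyhf.set_backend("numpy", "minuit")
print(pyhf.infer.mle.fit(data, model, init_pars=init_pars, par_bounds=par_bounds))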

alexander-held commented 1 year ago

From some further experimentation, I think the main problems are that the POI bounds are too small given the sensitivity and that the default tolerance is too high. The attached example sets the bounds to [-500, 500], and the Hessian estimate is still very bad. This is with Minuit, and even with strategy=2 I cannot push the tolerance low enough to reliably get convergence, depending on the initial parameter settings (the scan itself is easier since the POI is fixed there).
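
For reference, a compact version of the settings described here, reusing the two-bin model from the earlier snippet (whether this converges reliably is exactly the open question):

import pyhf

bkg = [5000000.0, 1612950.8]
bkg_uncertainty = [1000000.1, 322590.16]
sig = [4000.0, 9386.207]

model = pyhf.simplemodels.uncorrelated_background(
    signal=sig, bkg=bkg, bkg_uncertainty=bkg_uncertainty
)
data = bkg + model.config.auxdata

par_bounds = model.config.suggested_bounds()
par_bounds[model.config.poi_index] = (-500, 500)  # much wider POI bounds

pyhf.set_backend("numpy", "minuit")
# tighter tolerance and strategy=2, as in the earlier snippet
print(pyhf.infer.mle.fit(data, model, par_bounds=par_bounds, tolerance=1e-5, strategy=2))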

[Attached plot: scan_mu]