Open: pfackeldey opened this issue 2 months ago
Hi, I have considered a similar feature every now and again, but there never seemed to be a good enough use case to justify putting in the time.
`evermore` sounds like an interesting library; I have considered writing something like that myself (as I am also a fan of JAX).
If you have differentiable likelihoods in JAX, I don't see why you would need `iminuit`. You can use the `optax` minimizers, and you can compute uncertainties as well, at least the analog of the HESSE algorithm in MINUIT: compute the Hessian at the minimum with JAX and invert it. If your original function is a negative log-likelihood, this produces the covariance matrix of the parameters.
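For concreteness, a minimal sketch of that recipe (the toy Gaussian NLL and the data are illustrative assumptions; any differentiable NLL works the same way):

```python
import jax
import jax.numpy as jnp

# Toy data and a toy negative log-likelihood: a Gaussian with
# unknown mean mu and width sigma (illustrative stand-in for a real model).
data = jnp.array([1.2, 0.9, 1.5, 1.1])

def nll(params):
    mu, sigma = params
    return jnp.sum(0.5 * ((data - mu) / sigma) ** 2 + jnp.log(sigma))

# For this toy model the MLE is known in closed form; in general,
# `params_min` would come from any first-order optimizer (e.g. optax).
params_min = jnp.array([data.mean(), data.std()])

# HESSE analog: invert the Hessian of the NLL at the minimum
# to obtain the covariance matrix of the parameters.
cov = jnp.linalg.inv(jax.hessian(nll)(params_min))
errors = jnp.sqrt(jnp.diag(cov))
```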
Hi @HDembinski,
thank you very much for your reply :)
Indeed, it is possible to use a first-order minimizer and then compute the Hessian at the minimum with JAX and invert it. However, I am currently comparing `evermore`'s features with those of similar tools. These tools use Minuit exclusively for minimization, so a fair comparison of, e.g., a likelihood profile between `evermore` and these tools should use the same minimizer.
Apart from that, I have received general feedback that people like to use Minuit because of its robustness and its potential to reach the minimum faster than first-order minimizers.
These two points are my main motivation to use `iminuit` with `evermore`.
Best, Peter
PS: But I fully agree with you... I personally have had a pretty robust and fast experience with `optax.sgd` so far, even for HEP-like fitting problems.
Regarding Barlow-Beeston, I recommend having a look at our new method if you are not already aware of it. It is implemented as the default in the `Template` class, and we have also published a paper about it.
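For reference, a short usage sketch of `iminuit.cost.Template` (the counts, edges, and template shapes are made-up numbers; see the iminuit documentation for the full signature):

```python
import numpy as np
from iminuit import Minuit
from iminuit.cost import Template

# Toy binned data: observed counts and bin edges (illustrative numbers).
xe = np.array([0.0, 1.0, 2.0, 3.0])
n = np.array([23, 42, 15])

# Two templates with limited MC statistics, e.g. signal and background.
signal = np.array([5.0, 20.0, 4.0])
background = np.array([20.0, 20.0, 10.0])

# Template accounts for the statistical uncertainty of the templates;
# the new method is used by default.
cost = Template(n, xe, (signal, background))
m = Minuit(cost, 1.0, 1.0)  # starting values for the component yields
m.migrad()
```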
Dear iminuit developers,
Thank you very much for this great package!
I am the author of `evermore`, a pure JAX-based package to build binned likelihoods in HEP. Here, one can construct arbitrary PyTrees of nuisance parameters and use them in a loss function. It is highly efficient to be able to group parameters into arrays to modify bin contents in a vectorized fashion (especially for Barlow-Beeston[-lite]). Users have some parameters that are just single values (e.g. a single cross-section uncertainty) and some that are represented as arrays (e.g. Barlow-Beeston statistical uncertainties).
Thus, I'd like to ask whether it would be possible to add support for mixtures of differently sized arrays (and floats), e.g.:
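Something along these lines; note this is a purely hypothetical sketch of the requested API, not something `iminuit` currently supports:

```python
import numpy as np
from iminuit import Minuit

def loss(x, c):
    # x: array of shape (3,), c: a single float -- mixed-size parameters
    return np.sum((x - 1.0) ** 2) + (c - 2.0) ** 2

# Hypothetical: pass a differently sized array and a float as start values.
m = Minuit(loss, x=np.zeros(3), c=0.0)
m.migrad()
```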
This is particularly handy when working with JAX loss functions, where the parameters (`x`, `c`) are in practice often a nested PyTree of `jax.Array`s of arbitrary size.
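A minimal sketch of such a loss function (the model and parameter values are illustrative assumptions):

```python
import jax
import jax.numpy as jnp

# Nested-PyTree parameters: a dictionary whose leaves are jax.Arrays
# of different sizes, as toy stand-ins for real nuisance parameters.
params = {
    "x": jnp.zeros(3),    # e.g. per-bin statistical nuisances
    "c": jnp.array(0.0),  # e.g. a single cross-section nuisance
}

@jax.jit
def loss(p):
    # Toy quadratic loss over the PyTree leaves.
    return jnp.sum((p["x"] - 1.0) ** 2) + (p["c"] - 2.0) ** 2
```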
In this example, `params` is just a simple dictionary, but this would also work with arbitrary (nested) PyTree structures if `iminuit` could support arrays of arbitrary size for the loss function kwargs.

Best, Peter
PS: JAX optimisers, e.g. `optax`, can minimise directly with respect to these PyTree structures. The minimiser returns the original PyTree structure, but its leaves contain the fitted parameter values. Here, one does not even need the `wrapped_fun` step to convert a PyTree to a list of arguments.
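To illustrate, a minimal `optax` loop over the toy PyTree loss from above (optimizer choice and step count are arbitrary):

```python
import jax
import jax.numpy as jnp
import optax

# Same toy PyTree and loss as in the sketch above.
params = {"x": jnp.zeros(3), "c": jnp.array(0.0)}

def loss(p):
    return jnp.sum((p["x"] - 1.0) ** 2) + (p["c"] - 2.0) ** 2

opt = optax.sgd(learning_rate=0.1)
state = opt.init(params)

for _ in range(200):
    grads = jax.grad(loss)(params)
    updates, state = opt.update(grads, state)
    params = optax.apply_updates(params, updates)

# `params` keeps its original PyTree structure; its leaves now hold
# the fitted values (x -> 1.0, c -> 2.0).
```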