neos
neural end-to-end-optimised summary statistics
arxiv.org/abs/2203.05570

About

neos leverages the shoulders of giants (jax and pyhf) to differentiate through a high-energy physics analysis workflow, including the construction of the frequentist profile likelihood.

If you're more of a video person, see this talk given by Nathan on the broader topic of differentiable programming in high-energy physics, which also covers neos.
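To make "differentiate through the construction of the frequentist profile likelihood" concrete, here is a minimal, pure-JAX sketch (not the pyhf/neos implementation; all numbers and names are illustrative assumptions) of a one-bin counting experiment with a Gaussian-constrained background. In this toy case the profiled nuisance parameter has a closed form, so the profile likelihood ratio test statistic is an ordinary differentiable function of the signal yield s -- the quantity an upstream neural network would control:

import jax
import jax.numpy as jnp

def nll(mu, b, s, n, b0, sigma):
    # negative log-likelihood of Pois(n | mu*s + b) * Gauss(b0 | b, sigma),
    # dropping parameter-independent constants
    lam = mu * s + b
    return lam - n * jnp.log(lam) + 0.5 * ((b - b0) / sigma) ** 2

def profiled_b(mu, s, n, b0, sigma):
    # closed-form argmin over b: root of a quadratic in lam = mu*s + b
    c = sigma**2 - mu * s - b0
    return 0.5 * (-c + jnp.sqrt(c**2 + 4.0 * n * sigma**2)) - mu * s

def q_mu(s, mu=1.0, n=12.0, b0=10.0, sigma=2.0):
    # profile likelihood ratio test statistic for signal strength mu
    # (boundary conditions of the full q~_mu are omitted for brevity)
    b_prof = profiled_b(mu, s, n, b0, sigma)
    mu_hat, b_hat = (n - b0) / s, b0   # unconditional MLE for this toy model
    return 2.0 * (nll(mu, b_prof, s, n, b0, sigma) - nll(mu_hat, b_hat, s, n, b0, sigma))

print(q_mu(3.0))            # value of the test statistic at signal yield s = 3
print(jax.grad(q_mu)(3.0))  # d q_mu / d s: the gradient flows through the profiling

In neos itself the model comes from pyhf and the profiling is done numerically, but the end-to-end gradient works in the same way.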

You want to apply this to your analysis?

A few things need to happen first -- I wrote them up; click here for more info.

Have questions?

Do you want to chat about neos? Join us in Mattermost.

Cite

Please cite our newly released paper:

@article{neos,
    Author = {Nathan Simpson and Lukas Heinrich},
    Title = {neos: End-to-End-Optimised Summary Statistics for High Energy Physics},
    Year = {2022},
    Eprint = {arXiv:2203.05570},
    doi = {10.48550/arXiv.2203.05570},
    url = {https://doi.org/10.48550/arXiv.2203.05570}
}

Example usage -- train a neural network to optimize an expected p-value

Setup

In a Python 3 environment, run the following:

pip install --upgrade pip setuptools wheel
pip install neos
pip install git+https://github.com/scikit-hep/pyhf.git@make_difffable_model_ctor

With this, you should be able to run the demo notebook demo.ipynb on your machine :)
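As a quick sanity check of the install (an optional suggestion, not part of the official instructions), the following should run without errors; pyhf.set_backend("jax") switches pyhf to its jax backend so that gradients can flow through the statistical model:

import jax
import pyhf
import neos  # just checking that the package imports

pyhf.set_backend("jax")  # use pyhf's differentiable jax backend
print("jax", jax.__version__, "| pyhf", pyhf.__version__)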

The workflow is as follows: simulated events are passed through a neural network to produce a one-dimensional observable, which is binned into a summary statistic histogram (made differentiable via a kernel density estimate); pyhf then builds a statistical model from those yields and computes an expected p-value via the profile likelihood.

This counts as one forward pass of the workflow -- we then optimise the neural network by gradient descent, backpropagating through the whole analysis!
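To make "backpropagating through the whole analysis" concrete, here is a minimal, self-contained sketch of the same idea in plain JAX. It is not the neos API: a tiny network maps toy events to an observable, a sigmoid-smoothed histogram stands in for the kernel-density binning, and a simple Asimov significance stands in for the pyhf-based expected p-value. All names, shapes, and hyperparameters are illustrative assumptions.

import jax
import jax.numpy as jnp

def init_params(key, n_in=2, n_hidden=8):
    # toy two-layer network; sizes are arbitrary choices for this sketch
    k1, k2 = jax.random.split(key)
    return {
        "w1": 0.1 * jax.random.normal(k1, (n_in, n_hidden)),
        "b1": jnp.zeros(n_hidden),
        "w2": 0.1 * jax.random.normal(k2, (n_hidden, 1)),
        "b2": jnp.zeros(1),
    }

def network(params, x):
    # maps each event to a single observable in (0, 1)
    h = jnp.tanh(x @ params["w1"] + params["b1"])
    return jax.nn.sigmoid(h @ params["w2"] + params["b2"]).squeeze(-1)

def soft_hist(vals, edges, bandwidth=0.05):
    # differentiable stand-in for a histogram: each event contributes a
    # sigmoid-smoothed indicator to every bin, so gradients flow through the binning
    lo, hi = edges[:-1], edges[1:]
    weights = jax.nn.sigmoid((vals[:, None] - lo) / bandwidth) - jax.nn.sigmoid(
        (vals[:, None] - hi) / bandwidth
    )
    return weights.sum(axis=0)

def loss(params, sig, bkg, edges):
    s = soft_hist(network(params, sig), edges)
    b = soft_hist(network(params, bkg), edges)
    # per-bin Asimov significance combined in quadrature -- a simple proxy
    # for the pyhf profile-likelihood p-value that neos actually optimises
    z2 = jnp.clip(2.0 * ((s + b) * jnp.log1p(s / (b + 1e-3)) - s), 0.0)
    return -jnp.sqrt(jnp.sum(z2) + 1e-9)

key, k_init, k_sig, k_bkg = jax.random.split(jax.random.PRNGKey(0), 4)
sig = jax.random.normal(k_sig, (500, 2)) + 1.0   # toy "signal" events
bkg = jax.random.normal(k_bkg, (500, 2)) - 1.0   # toy "background" events
edges = jnp.linspace(0.0, 1.0, 5)
params = init_params(k_init)

grad_fn = jax.jit(jax.grad(loss))  # gradient of the sensitivity proxy w.r.t. the network
for step in range(200):
    grads = grad_fn(params, sig, bkg, edges)
    params = jax.tree_util.tree_map(lambda p, g: p - 0.05 * g, params, grads)

print("final (negative) significance proxy:", loss(params, sig, bkg, edges))

In the real workflow the loss is the expected p-value computed by pyhf from the profile likelihood, but the training loop has the same shape: one differentiable forward pass through the analysis, then a gradient step on the network.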

Thanks

A big thanks to the teams behind jax, fax, jaxopt and pyhf for their software and support.