iris-hep / project-milestones

IRIS-HEP project milestones

Translate analysis examples into new specifications, provide feedback, iterating as necessary #10

BenGalewsky opened this issue 5 years ago

BenGalewsky commented 5 years ago

We take the analysis examples documented in #3, and start re-implementing them in our new specifications. We aim to identify limitations and issues throughout this process, and iterate on them. New implementations are guided by the requirements of the user stories developed in #1.

Assumptions

Acceptance criteria

cranmer commented 4 years ago

In terms of which analysis we use for the template fit, we could also use ATLAS multi-b which satisfies the reinterpretation example.

alexander-held commented 4 years ago

Template-based fit

The broad scope is to efficiently produce template histograms, post-process them, and build a workspace. The workspace is then used for inference, and interfaces to visualization tools exist to implement the user stories described in #1. Instead of monolithic frameworks, we envision a modular approach with well-defined interfaces.
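
As a rough end-to-end sketch of this modular picture (assuming a recent pyhf release; the template contents below are made up), template histograms feed a statistical model, which is then used for inference:

```python
import pyhf

# Two-bin signal and background templates, produced upstream by some
# histogramming step; the numbers here are purely illustrative.
model = pyhf.simplemodels.uncorrelated_background(
    signal=[5.0, 10.0], bkg=[50.0, 60.0], bkg_uncertainty=[7.0, 8.0]
)
observations = [53.0, 65.0]
data = observations + model.config.auxdata

# Hypothesis test for signal strength mu=1, returning the observed CLs.
cls_obs = pyhf.infer.hypotest(1.0, data, model, test_stat="qtilde")
print(f"observed CLs for mu=1: {float(cls_obs):.3f}")
```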

Production of template distributions

The FAST framework seems to be a good fit for the production of input template histograms. It is a declarative framework, which can provide the power of coffea without the user having to write dedicated code. I was in contact with a developer at CHEP2019; they have already used FAST to produce inputs to the CMS Combine framework for statistical analysis. Two main points requiring further investigation were identified.
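
For context, here is a rough sketch of what a single template-production step automates, written directly with uproot and boost_histogram rather than with FAST's declarative configuration. The file, tree, and branch names (`ntuple.root`, `events`, `mjj`, `weight`) are hypothetical:

```python
import boost_histogram as bh
import uproot

# Read the branches needed for one template; file, tree, and branch
# names are hypothetical stand-ins for the analysis ntuples.
tree = uproot.open("ntuple.root")["events"]
arrays = tree.arrays(["mjj", "weight"], library="np")

# Weighted one-dimensional template histogram.
hist = bh.Histogram(
    bh.axis.Regular(20, 0.0, 2000.0), storage=bh.storage.Weight()
)
hist.fill(arrays["mjj"], weight=arrays["weight"])

# The per-bin yields become the "data" entries of a sample in the
# statistical model built downstream.
print(hist.view()["value"])
```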

The likelihood function

pyhf is the natural choice here, already shown to reproduce ROOT-based results in ATL-PHYS-PUB-2019-029; see also the CHEP2019 talk on pyhf from @matthewfeickert. One point of feedback concerns the ability to provide expressions for normalization factors, a feature supported by RooFit. An example RooFit workspace using this feature was built from inputs in alexander-held/template_fit_workflows, to iterate on how to approach this with pyhf. More workflow examples are detailed in a comment below.
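
To make the normalization-factor use case concrete, below is a minimal sketch (assuming a recent pyhf release; all sample names and numbers are illustrative) in which free normfactor modifiers scale the signal and background templates. The feature discussed above would additionally allow such a factor to be an expression of other parameters, which plain normfactor modifiers do not provide:

```python
import pyhf

# Single-channel model in which free normalization factors scale the
# signal and background templates; names and numbers are illustrative.
spec = {
    "channels": [
        {
            "name": "signal_region",
            "samples": [
                {
                    "name": "signal",
                    "data": [5.0, 10.0],
                    "modifiers": [
                        {"name": "mu_sig", "type": "normfactor", "data": None}
                    ],
                },
                {
                    "name": "background",
                    "data": [50.0, 60.0],
                    "modifiers": [
                        {"name": "mu_bkg", "type": "normfactor", "data": None}
                    ],
                },
            ],
        }
    ]
}
model = pyhf.Model(spec, poi_name="mu_sig")
data = [53.0, 68.0] + model.config.auxdata  # no auxiliary data here

# Maximum likelihood fit of both normalization factors.
best_fit = pyhf.infer.mle.fit(data, model)
print(dict(zip(model.config.par_names, best_fit)))
```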

matthewfeickert commented 4 years ago

> In terms of which analysis we use for the template fit, we could also use ATLAS multi-b which satisfies the reinterpretation example.

As this would be useful to have in addition to @alexander-held's thesis analysis (described above), it would be good to work with @kratsg on how to move forward with the multi-b. In addition to ongoing chats, @matthewfeickert and @kratsg will both be at the US ATLAS Hadronic Final State Forum 2019, which offers a natural place to work on this.

matthewfeickert commented 4 years ago

Related to the visualization work and user stories that @alexander-held has been doing, there is ongoing work in pyhf to add example plotting code and to establish (something along the lines of) pyhf.contrib.plotting for things like pull plots and ranking plots.
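
As a flavor of what such plotting utilities could look like, here is a minimal pull-plot sketch using matplotlib. It assumes the post-fit nuisance-parameter values and uncertainties are already available; the parameter names and numbers below are made up:

```python
import matplotlib.pyplot as plt
import numpy as np

# Post-fit nuisance parameter pulls and uncertainties; these values and
# parameter names are made up for illustration.
labels = ["JES", "b-tagging efficiency", "luminosity"]
pulls = np.asarray([0.3, -0.8, 0.1])   # (theta_hat - theta_0) / delta_theta
errors = np.asarray([0.9, 0.6, 1.0])   # post-fit uncertainties

y_positions = np.arange(len(labels))
fig, ax = plt.subplots()
ax.axvspan(-1.0, 1.0, color="yellow", alpha=0.3)  # one-sigma band
ax.axvline(0.0, color="gray", linestyle="--")
ax.errorbar(pulls, y_positions, xerr=errors, fmt="ko")
ax.set_yticks(y_positions)
ax.set_yticklabels(labels)
ax.set_xlabel(r"$(\hat{\theta} - \theta_0) / \Delta\theta$")
fig.savefig("pulls.png")
```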

alexander-held commented 4 years ago

Template-based fit

The repository alexander-held/template_fit_workflows illustrates three different approaches to the template fit workflow:

  1. "traditional approach" fully based on TRExFitter for template histogram production, workspace production and inference steering,
  2. parsing the xml workspace provided by TRExFitter with pyhf for subsequent inference within pyhf,
  3. fully python-based approach, building template histograms with FAST-HEP, constructing a workspace from them and inference with pyhf
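
As a sketch of the second approach (the paths are placeholders for the TRExFitter outputs), the xml workspace can be converted with pyhf and serialized to JSON:

```python
import json
import pyhf

# Convert a HistFactory xml + ROOT workspace (e.g. as exported by
# TRExFitter) into a pyhf specification; paths are placeholders.
spec = pyhf.readxml.parse("config/combination.xml", rootdir=".")
workspace = pyhf.Workspace(spec)

# Build the model and observed data for subsequent inference in pyhf.
model = workspace.model()
data = workspace.data(model)

# Serialize the workspace to JSON for archival and reuse.
with open("workspace.json", "w") as handle:
    json.dump(spec, handle, indent=2, sort_keys=True)
```

The same conversion is also available from the command line via `pyhf xml2json`.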

The three approaches yield consistent results, and the repository contains an example for each. This small implemented example is an important step towards more complex models. It led to the identification of several points that are now being followed up on via discussions in the IRIS-HEP Slack, the FAST-HEP gitter channel, and GitHub issues. Relevant issues include requests for an extended example for parsing xmls with pyhf, support for parsing normalization factors from xmls, and pruning of nuisance parameters for the model's statistical uncertainties. For FAST-HEP, they include wildcard support for trees in ntuples and support for a different way of writing information to YAML files.

Besides the points listed above, this investigation also identified that closely related configuration information is spread across multiple places; it would be easier to specify it in one central place. More specifically, the FAST-HEP configuration requires information about which types of template histograms are needed and how to build them, while the subsequent workspace construction needs information about the relations between such histograms. One of the next steps is the adoption of a central configuration file, possibly similar to this TRExFitter example; a hypothetical sketch follows below. Another similar example is @lukasheinrich's YAML-based pyhf input.
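
Purely for illustration, a central configuration along these lines might combine both kinds of information in one place. The schema below is entirely invented (it is not an existing FAST-HEP or pyhf format) and could equally be expressed as a YAML file:

```python
# Hypothetical central configuration combining histogram production and
# model building; this schema is invented for illustration.
config = {
    # what FAST-HEP needs: where the inputs live and what to histogram
    "samples": {
        "signal": {"files": ["signal.root"], "tree": "events"},
        "background": {"files": ["background.root"], "tree": "events"},
    },
    "histograms": {
        "mjj": {"binning": [0, 500, 1000, 2000], "weight": "weight"},
    },
    # what the workspace construction needs: how histograms relate
    "model": {
        "channels": {"signal_region": ["signal", "background"]},
        "normfactors": {"mu_sig": {"applies_to": "signal"}},
    },
}
```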

cranmer commented 4 years ago

CMS Higgs demo done with Kubernetes: https://github.com/lukasheinrich/higgs-demo