matthewfeickert opened this issue 6 years ago
What we need for this is, e.g., a setup where we can generate (a) JSON specs and (b) XML + ROOT files from a single source of truth:
source of truth (could just be the JSON spec, either from disk or generated on the fly)
         /                    \
   XML + ROOT         JSON spec (if needed)
        |                      |
     ROOT HF                 pyhf
        |                      |
  results.json            results.json
         \                    /
       validation by comparison
             benchmarking
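As a sketch of the pyhf leg of this diagram, assuming the current pyhf Python API and hypothetical file names (`spec.json`, `results.json`) plus an illustrative output schema:

import json

import pyhf

# Sketch of the pyhf branch: JSON spec -> pyhf -> results.json
# The file names and the keys written out are illustrative assumptions.
with open("spec.json") as spec_file:
    spec = json.load(spec_file)

workspace = pyhf.Workspace(spec)
model = workspace.model()
data = workspace.data(model)

# Observed and expected CLs for a signal-strength hypothesis mu = 1
cls_obs, cls_exp = pyhf.infer.hypotest(1.0, data, model, return_expected=True)

with open("results.json", "w") as results_file:
    json.dump(
        {"CLs_obs": float(cls_obs), "CLs_exp": float(cls_exp)}, results_file
    )

The ROOT HF leg would write a results.json with the same schema, so the two outputs can be diffed directly in the comparison step.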
Given this, I should probably complete Issue #3 first.
For future reference, a solution to the benchmark naming problems I was running into with pytest-benchmark is to simply pass the --benchmark-histogram option last.
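For example, with a minimal pytest-benchmark test (the file name, function names, and model values here are hypothetical, and the model helper assumes the current pyhf API), the invocation would be something like `pytest tests/benchmarks/test_hypotest.py --benchmark-histogram`, with the histogram option last:

# tests/benchmarks/test_hypotest.py (hypothetical file name)
import pyhf
import pytest


@pytest.fixture
def model():
    # A simple two-bin, single-channel model for timing purposes
    return pyhf.simplemodels.uncorrelated_background(
        signal=[12.0, 11.0], bkg=[50.0, 52.0], bkg_uncertainty=[3.0, 7.0]
    )


def test_hypotest(benchmark, model):
    data = [51.0, 48.0] + model.config.auxdata
    # pytest-benchmark calls the wrapped function repeatedly and records timings
    result = benchmark(pyhf.infer.hypotest, 1.0, data, model)
    assert result is not None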
As pointed out in Issue #77, @vincecr0ft has made WS_Builder benchmarks at WorkSpaceBuilder. These need to be integrated into the pyhf benchmarking suite. First thoughts on steps: