-
# Description
As outlined in the [command line options](http://pytest-benchmark.readthedocs.io/en/stable/usage.html#commandline-options) for `--benchmark-histogram`
> --benchmark-histogram=FILENAME…
-
# Description
I would have expected this to work
```
spec = {
    'channels': [
        {
            'name': 'singlechannel',
            'samples': [
                {
                    …
```
-
# Description
The current way of importing backends is a little redundant:
```python
from pyhf.tensor.numpy_backend import numpy_backend
from pyhf.tensor.pytorch_backend import pytorch_backend…
```
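One way to remove the repetition would be a small name-to-class registry behind a single import. A minimal self-contained sketch (the backend classes below are stand-ins, and `get_backend` is a hypothetical helper, not existing pyhf API):

```python
# Hypothetical sketch: map a backend's short name to its class so callers
# avoid the repetitive `from pyhf.tensor.<name>_backend import <name>_backend`
# pattern. The classes here are stand-ins for the real pyhf backends.

class numpy_backend:
    """Stand-in for pyhf.tensor.numpy_backend.numpy_backend."""
    name = 'numpy'

class pytorch_backend:
    """Stand-in for pyhf.tensor.pytorch_backend.pytorch_backend."""
    name = 'pytorch'

_BACKENDS = {cls.name: cls for cls in (numpy_backend, pytorch_backend)}

def get_backend(name):
    """Look up a backend class by short name, e.g. get_backend('numpy')."""
    try:
        return _BACKENDS[name]
    except KeyError:
        raise ValueError(f'unknown backend: {name!r}')

print(get_backend('numpy').name)  # -> numpy
```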
-
# Description
we want to be able to do e.g. `tf.Session().run(qmu)`
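For context, the pattern this implies can be sketched without TensorFlow itself; `Session` and `Tensor` below are toy stand-ins for the TF 1.x deferred-execution API, and the `qmu` value is just a placeholder:

```python
# Toy illustration (not TensorFlow) of the deferred-evaluation pattern that
# `tf.Session().run(qmu)` implies: the result is built symbolically up front
# and only materialized when run.

class Tensor:
    def __init__(self, fn):
        self._fn = fn  # deferred computation

class Session:
    def run(self, tensor):
        # evaluate the deferred computation, like tf.Session().run
        return tensor._fn()

qmu = Tensor(lambda: 2.5)   # stand-in for a symbolically built test statistic
print(Session().run(qmu))   # -> 2.5
```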
-
Is it possible to plot normalised, overlaid histograms using something like:
```python
histogram.overlay("x").marker("y", error=True, normed=True).to(canvas)
histogram.overlay("x").normalize().ma…
```
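Whatever the fluent API ends up looking like, the `normed=True` part presumably reduces to density normalization of the bin counts, which is what makes overlaid samples of different sizes comparable. A minimal numpy-only sketch (data and binning are made up):

```python
import numpy as np

# Sketch of what "normed=True" would do under the hood: scale bin counts so
# the histogram integrates to 1 (a density).

rng = np.random.default_rng(0)
data = rng.normal(size=1000)

counts, edges = np.histogram(data, bins=20, density=True)
widths = np.diff(edges)

# a density histogram integrates to 1 by construction
print(np.sum(counts * widths))  # -> 1.0 (up to float rounding)
```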
-
# Description
If using a backend other than `numpy_backend`, we currently have to set the optimizer manually. However, this should happen automatically when the backend is changed.
Otherwise thi…
-
should be mostly straightforward, but requires additional bookkeeping of which samples participate. Essentially one additional constraint term per bin
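To make "one additional constraint term per bin" concrete, here is a hedged stand-alone sketch of per-bin Poisson constraint terms; the auxiliary counts and the `gamma` parameterization are illustrative, not the exact pyhf implementation:

```python
import math

# Sketch: each bin gets a nuisance parameter gamma_b constrained by an
# auxiliary Poisson measurement, contributing one log-likelihood term per bin.

def poisson_logpmf(k, lam):
    """Log of the Poisson pmf, via lgamma to stay in pure stdlib."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

aux_data = [100, 400]   # auxiliary counts per bin (illustrative)
gammas = [1.0, 0.95]    # per-bin nuisance parameters (illustrative)

# one Poisson constraint term per bin, summed into the log-likelihood
constraint = sum(
    poisson_logpmf(n, g * n) for n, g in zip(aux_data, gammas)
)
print(constraint)
```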
-
When benchmarking the performance of the TensorFlow backend and optimizer, performance decreases (the run time of the test increases) with the number of iterations performed. This should not be happening …
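One plausible (unconfirmed) cause for run time growing with iteration count in TF 1.x code is adding new ops to the graph inside the benchmark loop, so each iteration evaluates an ever-larger graph. The effect can be illustrated abstractly, without TensorFlow:

```python
# Toy illustration (not TensorFlow): if each "iteration" appends work to a
# shared structure and then re-evaluates all of it -- as happens when ops are
# added to a TF graph inside a loop -- per-iteration cost grows linearly and
# total run time grows quadratically.

ops = []

def leaky_iteration():
    ops.append(1)      # a new "op" added every call
    return sum(ops)    # the whole accumulated "graph" re-evaluated

def clean_iteration():
    return sum((1,))   # fixed amount of work per call

for _ in range(1000):
    leaky_iteration()

print(len(ops))  # -> 1000: the "graph" kept growing
```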
-
we should be able to run something like `hist2workspace`, but such that it dumps JSON which can be fed to the `pyhf.hfpdf` constructor
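As a rough sketch of the desired output, a spec in the channels/samples shape used elsewhere in these issues serializes cleanly to JSON; the field values below are made up for illustration, not a fixed schema:

```python
import json

# Hedged sketch of what a hist2workspace-style dump could emit: a JSON spec
# in the channels/samples shape, round-trippable so it can be fed back to a
# pdf constructor.

spec = {
    'channels': [
        {
            'name': 'singlechannel',
            'samples': [
                {'name': 'signal', 'data': [5.0, 10.0]},
                {'name': 'background', 'data': [50.0, 60.0]},
            ],
        }
    ]
}

blob = json.dumps(spec, indent=2)
print(blob.splitlines()[0])  # -> {
```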
-
in yadage, we have a Travis deploy job that automatically builds the docs and pushes them to `gh-pages`:
https://github.com/diana-hep/yadage/blob/master/.travis.yml#L25
- [x] Setup CI
- [x] Add automodule …
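For reference, a Travis `pages` deploy stanza along the lines of the yadage one looks roughly like this (the token variable and the docs output path are placeholders to adapt):

```yaml
deploy:
  provider: pages
  skip-cleanup: true
  github-token: $GITHUB_TOKEN   # placeholder: set in the Travis repo settings
  local-dir: docs/_build/html   # placeholder: wherever the docs build lands
  on:
    branch: master
```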