Closed: whedon closed this issue 3 years ago
Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @suchitakulkarni, @bradkav it looks like you're currently assigned to review this paper :tada:.
:warning: JOSS reduced service mode :warning:
Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.
:star: Important :star:
If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿
To fix this, unwatch this repository.
For a list of things I can do to help you, just type:
@whedon commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@whedon generate pdf
PDF failed to compile for issue #2823 with the following error:
Can't find any papers to compile :-(
@whedon generate pdf from branch docs/add-JOSS-paper
Attempting PDF compilation from custom branch docs/add-JOSS-paper. Reticulating splines etc...
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi @eloisabentivegna, it looks like the invitation at this URL - https://github.com/openjournals/joss-reviews/invitations - has expired (sorry I was a bit slow getting around to it). Could the invitation be sent again? Thanks.
@whedon re-invite @bradkav as reviewer
OK, the reviewer has been re-invited.
@bradkav please accept the invite by clicking this link: https://github.com/openjournals/joss-reviews/invitations
@suchitakulkarni, @bradkav, were you able to start the review process, e.g. downloading pyhf? Could you post an update?
@eloisabentivegna @suchitakulkarni @bradkav, in the time since we first submitted pyhf to JOSS pre-review we've also released patch release v0.5.3 and are about to release v0.5.4. Do you want us to rebase the branch that has the JOSS paper in it with these numbers updated, or would you prefer everything remain strictly frozen at v0.5.2?
Dear @matthewfeickert, given we are in the early stages of the review process, I think a rebase would not be too disruptive. I leave it up to the authors: if you prefer to publish the latest version with JOSS, please make sure the paper branch is up to date. Once the reviews are further under way, it may be best to keep the code frozen.
> given we are in the early stages of the review process, I think a rebase would not be too disruptive. I leave it up to the authors: if you prefer to publish the latest version with JOSS, please make sure the paper branch is up to date. Once the reviews are further under way, it may be best to keep the code frozen.
Sounds good! I just rebased so I'll trigger whedon now.
@whedon generate pdf from branch docs/add-JOSS-paper
Attempting PDF compilation from custom branch docs/add-JOSS-paper. Reticulating splines etc...
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Rebased to get v0.5.4 (detailed here), which is the same as v0.5.3 but with some changes to safeguard optional dependencies.
@whedon generate pdf from branch docs/add-JOSS-paper
Attempting PDF compilation from custom branch docs/add-JOSS-paper. Reticulating splines etc...
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@eloisabentivegna Sorry for the delay. I've now started the review (downloaded & installed pyhf, running some tests etc.). I should be able to provide some comments by the end of the week.
Review:
pyhf is an incredibly useful tool, which is already an important part of the HEP analysis ecosystem. Installation is straightforward and there are plenty of examples. Before I can recommend publication in JOSS, I just have a few minor comments and suggestions, aimed mostly at making the code more accessible to newcomers.
[x] Documentation: Some brief examples/documentation are provided in the README. However, it appears that the most complete documentation is at https://scikit-hep.org/pyhf/, so it would be useful to add a link to this more prominently in the README. This would also lead new users more quickly to the many examples at https://scikit-hep.org/pyhf/examples.html.
[ ] Examples: The command-line example in the README (cat << EOF | tee likelihood.json | pyhf cls) doesn't seem to work. Is there a missing likelihood.json file?
[x] Tests: The examples in the README give the correct output. However, it appears that there are also automated tests in the repo. Is there a straightforward way for the user to run these tests and verify that everything is working fine?
[x] Paper: The citation syntax specifies that citations should be surrounded with square brackets and separated by semi-colons. Please could you update the paper?
[x] Paper: Typo in the paper - under "Future Work", " aims to provide support limit setting" should presumably be "aims to provide support for limit setting".
Hi @bradkav - thanks for reviewing. For the example in the README, you should use the full block of code up to and including the last EOF, that is (see below). Admittedly, this is probably not so obvious for users not familiar with all the features of the command line, so if you have suggestions on how to rephrase this, let us know!
For "Tests" - the straightforward way is through pytest
- and just running pytest
without any options should run all tests, if the user has checked out the development install of pyhf
. How would you like us to document the "straightforward" way or is it enough to mention that we use pytest
for all of our tests?
For the rest, we'll work on implementing the changes/fixes. Thanks again.
cat << EOF | tee likelihood.json | pyhf cls
{
"channels": [
{ "name": "singlechannel",
"samples": [
{ "name": "signal",
"data": [12.0, 11.0],
"modifiers": [ { "name": "mu", "type": "normfactor", "data": null} ]
},
{ "name": "background",
"data": [50.0, 52.0],
"modifiers": [ {"name": "uncorr_bkguncrt", "type": "shapesys", "data": [3.0, 7.0]} ]
}
]
}
],
"observations": [
{ "name": "singlechannel", "data": [51.0, 48.0] }
],
"measurements": [
{ "name": "Measurement", "config": {"poi": "mu", "parameters": []} }
],
"version": "1.0.0"
}
EOF
as in this screenshot:
@whedon generate pdf from branch docs/add-JOSS-paper
Attempting PDF compilation from custom branch docs/add-JOSS-paper. Reticulating splines etc...
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Thanks for the review @bradkav! We really appreciate it. :slightly_smiling_face: I've had whedon update the draft with the corrections (cf. the comment above).
- [ ] Documentation: Some brief examples/documentation are provided in the README. However, it appears that the most complete documentation is at https://scikit-hep.org/pyhf/, so it would be useful to add a link to this more prominently in the README. This would also lead new users more quickly to the many examples at https://scikit-hep.org/pyhf/examples.html.
There is currently a badge at the top of the README that links to the documentation (seen also in this screenshot). The link is additionally found in the "About" section on the GitHub repository page.
We also encourage acquiring pyhf from our officially supported distributions on PyPI and Conda-forge, where the links to the documentation are clearly visible on the pages thanks to the nice page design by the orgs.
Let us know if you have additional suggestions.
- [ ] Examples: The command-line example in the README (cat << EOF | tee likelihood.json | pyhf cls) doesn't seem to work. Is there a missing likelihood.json file?
As @kratsg has already mentioned, this example is using heredoc formatting, which is quite common in our target audience of users.
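For anyone less familiar with heredocs, here is a minimal, pyhf-independent sketch of the same shell pattern (the file name is just a placeholder):

# Everything between "<< EOF" and the closing "EOF" is fed to cat's stdin,
# tee copies it into greeting.txt, and wc -l counts the lines (it prints 2)
cat << EOF | tee greeting.txt | wc -l
hello
world
EOF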
- [ ] Tests: The examples in the README give the correct output. However, it appears that there are also automated tests in the repo. Is there a straightforward way for the user to run these tests and verify that everything is working fine?
We do not distribute the test suite with the PyPI and Conda-forge distributions, as testing the software is the responsibility of the development team, not the users (though we do acknowledge that libraries like NumPy and SciPy do distribute the tests in their releases). Distributing the tests would also require a large number of additional dependencies to run the full test suite (as the test suite compares execution across all optional externals and backends), which users should not be required to have for runtime use. The full test suite is run on every commit using GitHub Actions workflows and is publicly viewable on GitHub; the status is also visible through the live-updating badges at the top of the README for the test suite, the documentation build, and the latest deployment to PyPI:
If a user wants to run the full test suite, there are details on the developer docs page as well as in CONTRIBUTING.md.
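For reference, a rough sketch of what that looks like, assuming a development checkout and that the test dependencies are exposed as a pip extra (the exact extra name and any further setup are spelled out in CONTRIBUTING.md and the developer docs):

# Get a development checkout of pyhf
git clone https://github.com/scikit-hep/pyhf.git
cd pyhf

# Editable install with the test dependencies
# (the "[test]" extra name here is an assumption; see CONTRIBUTING.md for the exact invocation)
python -m pip install --upgrade --editable ".[test]"

# Run the full test suite
python -m pytest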
- [ ] Paper: The citation syntax specifies that citations should be surrounded with square brackets and separated by semi-colons. Please could you update the paper?
Sure, I've changed them here. In pre-review @kthyng had suggested we remove the [] syntax. Both citation styles differ from what is common in high energy physics, so we're happy to adopt either of them. We had also missed the semicolons for separation (sorry about that), so that is fixed now too.
- [ ] Paper: Typo in the paper - under "Future Work", " aims to provide support limit setting" should presumably be "aims to provide support for limit setting".
Doh! Thanks for catching that. Fixed!
Thanks for the quick reply @matthewfeickert and @kratsg!
Concerning the documentation, I now appreciate that there are quite a few different links, so it was perhaps me not looking in the right places. I'm happy with this now.
Concerning the tests, I now see the section on tests in CONTRIBUTING.md, so I'm happy with that. I agree that it's not necessary to distribute the test suite (especially with the live test status badges in the repo).
The final thing is the command line example. Thanks @kratsg for the clear explanation. It wasn't a familiar format to me and I tried a few different combinations of things, but never the correct one. Doh. I realise that this format is common among the target audience, but in the interest of clarity, can I suggest splitting up the input and output in the example? E.g.:
$ cat << EOF | tee likelihood.json | pyhf cls
{
"channels": [
{ "name": "singlechannel",
"samples": [
{ "name": "signal",
"data": [12.0, 11.0],
"modifiers": [ { "name": "mu", "type": "normfactor", "data": null} ]
},
{ "name": "background",
"data": [50.0, 52.0],
"modifiers": [ {"name": "uncorr_bkguncrt", "type": "shapesys", "data": [3.0, 7.0]} ]
}
]
}
],
"observations": [
{ "name": "singlechannel", "data": [51.0, 48.0] }
],
"measurements": [
{ "name": "Measurement", "config": {"poi": "mu", "parameters": []} }
],
"version": "1.0.0"
}
EOF
which should produce the following JSON output:
{
"CLs_exp": [
0.0026062609501074576,
0.01382005356161206,
0.06445320535890459,
0.23525643861460702,
0.573036205919389
],
"CLs_obs": 0.05251497423736956
}
@suchitakulkarni, could you post an update on your review? Thanks!
@eloisabentivegna @bradkav Just popping in here to note that I'll get some follow-up to @bradkav's last review (thanks for it, too!), and to anything that @suchitakulkarni has, posted soon. The pyhf dev team all took the winter holidays off entirely and we're now trying to fight through 2 weeks of email, but we'll reply soon.
Hi,
I was trying to complete my review today, however I can no longer access any of the documents. Was the invitation revoked? I am sorry to have been completely silent; I was trying to move to another city (my job is in Graz and I am stuck in Vienna). I had to confirm and reconfirm everything multiple times due to ever-changing lockdown rules in Austria.
I can still complete the review if it is feasible for you.
I am very sorry once again!
best Suchita.
P.S. If invited again, I will make sure to complete the review today.
@whedon re-invite @suchitakulkarni as reviewer
OK, the reviewer has been re-invited.
@suchitakulkarni please accept the invite by clicking this link: https://github.com/openjournals/joss-reviews/invitations
@suchitakulkarni, I have just reinvited you. Thanks for your help, and stay safe!
Thanks a lot! I will submit my review tomorrow. I already did check a few things out. Would still like to make few more checks and complete tasks in order.
Hi,
Thanks for this incredibly useful and fast package. I have a few questions/comments.
1. In the example given in the repository, should there be a plt.show() or plt.savefig() command at the end? The plot didn't show up for me otherwise, at least. It may also be very nice to provide that example as a Python file so that the user can run it and get the 'instant satisfaction' of seeing an output :)
2. I could not run pytest on my laptop. It fails with an ugly trace and the error seems to be 'E TypeError: function() got an unexpected keyword argument 'jit_compile''.
3. Does the code have to have tensorflow/pytorch installed? If so, should they be mentioned as necessary packages? I didn't have them installed, but it seems some of the tests use them?
Thanks for the additional comments and information @bradkav, as well as your patience while we got back to you after the holidays. We've gone ahead and followed your suggestion and split out the output into a separate block in the README.
Thanks @matthewfeickert! I'm happy with all the changes.
So @eloisabentivegna I'm now happy to recommend this for publication in JOSS :fireworks:
Thanks for the review @suchitakulkarni! We've updated the README to reflect your feedback.
- In the example given in the repository, should there be a plt.show() or plt.savefig() command at the end? The plot didn't show up for me otherwise, at least. It may also be very nice to provide that example as a Python file so that the user can run it and get the 'instant satisfaction' of seeing an output :)
It is uncommon to give explicit figure save instructions in most libraries that build on top of existing visualization tools; however, given your suggestion we've adopted matplotlib's approach in their documentation of including an explicit fig.show() in the two visualization examples in the README. I'm not sure what you mean by "provide that example as a Python file", though. How do you envision this file being distributed to a user? The distributions of pyhf on PyPI and Conda-forge are distributions of the runtime, not of the entire repository. The examples in the README are runnable code that can be copied into any file or REPL and are shortened versions of longer examples taken from the documentation.
- I could not run pytest on my laptop. It fails with an ugly trace and the error seems to be 'E TypeError: function() got an unexpected keyword argument 'jit_compile'
As we mentioned in our reply to @bradkav's first round of comments, similar to how we do not distribute the tests with distributions of pyhf on PyPI or Conda-forge, to run the test suite developers should install all necessary dependencies as detailed on the developer docs page as well as in CONTRIBUTING.md.
- Does the code have to have tensorflow/pytorch installed? If so, should they be mentioned as necessary packages? I didn't have them installed, but it seems some of the tests use them?
No, they are not required at user runtime, as they are optional backends (as mentioned in the installation documentation). They are of course required for the tests.
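For completeness, a sketch of how a user could pull in the optional backends in one go (the "backends" extra name is taken as an assumption from the installation documentation and may differ between versions):

# Install pyhf together with its optional computational backends (e.g. TensorFlow, PyTorch, JAX)
# NOTE: the "backends" extra name is an assumption; check the installation docs for your version
python -m pip install "pyhf[backends]"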
@suchitakulkarni @eloisabentivegna I just wanted to check in again and see if there were any further comments or questions on our responses.
Hi,
apologies, I think I am good! Thanks for making the changes, and I agree that it will be non-trivial to provide a Python file as an example with the distribution.
Sorry for the delay again!
I'm assuming that this was closed in error
@danielskatz I think it was closed by accident. I'm a bit confused about the next steps. It seems both reviewers have given the 👍. Does this go to @eloisabentivegna?
@eloisabentivegna is the editor and is in charge of the process at this point.
Also, there are still some items left for @suchitakulkarni to check.
> Also, there are still some items left for @suchitakulkarni to check.
I think both of those unchecked boxes in the Documentation section of the Review checklist were addressed in the most recent set of comments, so I think they just need to be filled in now, if everyone is happy with the state of our responses.
Okay, filled! Sorry, I am turning out to be a very messy reviewer indeed. Thank you for reminding me.
@bradkav, @suchitakulkarni, many thanks for your help and comments. Based on your recommendation, I will start the prepublication phase for this submission.
@whedon generate pdf from branch docs/add-JOSS-paper
Attempting PDF compilation from custom branch docs/add-JOSS-paper. Reticulating splines etc...
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Submitting author: @matthewfeickert (Matthew Feickert)
Repository: https://github.com/scikit-hep/pyhf
Version: v0.5.4
Editor: @eloisabentivegna
Reviewers: @suchitakulkarni, @bradkav
Archive: 10.5281/zenodo.4318533
:warning: JOSS reduced service mode :warning:
Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@suchitakulkarni & @bradkav, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @eloisabentivegna know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Review checklist for @suchitakulkarni
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Review checklist for @bradkav
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper