Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @sadielbartholomew, @ashiklom it looks like you're currently assigned to review this paper :tada:.
:warning: JOSS reduced service mode :warning:
Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.
:star: Important :star:
If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, due to GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿
To fix this do the following two things:
For a list of things I can do to help you, just type:
@whedon commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@whedon generate pdf
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1175/JCLI-D-12-00195.1 is OK
- 10.1175/JCLI-D-12-00196.1 is OK
- 10.1175/2009JCLI3466.1 is OK
- 10.1002/2015GL064240 is OK
- 10.5194/gmd-11-2273-2018 is OK
- 10.5194/gmd-2019-375 is OK
MISSING DOIs
- 10.1007/s10584-011-0156-z may be a valid DOI for title: The RCP greenhouse gas concentrations and their extensions from 1765 to 2300
- 10.1007/s00382-019-04686-4 may be a valid DOI for title: On simple representations of the climate response to external radiative forcing
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
:wave: @sadielbartholomew, @ashiklom, @znicholls this is the review thread for the paper. All of our communications will happen here from now on.
Both reviewers have checklists at the top of this thread with the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.
The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention openjournals/joss-reviews#2766
so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.
We aim for reviews to be completed within about 4 weeks. Please let me know if any of you require some more time. We can also use @whedon (our bot) to set automatic reminders if you know you'll be away for a known period of time.
Please feel free to ping me here (@leouieda) or email me privately if you have any questions/concerns.
The package installs with pip from the Python Package Index (PyPI) and from source, as described in the instructions.
@ashiklom thank you for your thorough review (and issues on the software repository)!
@znicholls please do your best to address some of the points raised, in particular:
Please keep us posted on your progress 🙂
@znicholls please do your best to address some of the points raised, in particular
Will do. I've started in a couple of PRs, will work on point 2 this week. Thanks to both for the great review and co-ordination!
@whedon commands
Here are some things you can ask me to do:
# List Whedon's capabilities
@whedon commands
# List of editor GitHub usernames
@whedon list editors
# List of reviewers together with programming language preferences and domain expertise
@whedon list reviewers
EDITORIAL TASKS
# Compile the paper
@whedon generate pdf
# Compile the paper from alternative branch
@whedon generate pdf from branch custom-branch-name
# Ask Whedon to check the references for missing DOIs
@whedon check references
# Ask Whedon to check repository statistics for the submitted software
@whedon check repository
I've updated the missing DOIs (https://github.com/openscm/openscm-twolayermodel/pull/24) so will recompile
@whedon check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1175/JCLI-D-12-00195.1 is OK
- 10.1175/JCLI-D-12-00196.1 is OK
- 10.1175/2009JCLI3466.1 is OK
- 10.1002/2015GL064240 is OK
- 10.1007/s10584-011-0156-z is OK
- 10.1007/s00382-019-04686-4 is OK
- 10.5194/gmd-11-2273-2018 is OK
- 10.5194/gmd-2019-375 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@whedon generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
- For instance, were previous implementations of the two-layer model in something less user-friendly like Fortran, or less open like Matlab?
On this one I'm not sure. There's FaIR, but it is missing some of the conversion functionality and modularisation we've implemented here (as mentioned in the draft). I haven't been able to find any other public implementations so couldn't comment on other languages unfortunately.
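As background for readers: the two-layer model discussed throughout this thread is the classic two-layer energy-balance model (e.g. Geoffroy et al., 2013, cited in the paper). Below is a minimal Euler-step sketch of those equations; the function, parameter names and values are illustrative only and do not reflect openscm-twolayermodel's actual API.

```python
def run_two_layer(forcing, dt=1.0, lam=1.3, eff=1.0, eta=0.8, C=8.0, C_D=100.0):
    """Integrate the two-layer energy-balance equations for a list of
    forcing values F (W/m^2), one per timestep of length dt (years):

        C   dT/dt   = F - lam*T - eff*eta*(T - T_D)
        C_D dT_D/dt = eta*(T - T_D)

    Heat capacities C, C_D are in W yr m^-2 K^-1; lam is the climate
    feedback parameter, eta the deep-ocean heat uptake coefficient and
    eff the uptake efficacy. Returns the upper-layer temperature series.
    """
    T, T_D = 0.0, 0.0
    out = []
    for F in forcing:
        # tendencies evaluated from the current state, then both updated
        dT = (F - lam * T - eff * eta * (T - T_D)) / C
        dT_D = eta * (T - T_D) / C_D
        T += dT * dt
        T_D += dT_D * dt
        out.append(T)
    return out

# Example: constant forcing of 3.7 W/m^2 (roughly 2xCO2) for 50 years
temps = run_two_layer([3.7] * 50)
```

With these illustrative parameters the upper-layer temperature rises monotonically towards the equilibrium warming F/lam, with the deep ocean (large C_D) lagging behind.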
Text updates based on @ashiklom's review will go in https://github.com/openscm/openscm-twolayermodel/pull/25
@ashiklom I've added a table with openly accessible models from RCMIP (so just FaIR, GREB and Hector) to openscm/openscm-twolayermodel#25. Does that work or would extra text and explanation be useful?
For instance, were previous implementations of the two-layer model in something less user-friendly like Fortran, or less open like Matlab?
On this one I'm not sure. There's FaIR, but it is missing some of the conversion functionality and modularisation we've implemented here (as mentioned in the draft). I haven't been able to find any other public implementations so couldn't comment on other languages unfortunately.
Are there any openly available implementations? Maybe highlight that more, if that's the case?
Are there any openly available implementations?
Not as far as I'm aware.
@ashiklom I've added a table with openly accessible models from RCMIP (so just FaIR, GREB and Hector) to openscm/openscm-twolayermodel#25. Does that work or would extra text and explanation be useful?
Maybe one could reference models that are easily driven in forcing mode? Oscar, WASP and Pymagicc/MAGICC are also openly available on GitHub, but then one slowly ends up referencing the entire table from the RCMIP paper ...
I think the table you added, combined with the references to the RCMIP paper throughout the documentation, is sufficient to address this (you might consider making it extra clear that the RCMIP paper is the more comprehensive list, e.g., "For a more extensive list of simple climate models, see Nicholls et al... (link)").
Oscar, WASP and Pymagicc/MAGICC
I've added these now
Maybe one could reference models that are easily driven in forcing mode?
I could do this, that would mean I'd remove GREB. Seems fine to me as it's also kind of outside the typical idea of a simple climate model.
Maybe one could reference models that are easily driven in forcing mode?
I could do this, that would mean I'd remove GREB. Seems fine to me as it's also kind of outside the typical idea of a simple climate model.
There is probably no clear distinction, so I'd keep it. As it's also meant for teaching (I think), it would be a good pointer!
@whedon generate pdf from branch joss-review-1
Attempting PDF compilation from custom branch joss-review-1. Reticulating splines etc...
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi, I can start my review in earnest shortly, so can I just check that the state of the repository and of the paper has stabilised now in terms of changes made resulting from feedback from the first review, @znicholls ?
It's been a few days since any activity (at least as indicated on this thread) so it seems likely, but I think it best to check, to avoid starting the review only to have some change go in that might influence the judgement I can make.
so can I just check that the state of the repository and of the paper has stabilised now in terms of changes made resulting from feedback from the first review, @znicholls ?
Yep I've just merged everything into master and will get whedon to recompile now. Thanks for checking and reviewing!
@whedon generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
:wave: Hi @sadielbartholomew, @ashiklom just checking in to see how this is going. It's been a while since there was any activity here.
@ashiklom I noticed that your checklist is nearly finished. Could you please go over once more and see if the recent changes meet all of the requirements?
Hi @leouieda, thanks for checking in. Sorry for the delay, which has arisen because I've been prioritising another JOSS review which began at an earlier date than this one. But I have largely completed that now, so I can focus on this review. I'll aim to complete it by the end of the week. Thanks to all for your patience.
Yes, recent changes look good -- thanks! My checklist is now complete.
@ashiklom thank you for the review 👍🏽
@sadielbartholomew I appreciate all your hard work for JOSS 🙂 Let me know if you need more time or have any questions/concerns.
@whedon check repository
Software report (experimental):
github.com/AlDanial/cloc v 1.88  T=0.68 s (80.0 files/s, 13842.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          24            978           1354           3261
reStructuredText                13            153             89            221
TeX                              1             24              0            201
Jupyter Notebook                 4              0           2380            179
YAML                             5             17              9            164
make                            2             29             10            113
Markdown                        4             35              0             94
DOS Batch                       1              8              1             26
-------------------------------------------------------------------------------
SUM:                           54           1244           3843           4259
-------------------------------------------------------------------------------
Statistical information for the repository 'c28f6140f49c9c47eb4b0159' was
gathered on 2020/12/10.
The following historical commit information, by author, was found:
Author              Commits   Insertions   Deletions   % of changes
Chris Smith               2            9           3           0.16
Robert Gieseke            2           43          31           0.96
Zebedee Nicholls         62         6588        1013          98.88
Below are the number of rows from each author that have survived and are still
intact in the current revision:
Author                Rows   Stability   Age   % in comments
Chris Smith              8        88.9   0.3            0.00
Robert Gieseke          29        67.4   5.7            0.00
Zebedee Nicholls      5556        84.3   2.8           10.82
Hi @leouieda @znicholls, I started my review; however, before proceeding with the remainder I feel obliged to request a more experienced opinion regarding the 'substantial scholarly effort' criterion. While it is clear that more than sufficient academic effort has gone into the context surrounding this package, I am not convinced the package itself meets the minimum threshold of three months' work, so, as per the JOSS review guidelines, I need to flag this so that an informed decision can be made by the editors.
I say this tentatively, because it is difficult to gauge the effort put into a package. I am aware the model is simple by design, and the fact that it has been used and cited in the RCMIP paper indicates it is important to the research area and indeed very citable. My concerns stem largely from the line count of the functional Python code, excluding tests and configuration scripts, which seems to be ~1500 lines:
$ pwd
/home/sadie/openscm-twolayermodel/src/openscm_twolayermodel
$ find . -name '*.py' | xargs wc -l
314 ./base.py
12 ./__init__.py
15 ./errors.py
10 ./constants.py
321 ./impulse_response_model.py
520 ./_version.py
355 ./two_layer_model.py
37 ./utils.py
1584 total
which is low enough that the package could plausibly have taken less than three months in total, even counting the related research time for the idea, design, implementation planning, incremental changes, etc. The commit history indicates it has been worked on, at least on GitHub, for about two-thirds of a year, but the total work time can't be inferred from that.
@leouieda please can you provide an editorial opinion on whether I can consider this criterion met? If so, I will be very happy to proceed with the remainder of the review. Thanks.
For context, the comments in the pre-review:
mine: https://github.com/openjournals/joss-reviews/issues/2728#issuecomment-705214797
@leouieda's response at the time: https://github.com/openjournals/joss-reviews/issues/2728#issuecomment-707132474
@sadielbartholomew thank you for bringing this up. As @znicholls pointed out, we had discussed this in the prereview since the line count is on the small side. However, I'm not a climate researcher which is why we decided to get the opinion of reviewers as well. As you mentioned, this is evidently a useful piece of software given that it's already been cited. From my side, I would be happy to treat this as in scope if you agree. The "three months of effort" is more of a rule of thumb than a hard requirement. But as I said, I'm not an expert in this field. Please also consider the amount of effort that went into testing and documentation.
So disregarding the "3-month of effort" concept, do you think this qualifies as a "substantial scholarly effort"?
My 2 cents: I think this is comfortably within scope. We as an Earth science community need more user-friendly modeling tools like this, and more citable papers describing those tools.
I'm obviously biased (as I'm part of the OpenSCM projects), but this tool also uses the scmdata and openscm-units projects by the same authors, which help keep the code in this project succinct by moving boilerplate code to re-usable modules.
Thanks all for the comments RE the 'substantial scholarly effort' criterion.
However, I'm not a climate researcher which is why we decided to get the opinion of reviewers as well.
I work, in effect, as a research software engineer in the field of climate research; however, that's quite different from being a climate researcher, and I am definitely not an "expert" in the field either! So that's an important caveat to note. If you require the opinion of someone who fits that kind of description, I do have numerous colleagues who would, some of whom I might be able to ask (should that be permitted under the COI rules).
However, if my own opinion is sufficient then, based on the comments here (in particular that our first reviewer @ashiklom regards it as "comfortably within scope"), my answer to the following is 'yes, absolutely':
So disregarding the "3-month of effort" concept, do you think this qualifies as a "substantial scholarly effort"?
and the good point by @rgieseke that:
this tool is also using the scmdata and openscm-units projects by the same authors which help making the code in this project look succinct by moving boilerplate code to re-usable modules
where fewer lines of code are a mark of quality in my eyes, I am more than happy to consider OpenSCM Two Layer Model a 'substantial scholarly effort'.
Are we okay to take my own judgement (given the above caveat) @leouieda? (If so, I will tick that item and I will finish my review soon.) If not, would it help for me to check in with a colleague that is an experienced climate researcher (who can self-certify as having no COI) for their view?
As a side note, I do generally agree with the sentiments such as:
We as an Earth science community need more user-friendly modeling tools like this, and more citable papers describing those tools.
but I am trying to be as objective as possible and judge purely on the review criteria.
@sadielbartholomew thank you for being so thorough and trying to look at this from a different perspective. I'm happy to take your own judgement on this since @ashiklom agrees. Please let me know when you're done and all checks have been marked in the review issue.
@znicholls thank you for your patience since this review has taken longer than usual, in no small part because of me. After reviews are done, there are some editorial procedures to go through but they should be fairly quick.
Happy New Year to everyone! 🍾
Thanks @leouieda in that case I will proceed ASAP and shall aim to finish the review by the end of this week.
I'll have this completed and report back in the next few days, thanks all for your patience.
Hi all, no real rush from my side but just checking in to see if there is any timeline for resolving this?
Hi, I am really sorry @znicholls I was ill after my previous comment and then this fell off my radar when I was back catching up on things.
I'll resume my review later today and finish it, reporting back on this thread with any feedback, sometime this weekend and certainly by the end of the weekend in UTC time. My apologies and thanks for your patience.
Hi @sadielbartholomew, thanks for the update and please look after your health first. There isn't a rush on this just good to know it hasn't disappeared completely.
I am very satisfied with both the software and the paper, notably that they meet the requirements. Therefore I have now ticked all review list checkboxes and I am happy to recommend acceptance of the OpenSCM Two Layer Model paper for publication.
Please do consider the minor comments I raise below @znicholls, though they are merely suggestions or questions that came to me during review and whether or not any changes are made in response to them is up to you, it does not influence my decision to suggest approving the paper to @leouieda.
The most important of these comments are probably those regarding the documentation and output of the automated tests (I'm about to open some minor PRs regarding these items).
Thanks to all for awaiting my review.
Note: for impartiality I have not re-read through the comments of the first reviewer before my review, and hence not referenced them here. Therefore I may have covered points that have already been raised. If they have been addressed satisfactorily already, there is no need for further action.
Organised in correspondence to the review checklist (no mention of a given check box item means I don't have any comments to make about it, other than that I am satisfied it is met by the software and/or paper):
'Functionality': there are a few potential issues raised by some failures and warnings from a run of the tests: see the 'Documentation' -> 'Automated tests' section below for details.
['Performance': just to note that I found no performance claims, rather than having found some and confirmed they are met.]
'Installation instructions': installation is handled seamlessly with both pip and conda, and the corresponding package dependencies are enumerated in setup.py. However, I think it would be useful to additionally list (or process in) the user-facing (i.e. install rather than development) dependencies in the 'Installation' section of the docs, so that potential users who are unsure whether they want to try the software can see the requirements without having to search for them (or some similar scenario).
'Automated tests': though there clearly are tests living under the tests directory of the repo, I struggled to find information about the standard way to run them in either the docs or the README. After more searching and effort than is ideal, I eventually found "Make sure the tests pass by running make check" under 'Getting setup', but that didn't work (it gave make: *** No rule to make target 'check'. Stop.), and only by inspecting the Makefile did I see that the command should in fact be make checks; with that I finally managed to run the tests.
I suggest adding an explicit instruction about running the tests via this command from the root of the repository (or otherwise) to the 'Installation' page. It would also be helpful to add it to the repo README under a section called 'Tests' or similar.
Moreover, when I ran the checks on the current master branch there were 4 failures and 2 significant warnings, so it looks like there are some self-contained issues to be resolved.
I am about to open a few PRs regarding these (I will link them to this issue).
'Community guidelines': the guidelines are clear, with (1) covered particularly thoroughly, though I wonder whether guidance for (2) and (3) might be somewhat difficult to find in the case of a user looking for support, because the guidelines appear to sit under the 'Development' section of the documentation (unless I have missed some further guidance given elsewhere in the docs?). Ideally, an indication of how to get help could be provided outside of the parts of the docs which might be considered to apply to developers only.
'Summary': a very good, clear description. However, I noted that the software is described as 'object-oriented', and I think the paper could be enhanced with a little description (or perhaps a schematic) of the design of OpenSCM Two Layer Model in terms of objects and their relationships, e.g. the use of an abstract base model class, with the two-layer variant and other models sub-classing from it, etc.
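As an illustration of the kind of design sketch suggested above, here is the general abstract-base-class pattern in Python. The class names, methods and dynamics below are hypothetical and are not the package's actual API; they only show the shape such a description could take.

```python
from abc import ABC, abstractmethod


class Model(ABC):
    """Hypothetical abstract base: all model variants share a run interface."""

    @abstractmethod
    def step(self, forcing):
        """Advance the model one timestep under the given forcing."""

    def run(self, forcings):
        """Run the model over a sequence of forcings, collecting output."""
        return [self.step(f) for f in forcings]


class TwoLayerVariant(Model):
    """Hypothetical concrete sub-class holding variant-specific state."""

    def __init__(self):
        self.temperature = 0.0

    def step(self, forcing):
        # placeholder dynamics, purely for illustrating the pattern
        self.temperature += 0.1 * forcing
        return self.temperature


model = TwoLayerVariant()
series = model.run([1.0, 2.0])
```

A schematic like this makes clear which behaviour lives in the shared base class and which is specific to each model variant.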
'State of the field': several other commonly-used packages are mentioned and described, though it appears that only the FaIR model is explicitly compared to the OpenSCM Two Layer Model in the text. Ideally the other models in the table could be briefly compared too, for the benefit of non-specialist readers who would not know how the brief descriptions of the approaches, also given in the table, differentiate the simple open models.
'Quality of writing': very well written; there is nothing I can see that should be edited, except for a few formatting aspects that in my opinion could make the table clearer.
'References': the paper is well-referenced and the format of the references is sound, but I did notice there are two sets of papers cited which have the same authors and publication date, namely "Geoffroy, Saint-Martin, Bellon, et al., 2013" and "M. Meinshausen et al., 2011". Ideally these could be distinguished in the in-text references somehow, else a reader would not be able to tell which of the two was being cited, especially in a printed copy. Perhaps "M. Meinshausen et al., 2011a" and "M. Meinshausen et al., 2011b", as suggested for a similar style here?
From a quick search, I can't see a documented way to do this with the JOSS standard citation syntax, CSL. @leouieda do you know how it could be done?
👋 @leouieda - what's the status of this submission? It looks done, or close to it?
Submitting author: @znicholls (Zebedee Nicholls) Repository: https://github.com/openscm/openscm-twolayermodel Version: v0.2.3 Editor: @leouieda Reviewer: @sadielbartholomew, @ashiklom Archive: 10.5281/zenodo.4950772
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@sadielbartholomew & @ashiklom, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @leouieda know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Review checklist for @sadielbartholomew
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Review checklist for @ashiklom
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper