Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.04 s (536.5 files/s, 159061.4 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                           7            460            918           1634
Markdown                         3             31              0            436
HTML                             8             50             70            252
TeX                              1              0              0             76
Jupyter Notebook                 1              0           3095             64
YAML                             1              1              4             18
XML                              3              0              0              6
-------------------------------------------------------------------------------
SUM:                            24            542           4087           2486
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 561
Failed to discover a Statement of need section in paper
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1016/S0309-1708(02)00092-1 is OK
MISSING DOIs
- 10.1111/gwat.12360 may be a valid DOI for title: Calibration and uncertainty analysis for complex environmental models
- 10.25080/majora-92bf1922-00a may be a valid DOI for title: Data structures for statistical computing in python
INVALID DOIs
- http://dx.doi.org/10.5066/F7RF5S7G is INVALID because of 'https://doi.org/' prefix
- https://doi.org/10.1016/j.envsoft.2021.105022 is INVALID because of 'https://doi.org/' prefix
📄 Download article proof · View article proof on GitHub
@editorialbot commands
Hello @ecomodeller, here are the things you can ask me to do:
# List all available commands
@editorialbot commands
# Get a list of all editors' GitHub handles
@editorialbot list editors
# Check the references of the paper for missing DOIs
@editorialbot check references
# Perform checks on the repository
@editorialbot check repository
# Adds a checklist for the reviewer using this command
@editorialbot generate my checklist
# Set a value for branch
@editorialbot set joss-paper as branch
# Generates the pdf paper
@editorialbot generate pdf
# Generates a LaTeX preprint file
@editorialbot generate preprint
# Get a link to the complete list of reviewers
@editorialbot list reviewers
I was not able to confirm the functional claims of the software: tenemark/mikecalsetup#2, tenemark/mikecalsetup#3, tenemark/mikecalsetup#5.
The readme has a statement of need, but it could be improved: tenemark/mikecalsetup#4
The package needs a list of dependencies, ideally in the form of a conda environment file: tenemark/mikecalsetup#1
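A minimal sketch of such an environment file, assuming a hypothetical dependency list (the actual dependencies must come from the package itself):

```yaml
# Hypothetical environment.yml sketch; names and version pins are illustrative.
name: mikecalsetup
channels:
  - conda-forge
dependencies:
  - python>=3.8
  - numpy
  - pandas
```

Installing with `conda env create -f environment.yml` would then give reviewers a reproducible environment for running the tests.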
While the package includes a minimal working example, it could ideally be improved into a complete working example that includes execution of a forward model run with PEST or OSTRICH: tenemark/mikecalsetup#5.
Most functions and classes are documented with numpy-style docstrings, but I would highly encourage the author to add HTML documentation, which can be easily set up with the Scientific Python Cookiecutter or perhaps other tools tenemark/mikecalsetup#7.
While there is one test file, it didn't execute successfully, and there is no test coverage reporting or CI tenemark/mikecalsetup#3, tenemark/mikecalsetup#7.
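One common way to add both CI and coverage reporting is a GitHub Actions workflow. This is a hedged sketch with illustrative action versions and paths, not a file from the repository:

```yaml
# Hypothetical .github/workflows/test.yml; versions and paths are illustrative.
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      # Install the package plus test tooling
      - run: pip install -e . pytest pytest-cov
      # Run the test suite with coverage reporting
      - run: pytest --cov=mikecalsetup
```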
The documentation does not have community guidelines tenemark/mikecalsetup#8
State of the field: While the paper mentions the `PstFrom` module in pyEMU, the comparison is not entirely correct. While `PstFrom` was initially developed (and is possibly most often used) for MODFLOW models, it is now model-agnostic in the sense that it accepts text files containing numerical arrays or tabular input (as can be read by `numpy.loadtxt()` and `pandas.read_csv()`, for example). `PstFrom`, and more generally pyEMU, has benefited from substantial development effort by some of the top scientists in the parameter estimation/uncertainty quantification space, and offers relatively easy, mostly automated hooks into many cutting-edge techniques such as high-dimensional parameterization, the iterative ensemble smoother, and multi-objective optimization under uncertainty. As a result it is widely used and has good prospects for continued development.
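The text-file interchange described above can be sketched as follows; the file names and values here are purely illustrative, not files from pyEMU or mikecalsetup:

```python
# Hypothetical sketch of the kind of plain-text inputs a model-agnostic
# workflow can consume; file names and values are made up for illustration.
import numpy as np
import pandas as pd

# A 2-D parameter array written as whitespace-delimited text...
arr = np.arange(6, dtype=float).reshape(2, 3)
np.savetxt("hk_layer1.txt", arr)

# ...can be read back with numpy.loadtxt()
loaded = np.loadtxt("hk_layer1.txt")
assert loaded.shape == (2, 3)

# Tabular input (e.g. pilot-point data) as CSV, readable by pandas.read_csv()
pd.DataFrame({"name": ["pp1", "pp2"], "value": [1.5, 2.5]}).to_csv(
    "pp.csv", index=False
)
df = pd.read_csv("pp.csv")
print(df["value"].sum())  # prints 4.0
```

Any model whose inputs can be serialized to text files of this shape can, in principle, plug into such a workflow.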
Which I guess leads to a larger question about the focus of mikecalsetup. There clearly seems to be a need to make parameter estimation easier for MIKE SHE models. Is there a way to do this that would also allow MIKE SHE users to access the functionality in pyEMU, and reduce code duplication for more mundane tasks like managing the PEST control file or providing a Python API for various PEST or PEST++ features?
Quality of writing: the paper has some mostly minor grammatical issues, and could benefit from an edit by a native English speaker.
It looks like there are some issues with the reference DOIs, as flagged by @editorialbot. pyEMU should probably be cited:
White, J.T., Fienen, M.N., Doherty, J.E., 2016. A python framework for environmental model uncertainty analysis. Environmental Modelling & Software 85, 217–228. https://doi.org/10.1016/j.envsoft.2016.08.017
Thank you to both reviewers for their careful and thorough work!
@tenemark please let me or the reviewers know if you need clarification on any of the points they've raised.
Hi, I understand the comments and will work on improving the package and manuscript accordingly. Thank you for the helpful suggestions. I will get back to you in a couple of weeks.
Thank you again very much for your thorough review! Following your comments, I have made some revisions, which I will summarize here:

- I have addressed the raised issues on GitHub.
- I have improved testing of the package with additional tests and coverage reporting.
- I have added a conda environment file, which will ease installation of the package. By installing mikecalsetup in this environment, the test file will run successfully.
- I have added some extra examples and moved them to the folder "examples". The examples now also include execution of a forward model run with PEST and OSTRICH. In addition, an example (example_PEST_with_PstFrom.ipynb) shows how users of MIKE SHE can access the functionality of pyEMU and mikecalsetup at the same time.
- I have changed the state-of-the-field description to better convey how this package relates to pyEMU. The focus of mikecalsetup is to set up calibration files for the nested header text file format used in MIKE SHE files, which is not supported by pyEMU. Spatially distributed and time-dependent data found in the dfs file format of MIKE SHE can be handled with pyEMU if they are converted to text files (which we now show in the example "example_PEST_with_PstFrom.ipynb"); mikecalsetup has therefore not been developed to handle these files.
- I have edited the reference list and validated the DOIs.
Sounds great @tenemark! I'll have a look over the next couple days.
@editorialbot generate pdf
📄 Download article proof · View article proof on GitHub
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1016/S0309-1708(02)00092-1 is OK
- 10.1016/j.envsoft.2021.105022 is OK
- 10.1016/j.envsoft.2016.08.017 is OK
MISSING DOIs
- 10.1111/gwat.12360 may be a valid DOI for title: Calibration and uncertainty analysis for complex environmental models
- 10.25080/majora-92bf1922-00a may be a valid DOI for title: Data structures for statistical computing in python
INVALID DOIs
- None
The updated changes look great to me.
@aleaf or @ecomodeller can either of you verify that the installation file and tests work?
@editorialbot generate pdf
📄 Download article proof · View article proof on GitHub
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1016/S0309-1708(02)00092-1 is OK
- 10.1016/j.envsoft.2021.105022 is OK
- 10.1016/j.envsoft.2016.08.017 is OK
MISSING DOIs
- 10.1111/gwat.12360 may be a valid DOI for title: Calibration and uncertainty analysis for complex environmental models
- 10.25080/majora-92bf1922-00a may be a valid DOI for title: Data structures for statistical computing in python
INVALID DOIs
- None
I have verified that the installation file and tests work. As an aside, the coverage reporting didn't work for me, but that's not critical.
From my perspective, this paper is ready to move on to the next steps of post-review, acceptance, etc. @aleaf, @ecomodeller do either of you feel differently?
No, please proceed.
Yes, please proceed.
@editorialbot set <DOI here> as archive
@editorialbot set <version here> as version
@editorialbot generate pdf
@editorialbot check references and ask author(s) to update as needed
@editorialbot recommend-accept
@tenemark The next step is for you to create a GitHub/Zenodo release: https://github.com/tenemark/mikecalsetup/releases
Thank you! I have made a release.
@editorialbot set v0.1 as version
Done! version is now v0.1
Great, next step @tenemark is making a DOI archive in Zenodo/Figshare/Etc. One critical piece of this step is that the archival deposit must have the correct metadata. This includes the title (should match the paper title) and author list (make sure the list is correct and people who only made a small fix are not on it). You may also add the authors' ORCID.
I am unsure how to do this. I have changed the metadata of my Zenodo upload to correspond to what you requested. The DOI is 10.5281/zenodo.8412800. If this is not what you meant, could you please provide a link to guidelines on what to do?
@editorialbot set 10.5281/zenodo.8412800 as archive
Done! archive is now 10.5281/zenodo.8412800
@editorialbot generate pdf
📄 Download article proof · View article proof on GitHub
@editorialbot recommend-accept
Attempting dry run of processing paper acceptance...
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1016/S0309-1708(02)00092-1 is OK
- 10.1016/j.envsoft.2021.105022 is OK
- 10.1016/j.envsoft.2016.08.017 is OK
MISSING DOIs
- 10.1111/gwat.12360 may be a valid DOI for title: Calibration and uncertainty analysis for complex environmental models
- 10.25080/majora-92bf1922-00a may be a valid DOI for title: Data structures for statistical computing in python
INVALID DOIs
- None
Submitting author: @tenemark (Trine Enemark)
Repository: https://github.com/tenemark/mikecalsetup
Branch with paper.md (empty if default branch):
Version: v0.1
Editor: @jsta
Reviewers: @aleaf, @ecomodeller
Archive: 10.5281/zenodo.8412800
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@aleaf & @ecomodeller, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:
@editorialbot generate my checklist
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. For any questions/concerns, please let @jsta know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @ecomodeller
📝 Checklist for @aleaf