Closed editorialbot closed 6 months ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.05 s (870.4 files/s, 95725.2 lines/s)
-------------------------------------------------------------------------------
Language          files   blank   comment   code
-------------------------------------------------------------------------------
Python               12     295       518   1216
Markdown             14     194         0    453
Rust                  3      30       139    416
YAML                  6      25        13    237
Jupyter Notebook      4       0       784    189
TeX                   1      11         0    136
TOML                  2      16         4    119
HTML                  1       0         0     28
Dockerfile            1       3         1     12
-------------------------------------------------------------------------------
SUM:                 44     574      1459   2806
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 521
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.2118/83381-PA is OK
- 10.3390/en11123368 is OK
- 10.1016/j.petrol.2010.05.006 is OK
- 10.1109/MCSE.2011.37 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.25080/Majora-92bf1922-00a is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1016/j.egypro.2013.06.290 is OK
- 10.2118/51793-PA is OK
- 10.2118/95322-PA is OK
MISSING DOIs
- None
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi @mgcooper & @amandersillinois, thank you both again for agreeing to review for JOSS. This is the review issue where we track the peer-review process. Please refer to the first post in this issue, as it does a pretty good job outlining the JOSS review procedure; we do "checklist-driven" reviews, and there is a command in the first post for you to use to generate your own reviewer checklist.
At this time we are asking reviewers to please complete their first set of reviews within 6 weeks. This is flexible; what I (and, I'm sure, @frank1010111) would really appreciate is if you just comment here with roughly when you expect to get to your review, and/or whether you anticipate a delay in reviewing this submission. @mgcooper, I know you'd indicated you are busy after Feb. 8, which is good for us to know.
Please do not hesitate to reach out to me with any questions you might have about the review process. I will be setting up reminders for the bot to ping us all in 3 weeks, just to help make sure this review doesn't drop off anyone's to-do list.
Thanks all, Jay
@editorialbot remind @mgcooper in three weeks
Reminder set for @mgcooper in three weeks
@editorialbot remind @amandersillinois in three weeks
Reminder set for @amandersillinois in three weeks
Hi all, just wanted to check in and ask @mgcooper and @amandersillinois if y'all have any questions about the JOSS review process. Please reach out and ask if so. Thanks!
:wave: @mgcooper, please update us on how your review is going (this is an automated reminder).
:wave: @amandersillinois, please update us on how your review is going (this is an automated reminder).
Hi all, just wanted to check in and ask @mgcooper and @amandersillinois how the reviews are going. Reach out if you've got any questions about the process or need any kind of extension on your review. Right now we're about 2 weeks from when we'd like the reviews to be complete.
Thanks!
I have successfully installed pywaterflood and can run 7-minutes-to-pywaterflood.ipynb, choosing-crm.ipynb, and multiwell-productivity-index.ipynb. However, I can't run /buckley-leverett.ipynb. I opened an issue on pywaterflood.
I've done my best to try to follow the guidelines for review but I do have a few questions and some uncertainty about some of the checklist items.
I'm neither a specialist in well connectivity/groundwater nor a particularly adept programmer -- I'm just a user of numerical models. So I don't believe I'm qualified/able to comment on some things. Specifically: on Functionality - Performance, the article claims that the performance is better than an Excel workbook, but I did not test this claim and don't think I could do so in the time I can devote to this review. On Documentation - Functionality documentation, I don't know what API method documentation is and cannot comment.
Broadly, on Documentation -- I think that it checks the boxes (other than the question above), but I was sort of expecting a walk-through of the different options outside of the annotation of the code itself. For example, CRM takes arguments of tau_selection and constraints, and the options (and the rationale for picking one or the other) are not described in detail, although there is advice about what to pick at the top of crm.py. There is no "user manual" per se, although I appreciate the Python notebook examples, and I do believe that if I wanted to use this for a specific problem, I would be able to at least get started using what is provided. One of the notebooks doesn't work (for the current release) without a separate install, but the developer/author helped me very quickly to get it working, so that doesn't appear to be a big issue.
On Quality of writing/Summary - it's generally reasonable, but I think a bit too terse. As a non-specialist, I have some idea of what problems are being solved, but I think just a few more sentences could make that much clearer. I am hung up on the end of the first sentence -- "geothermal" is an adjective and so requires a noun to modify. A quick edit from the perspective of improving readability for the non-specialist would, in my opinion, improve the document.
Thank you for your comments, @amandersillinois. To address your points:
Does that sound about right?
That sounds excellent to me!
I'm neither a specialist in well connectivity/groundwater nor a particularly adept programmer -- I'm just a user of numerical models. So I don't believe I'm qualified/able to comment on some things. Specifically: on Functionality - Performance, the article claims that the performance is better than an Excel workbook, but I did not test this claim and don't think I could do so in the time I can devote to this review. On Documentation - Functionality documentation, I don't know what API method documentation is and cannot comment.
Thanks for providing your review along with caveats, @amandersillinois. To weigh in a bit: API documentation is basically the explanation associated with individual functions, describing what each function does, what parameters it takes as inputs, and what it returns. This is the API documentation for numpy.mean, for example.
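To make that a bit more concrete with a generic illustration (this function is hypothetical, not part of pywaterflood): in Python, a function's docstring is the raw material for its API documentation. It records the parameters and the return value, and tools like Sphinx render it into reference pages like the numpy.mean one linked above.

```python
def connectivity_ratio(injection, production):
    """Return the ratio of produced to injected volume.

    Parameters
    ----------
    injection : float
        Total injected volume.
    production : float
        Total produced volume.

    Returns
    -------
    float
        The ratio production / injection.
    """
    return production / injection

# help(connectivity_ratio) prints the docstring above --
# that text *is* the API documentation for this function.
print(connectivity_ratio(200.0, 50.0))  # → 0.25
```

Running `help(connectivity_ratio)` in a REPL shows the same text a rendered documentation page would.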
@mgcooper - any chance you've got an update for us?
The suggested documentation improvements have been merged in this pull request: https://github.com/frank1010111/pywaterflood/pull/79
Thanks @frank1010111 - I suspect @mgcooper is not going to complete their review, so I'll be reaching out to other potential reviewers in the meantime, thanks for your understanding and patience.
Thank you for your efforts @elbeejay. Let me know if there's any way I can help.
My apologies for the poor communication on my part. Indeed I was unable to complete the review before departing for vacation. I am still happy to complete the review, but it could be delayed until I return 28 Feb. I will try my best to complete it asap.
10.21105.joss.06191.pdf It was really simple to install pywaterflood and run the examples. I did a bare-bones global install using pip and a "global" install using conda; both went smoothly. I then ran the "simple example" directly in a Python REPL (please see comments below on the simple example), and I ran the .ipynb notebook in an interactive Jupyter session in VS Code. All went smoothly, but I was surprised at how slow the code is. I don't know how computationally intensive the methods underlying the "simple example" are, but I would find it hard to use this software in a research setting (e.g., to estimate parameter uncertainty) if it takes several minutes to run one simple example. Curious if the speed is typical for the field. For reference, the "7-minutes to pywaterflood" notebook ran in about 8 min on my 2019 Quad-Core MacBook Pro with 16 GB RAM. No worries if this is simply what it is, just curious if attention was paid to performance.
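For anyone who wants to put a number on a slow step, a generic way to time a single call using only the standard library (slow_fit here is a hypothetical stand-in for an expensive model fit, not a pywaterflood function):

```python
import time

def slow_fit():
    # Hypothetical stand-in for an expensive model fit.
    total = 0.0
    for i in range(100_000):
        total += i ** 0.5
    return total

start = time.perf_counter()
slow_fit()
elapsed = time.perf_counter() - start
print(f"fit took {elapsed:.3f} s")
```

For a finer-grained breakdown of where the time goes, the standard-library cProfile module can wrap the same call.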
The "simple example" is also a bit underwhelming (although the notebook is much better). It produces no graphics or table to visualize the results, prints nothing to the screen, and provides no feedback to users. The main README says "Finally, run CRM and check the predictions and residuals" - but where is the "check" done? Clearly a user can do what I did - import pyplot and plot q_hat, etc. - but I had no idea what the data was or how to label the plot. All to say: it would be simple to port a plot command from the notebook, or print something to the screen, to provide users with feedback in the REPL or let them copy the simple example into a script.
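The feedback asked for here could be as small as a printed residual summary. A minimal sketch, assuming q_obs and q_hat are NumPy arrays of observed and predicted rates (the names and values are hypothetical, not taken from pywaterflood's example):

```python
import numpy as np

# Hypothetical observed and CRM-predicted production rates.
q_obs = np.array([100.0, 95.0, 92.0, 90.0])
q_hat = np.array([98.0, 96.0, 91.0, 89.0])

residuals = q_obs - q_hat
rmse = np.sqrt(np.mean(residuals ** 2))
print(f"RMSE: {rmse:.3f}")  # a one-line "check" users could copy into a script
```

A line like this at the end of the README example would give REPL users immediate feedback without requiring matplotlib.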
I agree with @amandersillinois that the writing is a bit terse, even for JOSS. Please see the general remarks below and the comments in the attached paper.
General remarks following the JOSS review criteria:
Minor suggestions:
Thank you for your review and suggestions.
You can track my changes to the documentation in the above-linked issue.
Two points that do not have to go into the documentation:
Great, thanks for stepping in to clarify @mgcooper and starting the review - apologies for counting you out!
@frank1010111 it sounds like you are waiting on @mgcooper to merge https://github.com/frank1010111/pywaterflood/pull/81 and then close https://github.com/frank1010111/pywaterflood/issues/80. Is that right?
Yes, I'm looking for a thumbs-up on the pull request saying that I correctly interpreted the suggestions and made acceptable changes.
Okay, I have addressed the review suggestions I've seen so far. Is there anything else I can do?
I am happy with it as is.
Looks good to me.
Great, thank you @amandersillinois and @mgcooper for taking the time to provide constructive reviews!
@amandersillinois would you mind checking off the outstanding check boxes in your reviewer checklist above? Thanks.
@frank1010111, I will review the repository and contents, and then get back to you shortly with next steps
OK - I think I've got all the boxes ticked.
Looks good, thank you!
@editorialbot check references
@editorialbot generate pdf
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.2118/83381-PA is OK
- 10.3390/en11123368 is OK
- 10.1016/j.petrol.2010.05.006 is OK
- 10.1109/MCSE.2011.37 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.25080/Majora-92bf1922-00a is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1016/j.egypro.2013.06.290 is OK
- 10.2118/51793-PA is OK
- 10.2118/95322-PA is OK
MISSING DOIs
- No DOI given, and none found for title: Optimization of reinjection allocation in geotherm...
- No DOI given, and none found for title: Development and application of capacitance-resisti...
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@frank1010111 I'd ask that you consider including the full name for the MPI acronym (I realize you are introducing the module), but still think it could be helpful for the casual reader to see the phrase "multiwell productivity index", and it might help folks find the paper when conducting a search using that language.
I've checked too and it doesn't seem like those two references that the bot flagged have DOIs, so we will do without.
At this time I will generate a post-review checklist below that has a few final steps for you to complete/confirm for me. Once we've finished these steps I'll be able to recommend this paper for publication and an editor-in-chief will give it one final look before publishing it.
@editorialbot set <DOI here> as archive
@editorialbot set <version here> as version
@editorialbot generate pdf
@editorialbot check references
and ask author(s) to update as needed
@editorialbot recommend-accept
@frank1010111 if you can just let me know once you've done a release of the reviewed software with a version number that'd be great. I'll also need the link and DOI associated with the archive you create on zenodo/figshare/whichever archival service you use. Thanks!
I've released the latest version of pywaterflood as v0.3.3. Here is the DOI: https://zenodo.org/doi/10.5281/zenodo.6569905
@editorialbot set v0.3.3 as version
Done! version is now v0.3.3
@editorialbot set 10.5281/zenodo.10815882 as archive
Done! archive is now 10.5281/zenodo.10815882
@editorialbot recommend-accept
Looks good to me @frank1010111, I've recommended your submission for publication and sometime in the next few weeks an Editor in Chief will review it and hopefully officially publish it.
Big thanks again to @amandersillinois and @mgcooper for reviewing for JOSS!
Attempting dry run of processing paper acceptance...
Submitting author: @frank1010111 (Frank Male)
Repository: https://github.com/frank1010111/pywaterflood
Branch with paper.md (empty if default branch):
Version: v0.3.3
Editor: @elbeejay
Reviewers: @mgcooper, @amandersillinois
Archive: 10.5281/zenodo.10815882
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@mgcooper & @amandersillinois, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @elbeejay know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @amandersillinois
📝 Checklist for @mgcooper