Closed editorialbot closed 3 months ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.20 s (335.0 files/s, 237153.2 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
JSON                             1              0              0          33315
Python                          28           1810           3736           3951
Markdown                        23            541              0           1907
SVG                              1              1              1            738
YAML                             9             37             30            304
TeX                              1              0              0            189
INI                              1              9              0             82
TOML                             1              7              2             80
Jupyter Notebook                 2              0           1312             54
make                             1              6              8             15
-------------------------------------------------------------------------------
SUM:                            68           2411           5089          40635
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 1316
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1007/s10953-019-00871-5 is OK
- 10.1016/j.desal.2013.03.015 is OK
- 10.1039/C7NJ03597G is OK
- 10.1016/j.cageo.2011.02.005 is OK
- 10.1016/s0378-3812(02)00178-4 is OK
- 10.1016/j.commatsci.2012.10.028 is OK
- 10.1063/1.4812323 is OK
- 10.1021/je2009329 is OK
- 10.1016/S0016-7037(97)81133-7 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@editorialbot commands
Hello @rkingsbury, here are the things you can ask me to do:
# List all available commands
@editorialbot commands
# Get a list of all editors' GitHub handles
@editorialbot list editors
# Check the references of the paper for missing DOIs
@editorialbot check references
# Perform checks on the repository
@editorialbot check repository
# Adds a checklist for the reviewer using this command
@editorialbot generate my checklist
# Set a value for branch
@editorialbot set joss-paper as branch
# Generates the pdf paper
@editorialbot generate pdf
# Generates a LaTeX preprint file
@editorialbot generate preprint
# Get a link to the complete list of reviewers
@editorialbot list reviewers
@lucydot and reviewers, thank you for agreeing to review! Because some time has passed since my original submission, if possible please review the latest version (v0.11.1) rather than the version listed at the top of the thread.
@rkingsbury no problem, I'll do that version!
@editorialbot generate my checklist doesn't work like this, see https://github.com/openjournals/joss-reviews/issues/6295#issuecomment-1915376354
FYI, editorialbot commands need to be the first thing in a new comment
@editorialbot set v0.11.1 as version
Done! version is now v0.11.1
Hi @rkingsbury @JacksonBurns @orionarcher - how are your reviews going?
@lucydot I think you mean to tag @yuxuanzhuang . (I am the author!)
Thanks for the ping @lucydot, review is ongoing on my end. I will complete it within the allotted time.
Awesome work from @rkingsbury. I encountered no issues installing the package or following along with the examples. This work already appears to be widely used (49 stars) and the buildout of documentation and functionality is sure to make it more useful. I feel this work meets the standards of JOSS and should be accepted.
Installation and testing was seamless. I was able to follow the examples without issues. I tried entering some non-physical solutions and some ions not in the database and was met with reasonable warnings, which I appreciate.
I will add that the review of 'competing' packages was especially thorough! Well done. Paper overall is concise and convincing, and the software seems outstanding in general.
There are a few small errors I stumbled across that should be fixed, but after that this package looks ready!
List of some small things:

- A stray `@` in your paper on this line: https://github.com/KingsburyLab/pyEQL/blob/fd35750721f4c013091cab2c14fee1a5934dc1be/docs/joss/paper.md?plain=1#L66
- `[Sphinx]` on this line of the docs: https://github.com/KingsburyLab/pyEQL/blob/c43b2e6d97b3cfa7c023fe75405e85c9dbbbed67/docs/contributing.md?plain=1#L11
- A stray `(?)` on this line: https://github.com/KingsburyLab/pyEQL/blob/c43b2e6d97b3cfa7c023fe75405e85c9dbbbed67/docs/contributing.md?plain=1#L20

One larger idea, which is out of scope of this review but I thought I might share in case it is helpful (feel free to ignore):
I am sure you are aware of this because of your very thorough requirements directory, but the dependencies are pretty expansive (`pip install pyEQL` in a bare Python 3.9 conda environment installed 94 packages for me). I generated a dependency graph, and it seems that most of the bulk is coming from `maggma` and `pymatgen`. If most of your end users are within the Materials Project world I don't see this being a problem, but it may impact reuse potential for those trying to extend `pyEQL` into other domains. If possible, could these be removed or made optional (to maintain optional perfect compatibility with Materials Project)? A quick search of the `pyEQL` repo shows that `pymatgen` (as also described in the README) is only used to standardize some input data; could this be achieved with another (smaller) package? Similarly for `maggma`: a search (and the README) shows it is used only for serializing and de-serializing. Could you wrap `pymongo` yourself and remove `maggma`?
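For anyone reproducing this kind of dependency audit, the standard library alone can list a package's direct declared requirements (a tool like pipdeptree additionally walks the full tree). A minimal sketch using `importlib.metadata`; the helper name is ours, not part of any package discussed here:

```python
from importlib import metadata

def direct_requirements(package: str) -> list[str]:
    """Return the direct, declared requirements of an installed package.

    Only direct dependencies are listed; building a full tree like
    pipdeptree's would require recursing into each entry.
    """
    reqs = metadata.requires(package) or []  # requires() is None if no deps
    # Skip requirements that only apply under optional extras.
    return [r for r in reqs if "extra ==" not in r]

# e.g. direct_requirements("pyEQL") in an environment where it is installed;
# "pip" is queried here only because it is almost always present.
print(direct_requirements("pip"))
```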
> Installation and testing was seamless. I was able to follow the examples without issues. I tried entering some non-physical solutions and some ions not in the database and was met with reasonable warnings, which I appreciate.
>
> I will add that the review of 'competing' packages was especially thorough! Well done. Paper overall is concise and convincing, and the software seems outstanding in general.
Thank you very much @JacksonBurns ! I appreciate your thorough testing and identifying the documentation issues.
> One larger idea, which is out of scope of this review but I thought I might share in case it is helpful (feel free to ignore): I am sure you are aware of this because of your very thorough requirements directory, but the dependencies are pretty expansive (`pip install pyEQL` in a bare Python 3.9 conda environment installed 94 packages for me). I generated this dependency graph:
Great suggestions. I have thought about this in the abstract, but I did not realize the extent of the dependency bloat until seeing the tree you generated (aside: I was unaware of `pipdeptree`, so thanks for that!). I am a maintainer of `maggma` and I think we should explore more optional dependency groups to manage that, because many of those packages are not needed for basic mongo-style querying. Or, as you suggest, for the specific ways we use `maggma` in `pyEQL`, we might be able to get away with a custom solution.

You are correct that the main purpose of having `pymatgen` in here is so that formulas can be standardized in a way that `pymatgen` will understand, and also to leverage `pymatgen` for chemical informatics such as molecular weights, oxidation states, etc. based on those formulas. There are alternatives, but I know that efforts are underway to make `pymatgen` itself leaner, so I think it's likely I will stick with it in `pyEQL` for the near future.
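To illustrate how small the "custom solution" for serialization could be, here is a stdlib-only sketch in the spirit of the monty/maggma `as_dict`/`from_dict` convention. `SoluteRecord` and its fields are hypothetical stand-ins, not pyEQL's actual schema:

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class SoluteRecord:
    """Hypothetical stand-in for a pyEQL database entry."""
    formula: str
    charge: int
    molecular_weight: float  # g/mol

    def as_dict(self) -> dict:
        # Mimic the monty/maggma "as_dict" convention using only the stdlib.
        d = asdict(self)
        d["@class"] = type(self).__name__
        return d

    @classmethod
    def from_dict(cls, d: dict) -> "SoluteRecord":
        # Drop metadata keys before rebuilding the dataclass.
        return cls(**{k: v for k, v in d.items() if not k.startswith("@")})

record = SoluteRecord(formula="Na+", charge=1, molecular_weight=22.99)
blob = json.dumps(record.as_dict())                   # serialize
restored = SoluteRecord.from_dict(json.loads(blob))   # deserialize
assert restored == record
```

A real replacement would also need the `@module` lookup that monty uses to rebuild arbitrary classes, which is where most of maggma's convenience lives.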
It looks like this is progressing very nicely - thanks for your thorough reviews @orionarcher and @JacksonBurns ✨
@yuxuanzhuang - how is your review going? I can see most items are checked, there are a couple of outstanding points in documentation.
I love the features of this package and even plan to add it to my research toolbox. I've started a separate issue about documentation issues in the repository, which the author is already addressing. Aside from that, the software is well-documented and the tests are abundant.
I do have an additional comment/question: it appears that the package does not support Mac computers equipped with Apple silicon (although it can be installed without any issue). This seems to be due to the `phreeqpython` package lacking support for it:

> s1 = Solution()
> OSError: dlopen(/xxx/lib/python3.10/site-packages/phreeqpython/./lib/viphreeqc.dylib, 0x0006): tried: '/xxx/lib/python3.10/site-packages/phreeqpython/./lib/viphreeqc.dylib' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64'))
Perhaps it's better to document this compatibility issue somewhere?
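Until upstream support lands, one lightweight way to both document and surface the limitation is an import-time platform check. A hypothetical guard for illustration only (not part of pyEQL; the exact remedy wording should be verified):

```python
import platform
import sys

def check_phreeqc_support() -> None:
    """Fail fast if the bundled PHREEQC library cannot load on this machine.

    phreeqpython currently ships an x86_64 viphreeqc binary, so a native
    arm64 Python on Apple silicon fails at dlopen time with the error
    reported above. Raising here gives users an actionable message instead.
    """
    if sys.platform == "darwin" and platform.machine() == "arm64":
        raise RuntimeError(
            "phreeqpython ships an x86_64 library and cannot load under "
            "native arm64 Python; try an x86_64 Python build via Rosetta 2."
        )
```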
I would definitely recommend it to be accepted. Great work!!!
> I love the features of this package and even plan to add it to my research toolbox. I've started a separate issue about documentation issues in the repository, which the author is already addressing. Aside from that, the software is well-documented and the tests are abundant.
>
> I would definitely recommend it to be accepted. Great work!!!
Thank you very much @yuxuanzhuang ! I'm thrilled to hear that you might find `pyEQL` useful in your own research.
> I do have an additional comment/question: It appears that the package does not support Mac computers equipped with Apple silicon (although it can be installed without any issue). This seems to be due to the `phreeqpython` package lacking support for it:
>
> > s1 = Solution()
> > OSError: dlopen(/xxx/lib/python3.10/site-packages/phreeqpython/./lib/viphreeqc.dylib, 0x0006): tried: '/xxx/lib/python3.10/site-packages/phreeqpython/./lib/viphreeqc.dylib' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64'))
>
> Perhaps it's better to document this compatibility issue somewhere?
This is an excellent finding that I had not considered before, thank you for reporting.
@lucydot it looks like reviews are complete. What are the next steps?
Excellent - I can see some of the issues raised as part of the review are not yet addressed, though reading the discussion it seems that these are not acceptance-blockers but suggestions for releases beyond. Great work @rkingsbury and team, and for your reviews @yuxuanzhuang, @orionarcher, @JacksonBurns.
I'm going to generate the post-review checklist - please let me know when each item is complete @rkingsbury.
- `@editorialbot set <DOI here> as archive`
- `@editorialbot set <version here> as version`
- `@editorialbot generate pdf`
- `@editorialbot check references` and ask author(s) to update as needed
- `@editorialbot recommend-accept`
@lucydot all items complete! The version is v0.14.0 and the Zenodo DOI is https://doi.org/10.5281/zenodo.10783921 (for this specific version). As I'm sure you know, Zenodo also provides an "all versions" DOI.
Thanks @rkingsbury
We usually recommend semantic versioning, where a 0.x.x release would suggest the software is in initial development and unstable. Would you consider moving to a major release? To be clear, it is not an acceptance blocker, rather a recommendation. We understand there may be exceptions for various reasons.
@editorialbot set 10.5281/zenodo.8332915 as archive
Done! archive is now 10.5281/zenodo.8332915
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@rkingsbury - the title of the Zenodo release needs to exactly match that in the JOSS paper - can you update?
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1007/s10953-019-00871-5 is OK
- 10.1016/j.desal.2013.03.015 is OK
- 10.1039/C7NJ03597G is OK
- 10.1016/j.cageo.2011.02.005 is OK
- 10.1016/s0378-3812(02)00178-4 is OK
- 10.1016/j.commatsci.2012.10.028 is OK
- 10.1063/1.4812323 is OK
- 10.1021/je2009329 is OK
- 10.1016/S0016-7037(97)81133-7 is OK
MISSING DOIs
- No DOI given, and none found for title: Description of Input and Examples for PHREEQC Vers...
- No DOI given, and none found for title: Ionic Conductivity and Diffusion at Infinite Dilut...
- No DOI given, and none found for title: PyEquIon: A Python Package For Automatic Speciatio...
- No DOI given, and none found for title: PyEquIon: A Python Package For Automatic Speciatio...
- No DOI given, and none found for title: Phreeqpython
- No DOI given, and none found for title: Pint: makes units easy
- No DOI given, and none found for title: Maggma: A files-to-API data pipeline for scientifi...
- No DOI given, and none found for title: The Geochemist's Workbench, Release 17
INVALID DOIs
- None
One more comment ...

Whilst reading through the paper @rkingsbury I've spotted some adjustments to make:

- There are two references for Marcellos, C. F. C. One points to a pre-print, and one to a GitHub repository. If you judge the repository link is required (i.e., important enough that it can't be followed via the pre-print) then I recommend embedding this link in the main text.
- `phreeqpython` and `pint`.
- Replace the `;` on line 31 with `.`
- Use `@citation` instead of `[@citation]`

Once you make the adjustments, please check they are as expected using `@editorialbot generate pdf`.
Hello all, a quick message to say I will be out of office from tomorrow, and will check back here on the 18th.
OK, thanks for letting me know @lucydot ! I like your idea about bumping the version to 1.0 (I was thinking about this even before you posted). There are a few small changes I want to make before I do so, but I'll take care of those soon and let you know when ready.
One question - after I make the requested corrections to the manuscript, approximately how long does final acceptance usually take? I ask because I am going to feature pyEQL in a conference talk the week of the 18th, and I'd love to be able to say it has been accepted.
(If helpful, I can try to make changes to the manuscript and Zenodo title today)
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Submitting author: @rkingsbury (Ryan Kingsbury)
Repository: https://github.com/rkingsbury/pyEQL
Branch with paper.md (empty if default branch): joss
Version: v1.0.0
Editor: @lucydot
Reviewers: @orionarcher, @JacksonBurns, @yuxuanzhuang
Archive: 10.5281/zenodo.8332915
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@orionarcher & @JacksonBurns & @yuxuanzhuang, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @lucydot know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @JacksonBurns
📝 Checklist for @yuxuanzhuang
📝 Checklist for @orionarcher