Closed: editorialbot closed this issue 6 months ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.18 s (388.0 files/s, 167598.5 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
C/C++ Header                     3             29             26          12680
Python                          32           1135           1363           5117
C++                              2            255            192           3224
XML                              1              0              0            589
TeX                              1             17              0            559
Jupyter Notebook                 8              0           2795            538
Markdown                         4             59              0            278
YAML                             1             16             18            121
HTML                             3              6              1             65
Dockerfile                       1              6              5             56
reStructuredText                 9             41             55             47
CSS                              1             11              8             38
SVG                              1              0              0             23
TOML                             1              0              0              3
-------------------------------------------------------------------------------
SUM:                            68           1575           4463          23338
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 684
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1080/23746149.2022.2093129 is OK
- 10.1021/acs.chemrev.0c01111 is OK
- 10.1103/PhysRevB.104.104309 is OK
- 10.1088/1361-648X/ac462b is OK
- 10.1063/5.0106617 is OK
- 10.1103/PhysRevB.108.054312 is OK
- 10.48550/arXiv.2301.03497 is OK
- 10.1038/s42005-023-01297-8 is OK
- 10.1016/j.mtphys.2023.101066 is OK
- 10.1088/1361-648X/aa680e is OK
- 10.1002/adts.201800184 is OK
- 10.7566/JPSJ.92.012001 is OK
- 10.1088/1361-648X/acd831 is OK
- 10.48550/arXiv.2304.06978 is OK
MISSING DOIs
- None
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@Chronum94, @naik-aakash - how are the reviews going? If you have any questions about the process, please ask. @Chronum94 your first reviewer task is to generate the checklist (see info about this at the top of the thread).
Hi @lucydot, I am still reviewing the examples and code functionalities demonstrated. So far it is going well. Only a few more things to check from my end. Then I will make a summary comment here.
Hi @lucydot I expect to be done with the review by the end of this work week (US Eastern time).
Hi @lucydot, I have finished my review now. I feel the calorine package is a worthy addition to the ever-growing set of tools in the fields of computational chemistry, physics, and materials science. The Python interface for model construction and analysis will be helpful in creating workflows, for example, in packages like atomate2 with minimal further work. The code is sufficiently documented with many example use cases; this helps to get familiar with the code functionalities quickly and get started right away with ease. I also used the tools provided to test other NEP models using the example codes, and the code worked very well on these as well.
Thank you, @lucydot, for inviting me to review this code; I very much enjoyed reviewing this package and am seriously considering trying to do more experiments with this tool in the near future.
I have some minor suggestions and opened issues on the source code repository. Some of them have already been addressed by @elindgren very quickly.
Thank you for the review and the nice comments @naik-aakash! I'd be happy to answer your questions. :smile:
While the submitting author @elindgren has made a substantial contribution to this code, the list of contributors on the repository has some more names; I could not figure out what exactly was their contribution from the repo and why they are not included on the paper. Maybe it is something the authors can clarify.
Everyone who has made substantial contributions is on the paper. The exceptions are people who either made small contributions, had no long-term stake in the development, or asked not to be included as they felt they hadn't contributed sufficiently to be a co-author.
I am interested in knowing from the authors where NEP stands on the ladder among the recent MLIP potentials like CHGNet, MACE, and M3GNet for single and multicomponent systems. Any insights on these aspects would be great!
NEP is an approach that focuses on maximizing accuracy without sacrificing computational efficiency, and fares well against many other MLPs. See for instance this paper where we present and discuss the NEP approach in general, and this preprint where NEP is compared to SA-GPR for simulating IR and Raman spectra.
How hard would it be to build a universal NEP potential?
We have explored this question to some extent in our recent preprint where we develop a NEP for up to 16 elements, with applications in high-entropy alloys.
Hope this clarifies things!
I'm done with the review. The package looks well-designed and quite good. It has a clear problem statement in mind and in design (to run and analyse NEP models) and the first-class support for the somewhat-de-facto standard of an ASE calculator is quite appealing in terms of barrier to entry. The supporting literature appears to be well-founded and the documentation is quite complete. The authors have put in sincere and long-term effort into the codebase, and both the code quality and documentation quality reflects this.
There are some minor changes that I have suggested to the tutorial in the documentation which are by no means exclusive suggestions but more just pointing out minor things that can be improved to make the package more accessible to a new user. They almost exclusively focus on making the code in the documentation more amenable to 'I copy-pasted it, I want it to run.'
The package works as intended (for the parts I can test on my own hardware; unfortunately not the GPUNEP calculator). With the minor changes to the documentation suggested in the issue linked above, or even a good-faith commitment to address the issues in a timely manner, it is okay from my side.
Other questions I had in mind have (very fortuitously) already been answered in the above response by @elindgren for which I thank the author (and @naik-aakash for asking the questions).
Thanks @elindgren ! It certainly does clarify things and will surely check out the sources shared 😃
Thank you for the nice review and comments @Chronum94! I'm currently addressing your and @naik-aakash's suggestions for improvement, and will let you know here once it is done.
@naik-aakash @Chronum94 I've addressed your comments and suggestions in this MR.
@naik-aakash I've also added a sentence to the draft about modifying NEP models via calorine.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Excellent @elindgren !
And thank you @naik-aakash and @Chronum94 for donating your expertise - and for enabling a thoughtful and timely review process :)
All checkboxes are ticked, and I can see the points raised have been addressed in latest MR, so we will proceed to the final stage.
I'll generate the post-review checklist - @elindgren please let me know when your items are complete.
@editorialbot set <DOI here> as archive
@editorialbot set <version here> as version
@editorialbot generate pdf
@editorialbot check references
and ask author(s) to update as needed
@editorialbot recommend-accept
@editorialbot generate pdf
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1080/23746149.2022.2093129 is OK
- 10.1021/acs.chemrev.0c01111 is OK
- 10.1103/PhysRevB.104.104309 is OK
- 10.1088/1361-648X/ac462b is OK
- 10.1063/5.0106617 is OK
- 10.1103/PhysRevB.108.054312 is OK
- 10.48550/arXiv.2301.03497 is OK
- 10.1038/s42005-023-01297-8 is OK
- 10.1016/j.mtphys.2023.101066 is OK
- 10.1088/1361-648X/aa680e is OK
- 10.1002/adts.201800184 is OK
- 10.7566/JPSJ.92.012001 is OK
- 10.1088/1361-648X/acd831 is OK
- 10.48550/arXiv.2304.06978 is OK
MISSING DOIs
- None
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot generate pdf
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1080/23746149.2022.2093129 is OK
- 10.1021/acs.chemrev.0c01111 is OK
- 10.1103/PhysRevB.104.104309 is OK
- 10.1088/1361-648X/ac462b is OK
- 10.1063/5.0106617 is OK
- 10.1103/PhysRevB.108.054312 is OK
- 10.1021/acs.jpcc.3c01542 is OK
- 10.1038/s42005-023-01297-8 is OK
- 10.1016/j.mtphys.2023.101066 is OK
- 10.1088/1361-648X/aa680e is OK
- 10.1002/adts.201800184 is OK
- 10.7566/JPSJ.92.012001 is OK
- 10.1088/1361-648X/acd831 is OK
- 10.1021/acsnano.3c09717 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Great! Will do! I've checked the Orcids and updated two of the references in the paper which have gone from preprints to published articles. Will let you know here once we've made a new release including the latest MR.
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi @lucydot, we've merged the remaining issues and made a new release of calorine. The new version is 2.2, associated with the following Zenodo record: https://doi.org/10.5281/zenodo.10689789
I've checked that both the authors and the license match between the paper and the repository/Zenodo record. I think that should be everything from our side, right?
There seems to be an issue with the recent release we made (version 2.2), I'll let you know here once it is resolved. So please don't mark version 2.2 as the one in the paper :rofl:
Thanks for the updates @elindgren - I'll hold off with my final checks until you let me know here that the issue is resolved. You will also need to update the Zenodo record accordingly and give me the new DOI.
Hey @lucydot we've fixed the issue now. The new version of calorine is 2.2.1, which has the following Zenodo DOI: https://zenodo.org/records/10723374.
@editorialbot set v2.2.1 as version
Done! version is now v2.2.1
@editorialbot set 10.5281/zenodo.10723374 as archive
Done! archive is now 10.5281/zenodo.10723374
@elindgren - I'm just reading through the archive. The "citation" and "published in" details point to the GPUMD package (https://pubs.aip.org/aip/jcp/article-abstract/157/11/114801/2841888/GPUMD-A-package-for-constructing-accurate-machine). Is this an oversight or deliberate? My expectation would be that this metadata is updated to point back to JOSS once the JOSS paper is published (so the JOSS paper points to the Zenodo archive, and vice versa).
@editorialbot generate pdf
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1080/23746149.2022.2093129 is OK
- 10.1021/acs.chemrev.0c01111 is OK
- 10.1103/PhysRevB.104.104309 is OK
- 10.1088/1361-648X/ac462b is OK
- 10.1063/5.0106617 is OK
- 10.1103/PhysRevB.108.054312 is OK
- 10.1021/acs.jpcc.3c01542 is OK
- 10.1038/s42005-023-01297-8 is OK
- 10.1016/j.mtphys.2023.101066 is OK
- 10.1088/1361-648X/aa680e is OK
- 10.1002/adts.201800184 is OK
- 10.7566/JPSJ.92.012001 is OK
- 10.1088/1361-648X/acd831 is OK
- 10.1021/acsnano.3c09717 is OK
MISSING DOIs
- None
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Yes, the plan is to update the metadata to point to the JOSS as soon as we have a DOI here. 😊
@editorialbot recommend-accept
Attempting dry run of processing paper acceptance...
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1080/23746149.2022.2093129 is OK
- 10.1021/acs.chemrev.0c01111 is OK
- 10.1103/PhysRevB.104.104309 is OK
- 10.1088/1361-648X/ac462b is OK
- 10.1063/5.0106617 is OK
- 10.1103/PhysRevB.108.054312 is OK
- 10.1021/acs.jpcc.3c01542 is OK
- 10.1038/s42005-023-01297-8 is OK
- 10.1016/j.mtphys.2023.101066 is OK
- 10.1088/1361-648X/aa680e is OK
- 10.1002/adts.201800184 is OK
- 10.7566/JPSJ.92.012001 is OK
- 10.1088/1361-648X/acd831 is OK
- 10.1021/acsnano.3c09717 is OK
MISSING DOIs
- No DOI given, and none found for title: PyNEP
- No DOI given, and none found for title: Gpyumd
INVALID DOIs
- None
:wave: @openjournals/pe-eics, this paper is ready to be accepted and published.
Check final proof :point_right::page_facing_up: Download article
If the paper PDF and the deposit XML files look good in https://github.com/openjournals/joss-papers/pull/5095, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept
I've just looked up the two titles PyNEP and Gpyumd flagged by editorialbot and cannot find DOIs associated with either project, so I do not think any adjustment to the paper is needed.
Alright, perfect! What is the next step? Is it you or I who runs @ editorialbot accept (intentionally misspelt so as not to trigger the bot)?
It's out of our hands now @elindgren - one of the editors-in-chief will do their final checks. You can sit back, relax and run a NEP calculation :)
Submitting author: @elindgren (Eric Lindgren)
Repository: https://gitlab.com/materials-modeling/calorine
Branch with paper.md (empty if default branch): joss-paper
Version: v2.2.1
Editor: @lucydot
Reviewers: @Chronum94, @naik-aakash
Archive: 10.5281/zenodo.10723374
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@Chronum94 & @naik-aakash, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @lucydot know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @naik-aakash
📝 Checklist for @Chronum94