Closed by editorialbot 1 year ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88  T=0.05 s (418.5 files/s, 112334.6 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
C++                              3            702            544           2174
Markdown                         9            130              0            479
TeX                              1             39              0            449
C/C++ Header                     3            166             96            309
Python                           2             52             46            137
Jupyter Notebook                 1              0            676             61
Bourne Shell                     3             27              8             56
XML                              1              7              8              7
-------------------------------------------------------------------------------
SUM:                            23           1123           1378           3672
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 2584
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1126/science.1127647 is OK
- 10.1073/pnas.1517384113 is OK
- 10.1126/science.1165893 is OK
- 10.1016/j.physd.2013.09.002 is OK
- 10.1016/S0038-1098(96)00718-1 is OK
- 10.1143/JPSJ.78.041005 is OK
- 10.1017/CBO9781139167864 is OK
- 10.1002/jcc.540040211 is OK
- 10.2172/10176421 is OK
- 10.1038/nature14956 is OK
- 10.1557/mrs.2011.30 is OK
- 10.1016/j.jsb.2006.10.023 is OK
- 10.1098/rsta.2008.0219 is OK
- 10.1038/s43588-021-00034-x is OK
- 10.1007/978-3-319-01905-5_60-2 is OK
- 10.1088/1361-651x/ab7150 is OK
- 10.1007/978-3-540-28650-9_4 is OK
- 10.1007/978-0-387-84858-7 is OK
- 10.1016/0079-6816(93)90013-l is OK
- 10.1093/acprof:oso/9780199652952.001.0001 is OK
- 10.1021/acsomega.8b01393 is OK
- 10.5281/zenodo.7897824 is OK
- 10.1080/01621459.2017.1285773 is OK
MISSING DOIs
- 10.2139/ssrn.4455789 may be a valid DOI for title: SDYN-GANs: Adversarial Learning Methods for Multistep Generative Models for General Order Stochastic Dynamics
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
If you have any questions about the review process just ping me here please!
Thanks for reviewing.
On Fri, Jul 14, 2023, 9:01 AM JColdstream @.***> wrote:
Review checklist for @JColdstream (https://github.com/JColdstream)
Conflict of interest
- I confirm that I have read the JOSS conflict of interest (COI) policy https://github.com/openjournals/joss/blob/master/COI.md and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.
Code of Conduct
- I confirm that I read and will adhere to the JOSS code of conduct https://joss.theoj.org/about#code_of_conduct.
General checks
- Repository: Is the source code for this software available at https://github.com/atzberg/mlmod?
- License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved https://opensource.org/licenses/alphabetical software license?
- Contribution and authorship: Has the submitting author (@atzberg, https://github.com/atzberg) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
- Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines https://joss.readthedocs.io/en/latest/submitting.html#substantial-scholarly-effort
- Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
- Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
- Human and animal research: If the paper contains original research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy https://joss.readthedocs.io/en/latest/policies.html?highlight=animal#joss-policies? If the paper contains no such data, please check this item.
Functionality
- Installation: Does installation proceed as outlined in the documentation?
- Functionality: Have the functional claims of the software been confirmed?
- Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)
Documentation
- A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
- Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
- Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
- Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
- Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
- Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support
Software paper
- Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
- A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
- State of the field: Do the authors describe how this software compares to other commonly-used packages?
- Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
- References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax https://pandoc.org/MANUAL.html#extension-citations?
Did everything install alright with the package? Any updates?
On Ubuntu 22.04:
The installation worked nicely with the python package.
Installing it from source was also OK, the compilation instructions in ./src/INSTALL.md are good. There are also some (old?) instructions in ./lib/mlmod/INSTALL.md which could be updated to be the same as ./src/INSTALL.md.
The only issue was that I had to edit the environment variable LD_LIBRARY_PATH to get it to run, but that could just be a product of where I installed the dependencies.
"./lmp_mlmod_serial: error while loading shared libraries: libmlmod.so: cannot open shared object file: No such file or directory"
and
"./lmp_mlmod_serial: error while loading shared libraries: libtorch_cpu.so: cannot open shared object file: No such file or directory"
These were fixed by pointing to them at runtime with:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/mlmod/:/path/to/libtorch/lib/
@atzberg The example runs and gives me a nice trajectory, but have you got any checks to make sure the code is actually giving the right answer? Either something similar to pytest or something like an example calculation/dataset to compare to maybe? Just looking for something for the Automated tests section in the review.
Other things: I really like the little intro video on YouTube. LAMMPS has a new citation that they prefer to be used (https://doi.org/10.1016/j.cpc.2021.108171) rather than the old Plimpton 1995 one.
Looks like things are moving along nicely here. Let me know if you need anything from me.
Thanks for this feedback and comments. Working on the above items.
I'll try to organise my recommendations into relevant sections.
The installation using pip is really simple, but it contains an older version of LAMMPS (29 Oct 2020).
If possible, it would be preferable to upload the wheel file to pypi.org so that users could simply pip install mlmod.
Compiling MLMOD from source has quite a few steps, and while it's not overly complicated, I would not recommend it for anyone not used to compiling software with (mildly) complex dependencies. It could be made easier if there was a more detailed example script containing all the commands and some example paths. In Ubuntu 22.04, FFTW3 (libfftw3-dev), LIBLAPACKE (liblapacke-dev), and tinyxml2 (libtinyxml2-dev) are available to install as packages, leaving the user to install only libtorch (which is very simple) and the MLMOD-patched version of LAMMPS.
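For reference, on Ubuntu 22.04 the dependency setup described above might look roughly like the following sketch. The libtorch version/URL and install locations are example assumptions, not instructions from the MLMOD docs; check pytorch.org for the current CPU libtorch build.

```shell
# Build dependencies available as Ubuntu packages
sudo apt-get install libfftw3-dev liblapacke-dev libtinyxml2-dev

# Fetch and unpack a CPU build of libtorch (version/URL is an example;
# see pytorch.org for the current download link)
wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.0.1%2Bcpu.zip
unzip libtorch-cxx11-abi-shared-with-deps-2.0.1+cpu.zip -d "$HOME/opt"

# Make the libtorch shared libraries findable at runtime
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$HOME/opt/libtorch/lib"
```

That leaves only the MLMOD-patched version of LAMMPS to build, as described in the compilation instructions.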
With the recent change in LAMMPS development to provide newbie-friendly cmake compilation instructions rather than make (https://docs.lammps.org/Build_cmake.html#advantages-of-using-cmake), it would be useful to provide instructions for compiling MLMOD using cmake.
Furthermore, having a shell script that copies all the files MLMOD needs (src/USER-MLMOD, lib, MAKE) to a user-specified location where the LAMMPS source files are would remove several steps and sources of potential error.
Finally, consider submitting the USER-MLMOD package to the LAMMPS repository, so that it may be included in the general release of LAMMPS.
On the main README, you should link the sentence "For more information on other ways to install or compile the package, please see the documentation page." to the src/INSTALL.md file.
I agree with @JColdstream here, some tests to make sure that the package is returning expected results would be very useful. It could help identify compatibility issues with certain LAMMPS or library versions, should they exist.
I think this needs to be created. At a minimum, there needs to be documentation on the LAMMPS commands that MLMOD introduces, and all possible parameters they accept. Ideally, these would follow the same structure that the LAMMPS documentation already has, so that the familiar structure would help users already familiar with LAMMPS.
Overall well written, I have only found a few minor typos (I'm providing the lines as numbered on the PDF created by the bot above):
- accomodated, line 32
- accomodate, line 59
- simiplified, line 93
- Nielsen:2000, line 110 (missing @ to make it a reference, I believe)
Thanks for these comments and feedback. I will work on these items and update the repo soon. -- Paul
Thank you @RuiApostolo and @JColdstream for your reviews. I just wanted to give everyone a heads-up that I'll be travelling until 11th September and won't be able to look at this again until then. Once you are happy with the changes @atzberg made please let me know and I'll process the submission further when I am back in the office.
Thanks for letting me know. I hope you enjoy your travels. I'm making steady progress on the items. I have an upcoming review with a 9/15 deadline in which I'm hoping to include this work, so if everything looks good I hope we can get things processed for JOSS before that date. Thanks again everyone for your feedback and comments. I'm going through the list and will post the updated items soon.
Yes we should be able to make that happen!
Thanks! Making steady progress on the items.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@RuiApostolo and @JColdstream
Thanks for the feedback on the package.
I wanted to let you know I just pushed updates of the mlmod git repo which should give the latest version of things. I'll write some more soon on the detailed items changed and items added in response to the comments above.
As a short summary, this includes a set of documentation pages, new scripts to simplify installation using the pre-compiled binaries, new scripts to help in compiling the package from source, and added codes for unit testing the package. The documentation pages also include discussions of installation and compilation, information on the mlmod C++ and python APIs, and the package usage. The pre-compiled binaries are also now for the lammps release of Aug 2, 2023. One challenge concerned handling the CXX11 ABI changes in the builds while keeping backward compatibility with manylinux for pip installs. I also made edits to the paper with the requested revisions.
I'll write some more details soon on the items. I just wanted to mention updates are now pushed to the git repo.
Thanks again.
@JColdstream (detailed responses to comments below)
"On Ubuntu 22.04: The installation worked nicely with the python package.
Installing it from source was also OK, the compilation instructions in ./src/INSTALL.md are good. There are also some (old?) instructions in ./lib/mlmod/INSTALL.md which could be updated to be the same as ./src/INSTALL.md."
We have now created centralized documentation pages for installing the package and for compiling from the source code. We also had to grapple with some CXX11 ABI issues when updating to the latest release of lammps (Aug 2, 2023). We have put together helper scripts which automate much of the build process. We also re-organized the Makefiles, clarifying how to specify the library dependencies, paths, and other settings. For clarity and robustness, these now include special flags for the ABI and other settings. The new makefiles can be found in the $mlmod_git/src directories and are discussed on the documentation pages linked in the directory $mlmod_git/doc.
The documentation pages now also discuss two approaches to building the package: (i) using the automated scripts, and (ii) performing each of the build steps manually. Again, for the typical user, much of the compilation process is not needed, since they can use the pre-compiled binaries and 'pip install -U (mlmod-pkg).whl'.
As for PyPI and pre-compiled binaries, there is a 100MB limit, so we are currently unable to host our wheels (.whl files) there; we instead host the .whl files on our own website.
We now give detailed instructions on the GitHub page and documentation pages for this installation. To help further, we also put together a quick_install.py script which automates the installation process. The script can be found in the home directory $mlmod_git/.
"The only issue was I had to edit the environment variable LD_LIBRARY_PATH to get it to run, but could just be a product of where I installed the dependencies.
"./lmp_mlmod_serial: error while loading shared libraries: libmlmod.so: cannot open shared object file: No such file or directory"
and
"./lmp_mlmod_serial: error while loading shared libraries: libtorch_cpu.so: cannot open shared object file: No such file or directory"
Was fixed by pointing to them at runtime with: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/mlmod/:/path/to/libtorch/lib/"
To help mitigate these types of issues with shared library locations, versions, and other settings, our pre-compiled binaries use an alternative approach based on rpaths and local copies of the needed libraries. When building the package manually, we have also now re-organized the makefiles and path variables to make the dependencies and locations of the shared libraries more explicit. Fortunately, for distributions such as Ubuntu, the needed libraries get installed in rather standard locations already on the search path.
For mlmod-lammps, we have now added comments to the documentation pages on the paths you mention above and discuss how to adjust LD_LIBRARY_PATH. The new documentation pages can be found via the links in the directory $mlmod_git/doc.
"The example runs and gives me a nice trajectory, but have you got any checks to make sure the code is actually giving the right answer? Either something similar to pytest or something like an example calculation/dataset to compare to maybe? Just looking for something for the Automated tests section in the review."
Thanks for this feedback. We have now put together pytests for the package. For example, for configurations of particles we compare the mlmod-computed mobility responses with analytic mobility responses. These tests can be found in the directory $mlmod_git/tests.
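For readers unfamiliar with this style of check, a minimal sketch of the idea is below. It is not the package's actual test suite: `compute_mobility` is a hypothetical stand-in for the model-evaluated mobility response, and the analytic reference used here is the Stokes self-mobility M = I/(6*pi*eta*a) of an isolated sphere of radius a in a fluid of viscosity eta.

```python
import numpy as np

def stokes_self_mobility(eta, a):
    """Analytic self-mobility M = I/(6*pi*eta*a) for an isolated
    sphere of radius a in a fluid of viscosity eta (Stokes drag)."""
    return np.eye(3) / (6.0 * np.pi * eta * a)

def compute_mobility(eta, a):
    """Hypothetical stand-in for the model-computed mobility: apply a
    unit force along each axis and record the velocity response, which
    gives the columns of the mobility tensor M (v = M f)."""
    M = np.zeros((3, 3))
    for i in range(3):
        f = np.zeros(3)
        f[i] = 1.0
        v = stokes_self_mobility(eta, a) @ f  # placeholder "simulation"
        M[:, i] = v
    return M

def test_self_mobility_matches_analytic():
    eta, a = 1.0, 0.5
    M_model = compute_mobility(eta, a)
    M_exact = stokes_self_mobility(eta, a)
    # the computed and analytic responses should agree to tight tolerance
    assert np.allclose(M_model, M_exact, rtol=1e-8)

test_self_mobility_matches_analytic()
```

In the real tests the middle step would be replaced by the mlmod-computed response for a given particle configuration, with a looser tolerance reflecting the model's accuracy.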
"Other things: I really like the little intro video on YouTube."
Thanks, we have now included a link to this overview in the documentation pages.
"LAMMPS has a new citation that they prefer to be used (https://doi.org/10.1016/j.cpc.2021.108171) rather than the old Plimpton 1995 one."
Thanks for bringing this to my attention. We have now revised our paper with the updated citation and DOI for the new lammps paper.
In addition to the items above, we have also put together detailed documentation pages. These include information on package installation, compiling and configuring the package, and example usage. The documentation also discusses some of the C++ API and the related python interface to the package via lammps. We also revised the examples, which now simplify the process of generating ml models and setting up simulations.
Thanks again for this feedback on the package.
@RuiApostolo (detailed responses to comments below)
I'll try to organise my recommendations in relevant sections
Installation
pip
The installation using pip is really simple, but it contains an older version of LAMMPS (29 Oct 2020). If possible, it would be preferable to upload the wheel file to pypi.org so that users could simply pip install mlmod.
We have now packaged pre-compiled binaries for the latest release of lammps (Aug 2, 2023). This presented some build challenges given the CXX11 ABI and the aim of backward compatibility with manylinux distributions. We have also simplified the installation process. This includes an automated script quick_install.py available in the home directory $mlmod_git/ and step-by-step instructions on the homepage and documentation pages. As for PyPI, we are running into the 100MB limit for direct distribution there, so we are hosting the pip wheels (.whl) on our website. Our new simplified installation process now only requires a single command like: pip install (package-url-name).
Compiling
Compiling MLMOD from source has quite a few steps, and while it's not overly complicated, I would not recommend it for anyone not used to compiling software with (mildly) complex dependencies. It could be made easier if there was a more detailed example script containing all the commands and some example paths. In Ubuntu 22.04, FFTW3 (libfftw3-dev), LIBLAPACKE (liblapacke-dev), and tinyxml2 (libtinyxml2-dev) are available to install as packages, leaving the user to install only libtorch (which is very simple) and the MLMOD-patched version of LAMMPS.
We have now re-organized and simplified the build process. We also put together scripts to help automate many of the build steps, including copying the files to the needed locations in lammps and calling the needed makefiles for the library and overall build. We also put together detailed documentation pages discussing two approaches to building the package: (i) using our automated scripts, and (ii) manually performing the build steps. Our re-organized makefiles and documentation pages also clarify how the build relates to the dependent libraries and how these can be adjusted. We also added flags for more robustly handling different ABIs and system settings. The build process for compiling from source is discussed in the documentation pages in the directory $mlmod_git/doc.
With the recent change in LAMMPS development to provide newbie-friendly cmake compilation instructions rather than make, it would be useful to provide instructions for compiling MLMOD using cmake. Furthermore, having a shell script that copies all the files MLMOD needs (src/USER-MLMOD, lib, MAKE) to a user-specified location where the LAMMPS source files are would remove several steps and sources of potential error.
Thanks for this suggestion to further abstract and automate the build process with cmake. There are a number of subtle dependencies in lammps and configuration scripts we currently utilize for the package during builds. We will look into abstracting this further to utilize the cmake paradigm to generate the Makefiles and run the build process in future releases.
Finally, consider submitting the USER-MLMOD package to the LAMMPS repository, so that it may be included in the general release of LAMMPS.
This is also something we plan to look into. Thanks for mentioning.
README
On the main README, you should link the sentence "For more information on other ways to install or compile the package, please see the documentation page." to the src/INSTALL.md file.
We have now put together centralized documentation pages. This information is now located there with details on the installation process, both using the pre-compiled binaries and for building the package from source codes. We also put together scripts for helping to automate some steps of installation and the build process.
Tests
I agree with @JColdstream here, some tests to make sure that the package is returning expected results would be very useful. It could help identify compatibility issues with certain LAMMPS or library versions, should they exist.
We have now put together pytests for the codes. For example, for configurations of particles we compare the mlmod-computed mobility responses with analytic mobility responses. These tests can be found in the directory $mlmod_git/tests.
Software documentation
I think this needs to be created. At a minimum, there needs to be documentation on the LAMMPS commands that MLMOD introduces, and all possible parameters they accept. Ideally, these would follow the same structure that the LAMMPS documentation already has, so that the familiar structure would help users already familiar with LAMMPS.
We have now put together documentation pages in a style similar to lammps. We also discuss usage of the mlmod package, which is cast in the style of a "lammps fix," along with some discussion of the C++ APIs and the python interfaces to the mlmod-lammps package. These can be found in the directory $mlmod_git/doc.
Paper
Overall well written, I have only found a few minor typos (I'm providing the lines as numbered on the PDF created by the bot above):
- accomodated, line 32
- accomodate, line 59
- simiplified, line 93
- Nielsen:2000, line 110 (missing @ to make it a reference, I believe)
Thanks. We have now revised the paper taking the comments above into account. We have also resolved the DOI-related references.
In addition to the items above, we have also revised the examples, which now simplify the process of generating ml models and setting up simulations. We also put together scripts for automating many of the steps for installing and building the package.
Thanks again for this feedback on the package.
@atzberg Incredible work on the changes, I think the new scripts for installing and compiling will make things much easier for anyone wanting to use MLMOD. I didn't know about the 100MB limit on pypi, your solution makes sense.
@ppxasjsm From my end, this looks good, I recommend acceptance.
@atzberg Good stuff! Compilation is much smoother now and the tests are simple and easy to use. The new documentation also looks good.
@ppxasjsm Looks good to go.
Thanks everyone for the comments and approving the package!
@ppxasjsm Hope your travels are going well. When you are back in the office and get the chance, just let me know the next step (or if there are any items to handle in the meantime).
Hoping we can wrap things up before 9/15, which is a cut-off date on my end for an upcoming review.
Thanks again!
@editorialbot set <DOI here> as archive
@editorialbot set <version here> as version
@editorialbot generate pdf
@editorialbot check references
and ask author(s) to update as needed
@editorialbot recommend-accept
Hi @atzberg, can you please complete this checklist:
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1126/science.1127647 is OK
- 10.1073/pnas.1517384113 is OK
- 10.1126/science.1165893 is OK
- 10.1016/j.physd.2013.09.002 is OK
- 10.1016/S0038-1098(96)00718-1 is OK
- 10.1143/JPSJ.78.041005 is OK
- 10.1017/CBO9781139167864 is OK
- 10.1002/jcc.540040211 is OK
- 10.2172/10176421 is OK
- 10.1038/nature14956 is OK
- 10.1557/mrs.2011.30 is OK
- 10.1016/j.jsb.2006.10.023 is OK
- 10.1098/rsta.2008.0219 is OK
- 10.1038/s43588-021-00034-x is OK
- 10.1007/978-3-319-01905-5_60-2 is OK
- 10.1088/1361-651x/ab7150 is OK
- 10.1007/978-3-540-28650-9_4 is OK
- 10.1007/978-0-387-84858-7 is OK
- 10.1016/0079-6816(93)90013-l is OK
- 10.1093/acprof:oso/9780199652952.001.0001 is OK
- 10.1021/acsomega.8b01393 is OK
- 10.5281/zenodo.7897824 is OK
- 10.1080/01621459.2017.1285773 is OK
- 10.1016/S1367-5788(00)90017-8 is OK
- 10.1016/j.cpc.2021.108171 is OK
- 10.1007/s002050050183 is OK
MISSING DOIs
- 10.2139/ssrn.4455789 may be a valid DOI for title: SDYN-GANs: Adversarial Learning Methods for Multistep Generative Models for General Order Stochastic Dynamics
INVALID DOIs
- None
The auto-generated suggestion above is incorrect; this is a preprint posted on arXiv. I have already included the correct URL for that paper.
All the DOI references in the paper are now in place.
Thank you! I'll get back to you with any additional changes/suggestions in the next couple of days if needed. I am aware of the tight timeline.
Thank you! I'll also get the other items on the author checklist done soon. I really appreciate it!
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi @atzberg, can you please complete this checklist:
- [x] Double check authors and affiliations (including ORCIDs)
- [x] Make a release of the software with the latest changes from the review and post the version number here. This is the version that will be used in the JOSS paper.

  Release for package version 1.0.1 has been posted as tag v1.0.1.
- [x] Archive the release on Zenodo/figshare/etc and post the DOI here.

  DOI: 10.5281/zenodo.8327516
  URL: https://doi.org/10.5281/zenodo.8327516
- [x] Make sure that the title and author list (including ORCIDs) in the archive match those in the JOSS paper.
- [x] Make sure that the license listed for the archive is the same as the software license.
I have updated the GitHub with these items. Just let me know if you need anything else.
The checklist has now been completed.
Thanks everyone for the comments and approving the package! @ppxasjsm When you are back in the office, and get the chance, just let me know the next step.
Thanks again!
I hope your travels went well; I imagine you are catching up on things after returning to the office. I also noticed our time zones are offset by about 8 hours. @ppxasjsm Just let me know if there is anything else I should do here. Looking forward to communicating this work soon.
Thanks again!
@editorialbot set 10.5281/zenodo.8327516 as archive
Done! archive is now 10.5281/zenodo.8327516
@editorialbot set 1.0.1 as version
Done! version is now 1.0.1
@editorialbot generate pdf
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1126/science.1127647 is OK
- 10.1073/pnas.1517384113 is OK
- 10.1126/science.1165893 is OK
- 10.1016/j.physd.2013.09.002 is OK
- 10.1016/S0038-1098(96)00718-1 is OK
- 10.1143/JPSJ.78.041005 is OK
- 10.1017/CBO9781139167864 is OK
- 10.1002/jcc.540040211 is OK
- 10.2172/10176421 is OK
- 10.1038/nature14956 is OK
- 10.1557/mrs.2011.30 is OK
- 10.1016/j.jsb.2006.10.023 is OK
- 10.1098/rsta.2008.0219 is OK
- 10.1038/s43588-021-00034-x is OK
- 10.1007/978-3-319-01905-5_60-2 is OK
- 10.1088/1361-651x/ab7150 is OK
- 10.1007/978-3-540-28650-9_4 is OK
- 10.1007/978-0-387-84858-7 is OK
- 10.1016/0079-6816(93)90013-l is OK
- 10.1093/acprof:oso/9780199652952.001.0001 is OK
- 10.1021/acsomega.8b01393 is OK
- 10.5281/zenodo.8327516 is OK
- 10.1080/01621459.2017.1285773 is OK
- 10.1016/S1367-5788(00)90017-8 is OK
- 10.1016/j.cpc.2021.108171 is OK
- 10.1007/s002050050183 is OK
MISSING DOIs
- 10.2139/ssrn.4455789 may be a valid DOI for title: SDYN-GANs: Adversarial Learning Methods for Multistep Generative Models for General Order Stochastic Dynamics
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Submitting author: @atzberg (Paul J. Atzberger)
Repository: https://github.com/atzberg/mlmod
Branch with paper.md (empty if default branch): main
Version: 1.0.1
Editor: @ppxasjsm
Reviewers: @jcoldstream, @RuiApostolo
Archive: 10.5281/zenodo.8327516
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@jcoldstream & @RuiApostolo, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @ppxasjsm know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @RuiApostolo
📝 Checklist for @JColdstream