openjournals / joss-reviews

Reviews for the Journal of Open Source Software
Creative Commons Zero v1.0 Universal
703 stars · 36 forks

[REVIEW]: PyLogGrid: A Python package for fluid dynamics on logarithmic lattices #6439

Open editorialbot opened 6 months ago

editorialbot commented 6 months ago

Submitting author: @hippalectryon-0 (Amaury Barral)
Repository: https://github.com/hippalectryon-0/pyloggrid
Branch with paper.md (empty if default branch): joss
Version: 2.3.0
Editor: @philipcardiff
Reviewers: @slaizet, @marlonsmathias, @weiwangstfc
Archive: Pending

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/002028861bf61a4a81b3edde40db3d63"><img src="https://joss.theoj.org/papers/002028861bf61a4a81b3edde40db3d63/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/002028861bf61a4a81b3edde40db3d63/status.svg)](https://joss.theoj.org/papers/002028861bf61a4a81b3edde40db3d63)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@slaizet & @marlonsmathias, your review will be checklist-based. Each of you will have a separate checklist that you should update as you carry out your review. First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions/concerns, please let @philipcardiff know.

Please start your review when you are able, and be sure to complete it within the next six weeks at the very latest.

Checklists

📝 Checklist for @marlonsmathias

📝 Checklist for @slaizet

editorialbot commented 6 months ago

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf
editorialbot commented 6 months ago

Software report:

github.com/AlDanial/cloc v 1.90  T=0.31 s (258.0 files/s, 29135.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          30           1136           1093           3533
Markdown                         9            159              0            396
reStructuredText                16            347            207            354
Jupyter Notebook                 1              0            344            320
YAML                             5             24             30            242
Cython                           1             22              9            218
C                                1             26              0            169
TeX                              1             13              0             96
PowerShell                       2             17             13             81
Bourne Shell                     5             17             15             70
TOML                             1              9              1             69
SVG                              5              1              1             31
make                             2              9              9             29
DOS Batch                        2              8              1             28
-------------------------------------------------------------------------------
SUM:                            81           1788           1723           5636
-------------------------------------------------------------------------------

Commit count by author:

    29  Hippalectryon
editorialbot commented 6 months ago

Paper file info:

📄 Wordcount for paper.md is 499

✅ The paper includes a Statement of need section

editorialbot commented 6 months ago

License info:

🟡 License found: GNU General Public License v3.0 (Check here for OSI approval)

editorialbot commented 6 months ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1146/annurev.fluid.35.101101.161122 is OK
- 10.1103/PhysRevLett.77.5369 is OK
- 10.48550/arXiv.2006.00047 is OK
- 10.1017/jfm.2023.204 is OK
- 10.3390/atmos14111690 is OK

MISSING DOIs

- 10.1103/physreve.107.065106 may be a valid DOI for title: Reversible Navier-Stokes equation on logarithmic lattices
- 10.1016/j.jcp.2014.09.038 may be a valid DOI for title: Exponential time-differencing with embedded Runge–Kutta adaptive step control

INVALID DOIs

- https://doi.org/10.1016/0167-2789(85)90002-8 is INVALID because of 'https://doi.org/' prefix
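The fix for the invalid entry is mechanical: the reference checker expects the bare identifier in the `doi` field, without the `https://doi.org/` resolver prefix. A hedged sketch of the corrected BibTeX field (the entry key and other fields are placeholders, not copied from the actual .bib file):

```bibtex
% Placeholder entry; only the doi value comes from the report above.
@article{placeholder1985,
  doi = {10.1016/0167-2789(85)90002-8}
}
```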
editorialbot commented 6 months ago

👉 📄 Download article proof · 📄 View article proof on GitHub 👈

marlonsmathias commented 6 months ago

Review checklist for @marlonsmathias

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

philipcardiff commented 6 months ago

@editorialbot add @weiwangstfc as reviewer

editorialbot commented 6 months ago

@weiwangstfc added to the reviewers list!

marlonsmathias commented 5 months ago

@philipcardiff I have found some issues in the installation process, but I'm unsure how to proceed. Do I report them in this issue or do I open a new issue?

philipcardiff commented 5 months ago

> @philipcardiff I have found some issues in the installation process, but I'm unsure how to proceed. Do I report them in this issue or do I open a new issue?

Hi @marlonsmathias, please create a new issue in the main repository at https://github.com/hippalectryon-0/pyloggrid. Thanks.

slaizet commented 5 months ago

Review checklist for @slaizet

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

slaizet commented 5 months ago

Few comments about this submission:

Contribution and authorship: it seems that only hippalectryon-0 & logseq7546515162 have contributed on GitHub. It would be good to clarify the role of the other authors in the paper.

Data sharing & Reproducibility: it would be good to provide comparison data for the various examples; this would help with reproducibility.

Performance: performance data are missing (more than just Windows versus Linux). These could be added at https://pyloggrid.readthedocs.io/en/latest/documentation/doc_performance.html or in the paper.

Example usage: More examples would be great.

Automated tests: Missing.

State of the field: Missing in the paper.

hippalectryon-0 commented 4 months ago

@philipcardiff I'm not sure what the procedure is at JOSS now that we have two reviews. Should I reply to the reviews directly here and make modifications to check all the boxes? Should I wait for a signal on your end? Thanks!

philipcardiff commented 4 months ago

> @philipcardiff I'm not sure what the procedure is at JOSS now that we have two reviews. Should I reply to the reviews directly here and make modifications to check all the boxes? Should I wait for a signal on your end? Thanks!

Hi @hippalectryon-0, the JOSS review process is (should be) interactive. You iteratively address review comments as they come up; there is no need to wait for the reviewers to finish their reviews. So, please address or respond to any issues that have been raised here or in the issues section of your repository. If anything is unclear, feel free to ask the reviewer to clarify (or for my interpretation). Once both reviewers state that they are happy that everything is in order (and they have completed their checkbox list) then the review is complete.

philipcardiff commented 3 months ago

Hi @hippalectryon-0, let me know if any of the reviewers' comments are unclear to you.

hippalectryon-0 commented 3 months ago

Sorry for the delay, I've been busier than expected - I'll answer the comments shortly!

hippalectryon-0 commented 2 months ago

@marlonsmathias Thanks for your review!

hippalectryon-0 commented 2 months ago

@slaizet Thanks for your review!

Regarding the points you have deemed lacking:

Best, A.B.

slaizet commented 2 months ago
> "Contribution and authorship": could you indicate what is missing?

The paper has 5 authors; I cannot see the contributions from all of them (for instance, it seems that only hippalectryon-0 & logseq7546515162 have contributed on GitHub).

> "Performance": same question

Performance data are missing (more than just Windows versus Linux). This can be done here https://pyloggrid.readthedocs.io/en/latest/documentation/doc_performance.html or in the paper.

> "Example usage": could you indicate what is missing in the example that is provided in the documentation?

More than one example would be great.

> "Automated tests": I believe there are many automated tests in https://github.com/hippalectryon-0/pyloggrid/tree/joss/log-grid/tests

Are those tests automated? How often are they performed? Do they go through the different parts of the code?

> "State of the field": given that there are no other packages for simulations of shell models, other than the Matlab package by Campolina et al. that is mentioned in the doc, could you expand on what is missing?

Shell models are not my area of expertise, but I would expect that simulations of shell models have been performed in the past (which seems to be the case from a quick look at Google Scholar), and it would be nice to briefly discuss how those simulations were performed and what your solver provides in terms of capability.

hippalectryon-0 commented 2 months ago

@slaizet Thank you for your answer.

> The paper has 5 authors, I cannot see the contributions from all the authors (for instance, it seems that only hippalectryon-0 & logseq7546515162 have contributed on GitHub).

Since the GitHub repository is a mirror of the private CEA GitLab repository, the direct contribution of each author is indeed not clearly visible. As indicated in the original submission, "The authors contributed to this work in unequal proportions, with Amaury Barral taking the lead in the majority of the research, while the remaining authors made valuable but comparatively minor contributions."

If this is still not precise enough, let me know precisely what you would like to know.

> Performance data are missing (more than just Windows versus Linux). This can be done here https://pyloggrid.readthedocs.io/en/latest/documentation/doc_performance.html or in the paper.

What kind of performance data do you feel is lacking, precisely? I believe the page you mention explains all the existing tools to run your own benchmark on your machine. Performance can vary greatly depending on CPU architecture, compiler version, etc.

> More than one example would be great.

I agree with you on that point. The reason I did not provide, e.g., a Rayleigh-Bénard example is that such an example also requires a fair bit of mathematical explanation, for instance of why log-lattice equations differ from the usual RB equations (which is explained in our research paper), and I felt that this would be too convoluted for the documentation. Let me know if you feel this is a blocking point for the acceptance of the paper.

> Are those tests automated? How often are they performed? Do they go through the different parts of the code?

The tests are 100% automated (see e.g. https://github.com/hippalectryon-0/pyloggrid/blob/joss/log-grid/.gitlab-ci.yml). They're performed on each commit and merge request. The test coverage is 85%, with all major parts 100% covered.
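For readers unfamiliar with GitLab CI, a minimal sketch of what such a per-commit test job can look like (this is an illustrative fragment, not the project's actual `.gitlab-ci.yml`; the image tag, install command, and test path are assumptions):

```yaml
# Illustrative GitLab CI job: run the test suite with coverage
# on every branch commit and merge request, as described above.
test:
  image: python:3.11
  script:
    - pip install -e .            # install the package under test
    - pip install pytest pytest-cov
    - pytest --cov=pyloggrid tests/
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH
```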

> Shell models are not my area of expertise but I would expect that simulations of shell models have been performed in the past (which seems to be the case by a quick look at Google Scholar) and it would be nice to briefly discuss how those simulations were performed and what your solver can provide in terms of capability.

Shell model simulations have indeed been performed in the past, but: (1) there is no widely used numerical library (that I know of) for shell models, as shell models are easy to re-implement from scratch for each paper; (2) although log-lattices share some similarities with shell models, they are also fundamentally different. Therefore I can't think of a relevant package to compare this one to.

slaizet commented 2 months ago

Thank you for your reply.

I do not know if your answer regarding the contributions of the authors and having a single example of use of the software is satisfactory for the journal (@philipcardiff should be able to answer).

In terms of performance, it would be great to see some scalability plots, memory usage, etc. on various hardware.

Finally, I still think that a small literature review on shell model simulations would be helpful for the readers and for potential users of your software.

philipcardiff commented 2 months ago

> Thank you for your reply.
>
> I do not know if your answer regarding the contributions of the authors and having a single example of use of the software is satisfactory for the journal (@philipcardiff should be able to answer).
>
> In terms of performance, it would be great to see some scalability plots, memory usage, etc. on various hardware.
>
> Finally, I still think that a small literature review on shell model simulations would be helpful for the readers and for potential users of your software.

Regarding authorship, the JOSS documentation states (note that authoring a commit is not a requirement):

> Purely financial (such as being named on an award) and organizational (such as general supervision of a research group) contributions are not considered sufficient for co-authorship of JOSS submissions, but active project direction and other forms of non-code contributions are. The authors themselves assume responsibility for deciding who should be credited with co-authorship, and co-authors must always agree to be listed. In addition, co-authors agree to be accountable for all aspects of the work, and to notify JOSS if any retraction or correction of mistakes are needed after publication.

@hippalectryon-0: based on your comments, I believe all five authors satisfy these requirements. Is this correct?

Regarding the number of example cases, the JOSS documentation states:

> The authors should include examples of how to use the software (ideally to solve real-world analysis problems).

Although possibly ambiguous, note the use of the word "examples" (plural).

Also, the JOSS documentation states:

> Reviewers should rely on their expert understanding of their domain to judge whether the software is of broad interest (likely to be cited by other researchers) or more narrowly focused around the needs of an individual researcher or lab.

Based on these two points, it seems reasonable that more than one test case should be included in the documentation.

marlonsmathias commented 2 months ago

> @marlonsmathias Thanks for your review!
>
> • I see you've left the "performance" checkbox unchecked. Could you indicate more precisely which part is lacking?

Regarding the performance, it is now resolved; I had left the checkbox unchecked because I could not find the data in the version of the documentation I was looking at (v:latest).

> • Likewise, what more would you have liked in "Functionality documentation"?

For the functionality documentation, I am missing some information on the case setup and run in the tutorial. I would like to see more details on how to set up, for instance, the boundary conditions. One simple case for this is the lid-driven cavity, but others could also be used.