editorialbot opened 8 months ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.90 T=0.31 s (258.0 files/s, 29135.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          30           1136           1093           3533
Markdown                         9            159              0            396
reStructuredText                16            347            207            354
Jupyter Notebook                 1              0            344            320
YAML                             5             24             30            242
Cython                           1             22              9            218
C                                1             26              0            169
TeX                              1             13              0             96
PowerShell                       2             17             13             81
Bourne Shell                     5             17             15             70
TOML                             1              9              1             69
SVG                              5              1              1             31
make                             2              9              9             29
DOS Batch                        2              8              1             28
-------------------------------------------------------------------------------
SUM:                            81           1788           1723           5636
-------------------------------------------------------------------------------
Commit count by author:
29 Hippalectryon
Paper file info:
📄 Wordcount for paper.md is 499
✅ The paper includes a Statement of need section
License info:
🟡 License found: GNU General Public License v3.0 (Check here for OSI approval)
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1146/annurev.fluid.35.101101.161122 is OK
- 10.1103/PhysRevLett.77.5369 is OK
- 10.48550/arXiv.2006.00047 is OK
- 10.1017/jfm.2023.204 is OK
- 10.3390/atmos14111690 is OK
MISSING DOIs
- 10.1103/physreve.107.065106 may be a valid DOI for title: Reversible Navier-Stokes equation on logarithmic lattices
- 10.1016/j.jcp.2014.09.038 may be a valid DOI for title: Exponential time-differencing with embedded Runge–Kutta adaptive step control
INVALID DOIs
- https://doi.org/10.1016/0167-2789(85)90002-8 is INVALID because of 'https://doi.org/' prefix
@editorialbot add @weiwangstfc as reviewer
@weiwangstfc added to the reviewers list!
@philipcardiff I have found some issues in the installation process, but I'm unsure how to proceed. Do I report them in this issue or do I open a new issue?
Hi @marlonsmathias, please create a new issue in the main repository at https://github.com/hippalectryon-0/pyloggrid. Thanks.
A few comments about this submission:
Contribution and authorship: It seems that only hippalectryon-0 & logseq7546515162 have contributed on GitHub. It would be good to clarify the role of the other authors in the paper.
Data sharing & Reproducibility: It would be good to provide data for comparison for various examples. It will help with reproducibility.
Performance: Performance data are missing (more than just Windows versus Linux). This can be done here https://pyloggrid.readthedocs.io/en/latest/documentation/doc_performance.html or in the paper.
Example usage: More examples would be great.
Automated tests: Missing.
State of the field: Missing in the paper.
@philipcardiff I'm not sure what the procedure is in JOSS now that we have two reviews. Should I reply to the reviews directly here and make modifications to check all the boxes? Should I wait for a signal on your end? Thanks!
Hi @hippalectryon-0, the JOSS review process is (should be) interactive. You iteratively address review comments as they come up; there is no need to wait for the reviewers to finish their reviews. So, please address or respond to any issues that have been raised here or in the issues section of your repository. If anything is unclear, feel free to ask the reviewer to clarify (or for my interpretation). Once both reviewers state that they are happy that everything is in order (and they have completed their checkbox list) then the review is complete.
Hi @hippalectryon-0, let me know if any of the reviewers' comments are unclear to you.
Sorry for the delay, I've been busier than expected. I'll answer the comments shortly!
@marlonsmathias Thanks for your review!
@slaizet Thanks for your review!
Regarding the points you have deemed lacking:
Best, A.B.
"Contribution and authorship": could you indicate what is missing? The paper has 5 authors, I cannot see the contributions from all the authors (for instance, it seems that only hippalectryon-0 & logseq7546515162 have contributed on GitHub).
"Performance": same question. Performance data are missing (more than just Windows versus Linux). This can be done here https://pyloggrid.readthedocs.io/en/latest/documentation/doc_performance.html or in the paper.
"Example usage": could you indicate what is missing in the example that is provided in the documentation? More than one example would be great.
"Automated tests": I believe there are many automated tests in https://github.com/hippalectryon-0/pyloggrid/tree/joss/log-grid/tests
Are those tests automated? How often are they performed? Do they go through the different parts of the code?
"State of the field": Given that there are no packages other than the Matlab package by Campolina et al. mentioned in the doc to run simulations on shell models, could you expand on what is missing? Shell models are not my area of expertise, but I would expect that simulations of shell models have been performed in the past (which seems to be the case from a quick look at Google Scholar), and it would be nice to briefly discuss how those simulations were performed and what your solver can provide in terms of capability.
@slaizet Thank you for your answer.
The paper has 5 authors, I cannot see the contributions from all the authors (for instance, it seems than only hippalectryon-0 & logseq7546515162 have contributed to Github).
Since the Github repository is a mirror of the private CEA Gitlab repository, indeed the direct contribution of each author is not clearly visible. As indicated in the original submission, "The authors contributed to this work in unequal proportions, with Amaury Barral taking the lead in the majority of the research, while the remaining authors made valuable but comparatively minor contributions."
If this is still not precise enough, let me know precisely what you would like to know.
Performance data are missing (more than just Windows versus Linux). This can be done here https://pyloggrid.readthedocs.io/en/latest/documentation/doc_performance.html or in the paper.
What kind of performance data do you feel is lacking, precisely? I believe that the page you mention explains all the existing tools to perform your own benchmark on your machine. Performance can vary greatly depending on CPU architecture, compiler version, etc.
More than one example would be great.
I agree with you on that point. The reason I did not, e.g., provide a Rayleigh-Bénard example is that such an example also requires a fair bit of mathematical explanation, for instance regarding why log-lattice equations differ from the usual RB equations (which is explained in our research paper), and I felt that overall this would be too convoluted for the documentation. Let me know if you really feel that this is a blocking point for the acceptance of the paper.
Are those tests automated? How often are they performed? Do they go through the different parts of the code?
The tests are 100% automated (see e.g. https://github.com/hippalectryon-0/pyloggrid/blob/joss/log-grid/.gitlab-ci.yml). They're performed on each commit and merge request. The test coverage is 85%, with all major parts 100% covered.
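For readers unfamiliar with such test suites, a unit test of this kind typically looks like the following sketch. This is not taken from pyloggrid: the `logarithmic_grid` helper below is a hypothetical stand-in for illustration only.

```python
# Hypothetical pytest-style unit test, illustrating the kind of check
# a CI pipeline would run on every commit. Not the actual pyloggrid API.
import math

def logarithmic_grid(n_points, base=2.0):
    """Illustrative stand-in: wavenumbers on a logarithmic lattice, k_n = base**n."""
    return [base ** n for n in range(n_points)]

def test_grid_is_geometric():
    # Consecutive wavenumbers on a log-grid should differ by a constant factor.
    grid = logarithmic_grid(6)
    ratios = [b / a for a, b in zip(grid, grid[1:])]
    assert all(math.isclose(r, 2.0) for r in ratios)
```

A CI runner (such as the GitLab CI config linked above) would discover and execute such tests automatically on each commit and report coverage.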
Shell models are not my area of expertise but I would expect that simulations of shell models have been performed in the past (which seems to be the case by a quick look at Google Scholar) and it would be nice to briefly discuss how those simulations were performed and what your solver can provide in terms of capability.
Shell model simulations have indeed been performed in the past, but: 1) there is no widely used numerical library (that I know of) for shell models, as shell models are easy to re-implement from scratch for each paper; 2) although log-lattices share some similarities with shell models, they are also fundamentally different. Therefore I can't think of a relevant package to compare this one to.
Thank you for your reply.
I do not know if your answer regarding the contributions of the authors and having a single example of use of the software is satisfactory for the journal (@philipcardiff should be able to answer).
In terms of performance, it would be great to see some scalability plots, memory usage, etc on various hardware.
Finally, I still think that a small literature review on shell model simulations would be helpful for the readers and for potential users of your software.
Regarding authorship, the JOSS documentation states (note that authoring a commit is not a requirement):
Purely financial (such as being named on an award) and organizational (such as general supervision of a research group) contributions are not considered sufficient for co-authorship of JOSS submissions, but active project direction and other forms of non-code contributions are. The authors themselves assume responsibility for deciding who should be credited with co-authorship, and co-authors must always agree to be listed. In addition, co-authors agree to be accountable for all aspects of the work, and to notify JOSS if any retraction or correction of mistakes are needed after publication.
@hippalectryon-0: based on your comments, I believe all five authors satisfy these requirements. Is this correct?
Regarding the number of example cases, the JOSS documentation states:
The authors should include examples of how to use the software (ideally to solve real-world analysis problems).
Although possibly ambiguous, note the use of the word "examples" (plural).
Also, the JOSS documentation states
Reviewers should rely on their expert understanding of their domain to judge whether the software is of broad interest (likely to be cited by other researchers) or more narrowly focused around the needs of an individual researcher or lab.
Based on these two points, it seems reasonable that more than one test case should be included in the documentation.
@marlonsmathias Thanks for your review !
- I see you've left the "performance" checkbox unchecked. Could you indicate more precisely which part is lacking?
Regarding the performance, it is now solved, I had left it unchecked because I could not find the data in the version of the documentation I was looking at (v:latest).
- Likewise, what more would you have liked in "Functionality documentation"?
For the functionality documentation, I am missing some information on the case setup and run in the tutorial. I would like to see more details on how to set up, for instance, the boundary conditions. One simple case for this is the lid-driven cavity, but others could also be used.
Apologies for the late answer, it is always difficult to find proper time to wrap up this paper. This was a software I developed for my PhD, but as I am now working somewhere else, I hardly have much time to work on this. Nevertheless, seeing as this has dragged on for quite some time, I would like to wrap up and publish this piece of software before the end of the year if possible.
Looking back at the comments provided above, the only missing point is an additional example / test case in the documentation. I will add a Homogeneous Rayleigh-Bénard case shortly.
To answer some questions asked above: @philipcardiff
@hippalectryon-0: based on your comments, I believe all five authors satisfy these requirements. Is this correct?
This is correct.
@slaizet
In terms of performance, it would be great to see some scalability plots, memory usage, etc on various hardware.
Although I understand that more stats may be useful, I believe that the current stats at https://pyloggrid.readthedocs.io/en/joss/benchmark.html are already sufficient, and moreover can easily be run by users on their own hardware.
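For users who do want numbers for their own machine, a minimal timing harness in the spirit of a benchmark page might look like the sketch below. The benchmarked function is a deliberately naive placeholder standing in for a solver step, not the pyloggrid benchmark code itself.

```python
# Minimal benchmarking sketch using the standard library only.
# convolve_naive is a placeholder workload, not part of pyloggrid.
import timeit

def convolve_naive(a, b):
    """Naive full convolution of two equal-length sequences (O(n^2))."""
    n = len(a)
    return [sum(a[i] * b[j - i] for i in range(max(0, j - n + 1), min(j + 1, n)))
            for j in range(2 * n - 1)]

a = [float(i) for i in range(200)]
b = [1.0] * 200
elapsed = timeit.timeit(lambda: convolve_naive(a, b), number=3)
print(f"3 runs of the placeholder workload: {elapsed:.3f} s")
```

Timings obtained this way are only comparable across machines if the hardware and Python build are reported alongside, which is why per-user benchmarking is a reasonable design choice here.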
@marlonsmathias
For the functionality documentation, I am missing some information on the case setup and run in the tutorial. I would like to see more details on how to set up, for instance, the boundary conditions.
I understand that this may seem nonintuitive for users of standard fluid simulation software, but in our case we do not prescribe any boundary conditions: we always work in Fourier space, which implies periodic boundary conditions. This is explained (briefly) in https://pyloggrid.readthedocs.io/en/joss/tutorial/tutorial_first_simu.html.
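The point about boundary conditions can be made concrete with a small sketch (again, not pyloggrid code): any field reconstructed from Fourier coefficients with integer wavenumbers is automatically 2π-periodic, so there is no boundary to prescribe.

```python
# A field defined by its Fourier coefficients is periodic by construction:
# u(x) and u(x + 2*pi) are identical, so no boundary conditions are needed.
import cmath

def inverse_dft(coeffs, x):
    """Evaluate u(x) = sum_k c_k * exp(i*k*x) for integer wavenumbers k."""
    return sum(c * cmath.exp(1j * k * x) for k, c in enumerate(coeffs))

coeffs = [0.0, 1.0, 0.5]                       # a few spectral coefficients
u0 = inverse_dft(coeffs, 0.3)
u1 = inverse_dft(coeffs, 0.3 + 2 * cmath.pi)   # one full period later
assert abs(u0 - u1) < 1e-12                    # periodicity is built in
```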
@slaizet @marlonsmathias @philipcardiff I have added the new example in the docs here. Please let me know if this is now satisfactory, and if there's any further point preventing the end of the review.
Hi all, I am happy to go ahead with the publication for this software. Cheers, Sylvain
Hello all,
I also agree with the publication as it is now. Thank you for the updates!
Best, Marlon
Submitting author: @hippalectryon-0 (Amaury Barral)
Repository: https://github.com/hippalectryon-0/pyloggrid
Branch with paper.md (empty if default branch): joss
Version: 2.3.0
Editor: @philipcardiff
Reviewers: @slaizet, @marlonsmathias, @weiwangstfc
Archive: Pending
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@slaizet & @marlonsmathias, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @philipcardiff know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @marlonsmathias
📝 Checklist for @slaizet