Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @iljah, it looks like you're currently assigned as the reviewer for this paper :tada:.
:star: Important :star:
If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, due to GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿
To fix this do the following two things:
For a list of things I can do to help you, just type:
@whedon commands
Attempting PDF compilation. Reticulating splines etc...
@iljah @markadams :wave: Welcome and thanks for agreeing to review! The comments from @whedon above outline the review process, which takes place in this thread (possibly with issues filed in the ppiclF repository). I'll be watching this thread if you have any questions.
I think having the data lurk in common blocks will limit the extensibility of this project. I would suggest a more object-oriented approach, such as providing a particle list object, iterators of some sort, and methods that operate on these lists. For instance, using multiple species of particles does not look feasible now. I don't know what the long-term goal of this project is, but it is young and I am suggesting a major refactoring -- the longer you wait, the harder it will be to do.
Thank you, that is a good suggestion. Indeed, I think we may head in that direction in the future, either by upgrading to Fortran 90 or possibly even a whole C++ overhaul.
While this would be helpful, the current design shouldn't hinder ppiclF's capabilities. Multiple species can already be handled, since the user can solve whichever equations they want and store an arbitrary number of properties in their simulation.
I will keep this in mind for future versions though.
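Just to sketch the kind of layout I have in mind for a future version (purely hypothetical; nothing like this exists in ppiclF today, where the data lives in common blocks):

```fortran
! Hypothetical sketch only -- not part of the current ppiclF API.
! Illustrates a derived-type particle list (modern Fortran, 2003+)
! that could hold multiple species and arbitrary per-particle properties.
module particle_list_mod
  implicit none

  type :: particle_t
     real(8) :: x(3)                   ! position
     real(8) :: v(3)                   ! velocity
     integer :: species = 1            ! species index for multi-species runs
     real(8), allocatable :: props(:)  ! user-defined properties
  end type particle_t

  type :: particle_list_t
     type(particle_t), allocatable :: p(:)
     integer :: n = 0
  contains
     procedure :: push_back
  end type particle_list_t

contains

  ! Append a particle, growing the storage geometrically as needed.
  subroutine push_back(this, part)
    class(particle_list_t), intent(inout) :: this
    type(particle_t),       intent(in)    :: part
    type(particle_t), allocatable :: tmp(:)

    if (.not. allocated(this%p)) allocate(this%p(8))
    if (this%n == size(this%p)) then
       allocate(tmp(2*size(this%p)))
       tmp(1:this%n) = this%p(1:this%n)
       call move_alloc(tmp, this%p)
    end if
    this%n = this%n + 1
    this%p(this%n) = part
  end subroutine push_back

end module particle_list_mod
```

Solver routines would then operate on a particle_list_t argument rather than reaching into common blocks, which is what would make multiple species and per-species methods natural.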
Documentation (README.md as far as I can tell) could use a bit longer statement of need and is missing installation instructions and community guidelines.
According to my instructions there should be API documentation, but I don't see any in the repository. Where can I learn, e.g., how ppiclc_solve_InitBoxFilter works?
Readme also doesn't say anything about testing the library.
Added some installation issues to the library repo.
Thank you for the feedback @iljah. I think issues dpzwick/ppiclF#25, dpzwick/ppiclF#26, and dpzwick/ppiclF#28 stem from a simple misunderstanding: the documentation website was not easy to find.
While the link to the website was in the compiled JOSS PDF, I have now made it clearer in the README.md file and also addressed your previous suggestions for that file (see this commit).
As for issue dpzwick/ppiclF#27, I agree. It was confusing that even when a different compiler is specified, gslib would compile with a hard-coded mpicc. I updated the source so gslib will use the same compiler as the main ppiclF Makefile. Note, though, that the only supported compilers at the moment are mpicc, mpif77, and mpif90.
Also, to address your question above about where to find information on routines such as ppiclc_solve_InitBoxFilter: there is extensive information on the documentation website here.
Documentation on the external website should be OK now that there's also a link to it in the README, but it's still missing a statement of need, or at least the target audience.
About performance: is there a particular reason why the ghost particle send doesn't seem to scale past 1000 ranks, or at all? Also, how many particles per second does ppiclF process on one core?
A link to the Vulcan page at LLNL in the performance section would be nice.
Thank you for the feedback. I updated the readme to include a target audience in this commit and this commit.
Regarding the ghost particles, the reason for this is the underlying all-to-all communication. At best, all-to-all communication scales as log2(R), where R is the number of MPI ranks, unless some very specific refactoring of the communication patterns is done. In the weak scaling the number of particles per rank is fixed, so this log scaling is expected.
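To put a rough number on that log term (a back-of-the-envelope estimate on my part, not something measured in the runs): if each ghost exchange is dominated by $\lceil \log_2 R \rceil$ communication stages of roughly constant cost $c$, then

$$T_{\mathrm{ghost}}(R) \approx c\,\lceil \log_2 R \rceil, \qquad \frac{T_{\mathrm{ghost}}(16384)}{T_{\mathrm{ghost}}(1024)} \approx \frac{14}{10} = 1.4,$$

so some growth past 1,000 ranks is expected even at fixed particles per rank.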
Regarding the number of particles on a single core, it is only limited by the available memory per core. In the strong scaling runs described in the performance section it was varied between 256 and 24,576 particles per core, although it could have been higher if needed.
I also added a link to Vulcan on the website in this commit so it is updated now.
Thanks, I was asking about particles solved per second per core, not the number of particles.
What fraction of CPU time was used by the fluid solver, and was that included in the timings provided in the performance section? I can imagine that with only 300 particles per process the fluid solution would take much more time than the particles, which would improve particle scaling since time spent transferring particle data could be masked by the fluid computation.
In the scaling cases the fluid was not timed. Since Nek5000 was used, the times for the fluid solver are highly dependent on the problem setup, because the fluid solver iterates on a pressure Poisson system. For this reason, the times on the scaling plots are only for ppiclF and not for any part of the fluid solver it was coupled to.
So there was no ppiclF communication going on while fluid solver was running?
In any case I think all checklist items have been addressed so @jedbrown to me this looks good to go. Thanks!
Barriers were set so that just the ppiclF timings were measured. In real applications there may be some overlap of communication with computation though.
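Schematically, the timing pattern looked like the following (a generic sketch with a placeholder where the ppiclF solve goes, not the literal code from the runs):

```fortran
c     Generic sketch of barrier-isolated timing, not the literal code used.
      subroutine time_particle_step(comm)
      implicit none
      include 'mpif.h'
      integer comm, ierr
      real*8 t0, t1, telap, tmax

c     barrier so no fluid-solver work bleeds into the timing window
      call mpi_barrier(comm, ierr)
      t0 = mpi_wtime()

c     ... ppiclF particle solve being timed goes here ...

c     barrier again so the slowest rank is captured
      call mpi_barrier(comm, ierr)
      t1 = mpi_wtime()
      telap = t1 - t0
      call mpi_reduce(telap, tmax, 1, MPI_DOUBLE_PRECISION,
     $                MPI_MAX, 0, comm, ierr)

      return
      end
```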
Thanks for your insight and suggestions @iljah and @markadams!
@dpzwick Some minor tweaks to the paper:
Use the [-@...] and @... syntax for your references: https://rmarkdown.rstudio.com/authoring_bibliographies_and_citations.html#citation_syntax
[...] is exclusively mesh-based field data, for example. A couple of sentences about what particle-in-cell methods are and how ppiclF interfaces with field-based solvers would be useful. This could cite traditional publications that would be relevant to a user interested in further details or applications.

@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
Thanks for the feedback, @jedbrown.
I updated the PDF based on your suggestions.
Thanks. The ICPP booktitle field needs case protection. For the Nek5000 and CMT-nek references, I would use this style: ... Navier-Stokes solver Nek5000 [-@nek5000] ...
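For example, something along these lines (an illustrative entry, not copied from your paper.bib):

```bibtex
% Illustrative only: double braces keep BibTeX from mangling the case
% or splitting a team name into given/family names.
@misc{nek5000,
  author = {{Nek5000 Development Team}},
  title  = {{Nek5000} web page},
  year   = {2019},
  url    = {https://nek5000.mcs.anl.gov}
}
```

The same brace trick is what provides the case protection the ICPP booktitle field needs.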
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
The references are updated now, although it doesn't seem like the [-@nek5000] has any effect.
Ah, that formatting is because the bib entries don't have an author field. Surely there are some people, or "Nek5000 Development Team", that make sense to list as authors.
I can probably find some people but that is the recommended citation style on their website. What do you think?
Oh, I see what you mean. That's unfortunate. I don't think there's a way to format the link optimally, so I would type it manually as Nek5000 (2019) and drop the citation into a nocite field. If that looks right, we'll be ready to tag and archive.
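(For reference, the nocite bit would look something like this in the paper's YAML front matter; this is a sketch of the pandoc mechanism, adjust to your actual metadata:)

```yaml
# paper.md front matter (sketch); nocite forces the entry into the
# bibliography even though it is no longer cited with @-syntax in the text.
nocite: |
  @nek5000
```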
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
The paper has been updated. Thanks!
Looks good to me. Please tag a release and archive on Zenodo or similar. We would prefer that both your source and documentation repository be archived together at the same DOI. I believe you can do that by directly uploading the files (instead of using the GitHub integration) from both repositories, then adding your metadata. Let me know if you have questions about that process, and please report the DOI back here when done.
I uploaded to Zenodo with the source and documentation. The DOI is: 10.5281/zenodo.2667285.
@whedon set 10.5281/zenodo.2667285 as archive
OK. 10.5281/zenodo.2667285 is the archive.
@whedon set v1.0.0 as version
OK. v1.0.0 is the version.
@whedon check references
Attempting to check references...
OK DOIs
- 10.1109/IPPS.1996.508039 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@whedon accept
Attempting dry run of processing paper acceptance...
Submitting author: @dpzwick (David Zwick)
Repository: https://github.com/dpzwick/ppiclF/
Version: v1.0.0
Editor: @jedbrown
Reviewers: @iljah, @markadams
Archive: 10.5281/zenodo.2667285
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@iljah & @markadams, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.theoj.org/about#reviewer_guidelines. Any questions/concerns please let @jedbrown know.
✨ Please try and complete your review in the next two weeks ✨
Review checklist for @iljah
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Authors: Does the paper.md file include a list of authors with their affiliations?

Review checklist for @markadams
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Authors: Does the paper.md file include a list of authors with their affiliations?