Closed: this review issue was closed by editorialbot 1 year ago.
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper PDF after making changes to the paper's `.md` or `.bib` files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.80 s (951.5 files/s, 119099.4 lines/s)
| Language | files | blank | comment | code |
|---|---:|---:|---:|---:|
| Python | 616 | 15362 | 12849 | 52658 |
| Markdown | 22 | 1089 | 0 | 3348 |
| CMake | 31 | 495 | 567 | 1811 |
| C | 6 | 170 | 273 | 1398 |
| reStructuredText | 56 | 1339 | 1230 | 1108 |
| YAML | 19 | 82 | 70 | 761 |
| C/C++ Header | 5 | 50 | 147 | 260 |
| TeX | 2 | 23 | 26 | 209 |
| Fortran 90 | 2 | 99 | 30 | 186 |
| DOS Batch | 2 | 8 | 1 | 43 |
| INI | 1 | 0 | 0 | 11 |
| TOML | 1 | 0 | 0 | 10 |
| make | 1 | 4 | 6 | 10 |
| Bourne Shell | 1 | 3 | 13 | 5 |
| **SUM** | **765** | **18724** | **15212** | **61818** |
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 1192
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1109/MCSE.2010.118 is OK
- 10.1088/1749-4680/8/1/014001 is OK
- 10.1145/1565824.1565827 is OK
- 10.1145/1238844.1238856 is OK
MISSING DOIs
- None
INVALID DOIs
- https://doi.org/10.1016/j.ascom.2014.12.001 is INVALID because of 'https://doi.org/' prefix
👉📄 Download article proof 📄 View article proof on GitHub 📄👈
👋 @yguclu - please fix the incorrectly formatted DOI that editorialbot found. After you make the change in your `.bib` file, use the command `@editorialbot check references` to check again, and the command `@editorialbot generate pdf` when the references are right to make a new PDF. editorialbot commands need to be the first entry in a new comment.
👋 @aholmes, @IgorBaratta, and @boegel - Thanks for agreeing to review this submission. This is the review thread for the paper. All of our communications will happen here from now on.
As you can see above, you each should use the command `@editorialbot generate my checklist` to create your review checklist. @editorialbot commands need to be the first thing in a new comment.
As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.
The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention openjournals/joss-reviews#4991
so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.
We aim for reviews to be completed within about 2-4 weeks. Please let me know if any of you require more time. We can also use editorialbot (our bot) to set automatic reminders if you know you'll be away for a known period of time.
Please feel free to ping me (@danielskatz) if you have any questions/concerns.
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1109/MCSE.2010.118 is OK
- 10.1088/1749-4680/8/1/014001 is OK
- 10.1016/j.ascom.2014.12.001 is OK
- 10.1145/1565824.1565827 is OK
- 10.1145/1238844.1238856 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@editorialbot generate pdf
👉📄 Download article proof 📄 View article proof on GitHub 📄👈
👋 @aholmes, @IgorBaratta, and @boegel - it looks like this review is progressing, about 1 1/2 weeks in. Please let me know if you are having any issues I can help resolve so that we keep making progress.
Going well. The authors are responsive and several issues have been resolved. The items I've identified have been resolved, so I'll be sweeping through the checklist again.
@editorialbot generate pdf
👉📄 Download article proof 📄 View article proof on GitHub 📄👈
Hi, it's going well. I plan to complete the checklist by the beginning of next week.
@danielskatz I have finished my review.
Thanks @aholmes!
👋 @IgorBaratta - how is your review coming?
👋 @boegel - how is your review coming?
@danielskatz My apologies for the delay!
I've worked through most of it, but I'm still playing with the pyccel benchmarks to see if I can reproduce the results included in the paper.
I have some suggestions for improvements in that specific section too: the paper doesn't mention what type of hardware the benchmark results were collected on, or how many cores were used.
In addition, the use of `-ffast-math` is a bit concerning (see https://simonbyrne.github.io/notes/fastmath/).
I should open an issue on this in the pyccel repo, right?
Right, thanks
@danielskatz , my review checklist is now complete. All issues I've created on the repositories (main and benchmarks) have been addressed. So I'm happy to recommend it for publication.
Thanks @IgorBaratta
Update from my side: waiting for authors to come back with more reliable benchmark results - see https://github.com/pyccel/pyccel/issues/1295
👋 @yguclu & @EmilyBourne - can you let us know what's going on with https://github.com/pyccel/pyccel/issues/1295 ?
@danielskatz We are running the benchmarks on two different supercomputing centers.
While the performance of Pyccel is in line with the test that we have run on GitHub Actions, the same is not true for Numba and Pythran (one is faster and the other much slower than before). Before presenting these results, we want to make sure that our configuration files are correct.
@danielskatz After further investigation we have realized that our tests are not completely adequate for a fair comparison with Numba and Pythran. Before publication I would like to address some urgent points concerning our `run_benchmarks.py` script. I kindly request a week of time for taking care of this.
no worries @yguclu - please let us know when we should come back to this. (and note that performance results are not strictly required for JOSS, but if they are there, they are part of the review)
👋 @boegel - see ☝️
And I notice you have a couple of other unchecked items in your checklist - if you can comment on what else you think is needed, as well as your thoughts on the benchmark work, that would be great
@editorialbot generate pdf
👉📄 Download article proof 📄 View article proof on GitHub 📄👈
Thanks for the extra work on the benchmarks. I won't have time to redo the review this week, but (early) next week should be possible...
@editorialbot generate pdf
👉📄 Download article proof 📄 View article proof on GitHub 📄👈
@editorialbot commands
Hello @yguclu, here are the things you can ask me to do:
# List all available commands
@editorialbot commands
# Get a list of all editors' GitHub handles
@editorialbot list editors
# Check the references of the paper for missing DOIs
@editorialbot check references
# Perform checks on the repository
@editorialbot check repository
# Adds a checklist for the reviewer using this command
@editorialbot generate my checklist
# Set a value for branch
@editorialbot set joss-paper as branch
# Generates the pdf paper
@editorialbot generate pdf
# Generates a LaTeX preprint file
@editorialbot generate preprint
# Get a link to the complete list of reviewers
@editorialbot list reviewers
@danielskatz @boegel We have just merged the aforementioned PR and we consider the benchmark suite as final. This is now our checklist for completing the article:
* [x] Re-run the benchmarks (without the `-ffast-math` flag -- see [here](https://github.com/pyccel/pyccel-benchmarks/pull/17)) on a single dedicated node of our HPC facility
* [x] Update the images in the manuscript
* [x] Describe the architecture of the computing node in the manuscript (fixes [pyccel/pyccel#1295](https://github.com/pyccel/pyccel/issues/1295))
* [x] Update Pyccel version to 1.7.2 (and pin this version in the manuscript)
* [x] Retrigger PDF build
We will check the items as we make progress. Please let me know if you see anything missing.
Our checklist is complete. But the editorial header should now point to Pyccel version 1.7.2 instead of 1.7.0.
@editorialbot set v1.7.2 as version
Done! version is now v1.7.2
@boegel Have you had a chance to take another look?
👋 @boegel - can you take another look at this?
Sorry for the lack of a response here, I've been quite busy the last couple of weeks...
I'll get back to it ASAP and finish my review
👋 @boegel - ping!
Submitting author: @yguclu (Yaman Güçlü)
Repository: https://github.com/pyccel/pyccel/
Branch with paper.md (empty if default branch): joss-paper
Version: v1.7.2
Editor: @danielskatz
Reviewers: @aholmes, @IgorBaratta, @boegel
Archive: 10.5281/zenodo.7711108
Status
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@aholmes & @IgorBaratta & @boegel, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist: `@editorialbot generate my checklist`
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @danielskatz know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @aholmes
📝 Checklist for @boegel
📝 Checklist for @IgorBaratta