openjournals / joss-reviews

Reviews for the Journal of Open Source Software

[REVIEW]: Pyccel: a Python-to-X transpiler for scientific high-performance computing #4991

Closed editorialbot closed 1 year ago

editorialbot commented 1 year ago

Submitting author: @yguclu (Yaman Güçlü)
Repository: https://github.com/pyccel/pyccel/
Branch with paper.md (empty if default branch): joss-paper
Version: v1.7.2
Editor: @danielskatz
Reviewers: @aholmes, @IgorBaratta, @boegel
Archive: 10.5281/zenodo.7711108

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/8b565b8f36e1d0cccb3c749ff22857e3"><img src="https://joss.theoj.org/papers/8b565b8f36e1d0cccb3c749ff22857e3/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/8b565b8f36e1d0cccb3c749ff22857e3/status.svg)](https://joss.theoj.org/papers/8b565b8f36e1d0cccb3c749ff22857e3)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@aholmes & @IgorBaratta & @boegel, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @danielskatz know.

Please start on your review when you are able, and be sure to complete your review within the next six weeks at the very latest.

Checklists

📝 Checklist for @aholmes

📝 Checklist for @boegel

📝 Checklist for @IgorBaratta

editorialbot commented 1 year ago

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf
editorialbot commented 1 year ago
Software report:

github.com/AlDanial/cloc v 1.88  T=0.80 s (951.5 files/s, 119099.4 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                         616          15362          12849          52658
Markdown                        22           1089              0           3348
CMake                           31            495            567           1811
C                                6            170            273           1398
reStructuredText                56           1339           1230           1108
YAML                            19             82             70            761
C/C++ Header                     5             50            147            260
TeX                              2             23             26            209
Fortran 90                       2             99             30            186
DOS Batch                        2              8              1             43
INI                              1              0              0             11
TOML                             1              0              0             10
make                             1              4              6             10
Bourne Shell                     1              3             13              5
-------------------------------------------------------------------------------
SUM:                           765          18724          15212          61818
-------------------------------------------------------------------------------

gitinspector failed to run statistical information for the repository
editorialbot commented 1 year ago

Wordcount for paper.md is 1192

editorialbot commented 1 year ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1109/MCSE.2010.118 is OK
- 10.1088/1749-4680/8/1/014001 is OK
- 10.1145/1565824.1565827 is OK
- 10.1145/1238844.1238856 is OK

MISSING DOIs

- None

INVALID DOIs

- https://doi.org/10.1016/j.ascom.2014.12.001 is INVALID because of 'https://doi.org/' prefix
editorialbot commented 1 year ago

👉📄 Download article proof 📄 View article proof on GitHub 📄👈

danielskatz commented 1 year ago

👋 @yguclu please fix the incorrectly formatted DOI that editorialbot found. After you make the change to your .bib file, use the command @editorialbot check references to check again, and the command @editorialbot generate pdf once the references are right to make a new PDF. editorialbot commands need to be the first entry in a new comment.
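
For reference, the fix flagged above is just removing the resolver prefix from the `doi` field of the affected `.bib` entry (illustrative fragment, not the actual entry from the paper's bibliography):

```bibtex
% before: a full resolver URL in the doi field is flagged INVALID
doi = {https://doi.org/10.1016/j.ascom.2014.12.001},

% after: the bare DOI only
doi = {10.1016/j.ascom.2014.12.001},
```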

danielskatz commented 1 year ago

👋 @aholmes, @IgorBaratta, and @boegel - Thanks for agreeing to review this submission. This is the review thread for the paper. All of our communications will happen here from now on.

As you can see above, you each should use the command @editorialbot generate my checklist to create your review checklist. @editorialbot commands need to be the first thing in a new comment.

As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention openjournals/joss-reviews#4991 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for reviews to be completed within about 2-4 weeks. Please let me know if any of you require more time. We can also use editorialbot (our bot) to set automatic reminders if you know you'll be away for a known period of time.

Please feel free to ping me (@danielskatz) if you have any questions/concerns.

boegel commented 1 year ago

Review checklist for @boegel

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

aholmes commented 1 year ago

Review checklist for @aholmes

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

EmilyBourne commented 1 year ago

@editorialbot check references

editorialbot commented 1 year ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1109/MCSE.2010.118 is OK
- 10.1088/1749-4680/8/1/014001 is OK
- 10.1016/j.ascom.2014.12.001 is OK
- 10.1145/1565824.1565827 is OK
- 10.1145/1238844.1238856 is OK

MISSING DOIs

- None

INVALID DOIs

- None
EmilyBourne commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

👉📄 Download article proof 📄 View article proof on GitHub 📄👈

IgorBaratta commented 1 year ago

Review checklist for @IgorBaratta

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

danielskatz commented 1 year ago

👋 @aholmes, @IgorBaratta, and @boegel - it looks like this review is progressing, about 1 1/2 weeks in. Please let me know if you are having any issues I can help resolve so that we keep making progress.

aholmes commented 1 year ago

> 👋 @aholmes, @IgorBaratta, and @boegel - it looks like this review is progressing, about 1 1/2 weeks in. Please let me know if you are having any issues I can help resolve so that we keep making progress.

Going well. The authors are responsive, and the issues I've identified have been resolved, so I'll be sweeping through the checklist again.

aholmes commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

👉📄 Download article proof 📄 View article proof on GitHub 📄👈

IgorBaratta commented 1 year ago

> @aholmes, @IgorBaratta, and @boegel - it looks like this review is progressing, about 1 1/2 weeks in. Please let me know if you are having any issues I can help resolve so that we keep making progress.

Hi, it's going well. I plan to complete the checklist by the beginning of next week.

aholmes commented 1 year ago

@danielskatz I have finished my review.

danielskatz commented 1 year ago

Thanks @aholmes!

danielskatz commented 1 year ago

👋 @IgorBaratta - how is your review coming?

danielskatz commented 1 year ago

👋 @boegel - how is your review coming?

boegel commented 1 year ago

> 👋 @boegel - how is your review coming?

@danielskatz My apologies for the delay!

I've worked through most of it, but I'm still playing with the pyccel benchmarks to see if I can reproduce the results included in the paper. I have some suggestions for improvements in that specific section too: the paper doesn't mention what type of hardware the benchmark results were collected on, or how many cores were used. In addition, the use of -ffast-math is a bit concerning (see https://simonbyrne.github.io/notes/fastmath/).
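
To illustrate the concern: `-ffast-math` licenses the compiler to reassociate floating-point operations, which IEEE-754 arithmetic does not allow. A minimal sketch of the non-associativity involved (plain Python, not tied to any Pyccel benchmark):

```python
# IEEE-754 double addition is not associative. -ffast-math lets a compiler
# regroup expressions algebraically, which can silently change results.
a, b = 1e16, 1.0

absorbed = (a + b) - a   # b is lost when a + b rounds  -> 0.0
kept     = (a - a) + b   # the regrouped form keeps b   -> 1.0

print(absorbed, kept)
```

Either answer may be "fine" for a given benchmark, but the two groupings disagree, which is why results compiled with `-ffast-math` need extra scrutiny.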

I should open an issue on this in the pyccel repo, right?

danielskatz commented 1 year ago

Right, thanks

IgorBaratta commented 1 year ago

@danielskatz, my review checklist is now complete. All issues I've created on the repositories (main and benchmarks) have been addressed. So I'm happy to recommend it for publication.

danielskatz commented 1 year ago

Thanks @IgorBaratta

boegel commented 1 year ago

Update from my side: waiting for authors to come back with more reliable benchmark results - see https://github.com/pyccel/pyccel/issues/1295

danielskatz commented 1 year ago

👋 @yguclu & @EmilyBourne - can you let us know what's going on with https://github.com/pyccel/pyccel/issues/1295 ?

yguclu commented 1 year ago

@danielskatz We are running the benchmarks on two different supercomputing centers.

While the performance of Pyccel is in line with the test that we have run on GitHub Actions, the same is not true for Numba and Pythran (one is faster and the other much slower than before). Before presenting these results, we want to make sure that our configuration files are correct.

yguclu commented 1 year ago

@danielskatz After further investigation we have realized that our tests are not completely adequate for a fair comparison with Numba and Pythran. Before publication I would like to address the following urgent points:

  1. Allow the user (or developer) to run any given test manually without having to copy lots of setup instructions from the script run_benchmarks.py;
  2. Make sure that every test returns a number or an array, which can be checked for correctness against the pure-Python version;
  3. Increase the run-time of the accelerated functions to at least a good fraction of a second (currently some of our tests terminate after less than 1 microsecond).

I kindly request a week to take care of this.
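
Points 2 and 3 above could be sketched like this (hypothetical kernel and function names for illustration only, not the actual `run_benchmarks.py`):

```python
import timeit

def pure_python_kernel(n):
    # reference implementation used as ground truth (hypothetical example)
    return sum(i * i for i in range(n))

# stand-in for the Pyccel/Numba/Pythran-accelerated build of the same kernel
accelerated_kernel = pure_python_kernel

def run_case(n=10_000, min_runtime=0.2):
    # point 2: every test returns a value, checked against the pure-Python version
    expected = pure_python_kernel(n)
    result = accelerated_kernel(n)
    assert result == expected, f"wrong result: {result} != {expected}"

    # point 3: repeat the call until the total runtime is a good fraction of
    # a second, so per-call timings are not dominated by timer resolution
    reps, total = 1, 0.0
    while total < min_runtime:
        reps *= 2
        total = timeit.timeit(lambda: accelerated_kernel(n), number=reps)
    return total / reps  # mean time per call, in seconds

print(f"{run_case():.2e} s per call")
```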

danielskatz commented 1 year ago

no worries @yguclu - please let us know when we should come back to this. (and note that performance results are not strictly required for JOSS, but if they are there, they are part of the review)

yguclu commented 1 year ago

The new benchmark suite is almost ready: see this PR.

danielskatz commented 1 year ago

👋 @boegel - see☝️

And I notice you have a couple of other unchecked items in your checklist - if you can comment on what else you think is needed, as well as your thoughts on the benchmark work, that would be great.

yguclu commented 1 year ago

@danielskatz @boegel We have just merged the aforementioned PR and we consider the benchmark suite as final. This is now our checklist for completing the article:

* [ ]  Re-run the benchmarks (without the `-ffast-math` flag -- see [here](https://github.com/pyccel/pyccel-benchmarks/pull/17)) on a single dedicated node of our HPC facility

* [ ]  Update the images in the manuscript

* [ ]  Describe the architecture of the computing node in the manuscript (fixes [pyccel/pyccel/#1295](https://github.com/pyccel/pyccel/issues/1295))

* [ ]  Update Pyccel version to 1.7.2 (and pin this version in the manuscript)

* [ ]  Retrigger PDF build

We will check the items as we make progress. Please let me know if you see anything missing.

yguclu commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

👉📄 Download article proof 📄 View article proof on GitHub 📄👈

boegel commented 1 year ago

Thanks for the extra work on the benchmarks. I won't have time to redo the review this week, but (early) next week should be possible...

yguclu commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

👉📄 Download article proof 📄 View article proof on GitHub 📄👈

yguclu commented 1 year ago

@editorialbot commands

editorialbot commented 1 year ago

Hello @yguclu, here are the things you can ask me to do:


# List all available commands
@editorialbot commands

# Get a list of all editors' GitHub handles
@editorialbot list editors

# Check the references of the paper for missing DOIs
@editorialbot check references

# Perform checks on the repository
@editorialbot check repository

# Adds a checklist for the reviewer using this command
@editorialbot generate my checklist

# Set a value for branch
@editorialbot set joss-paper as branch

# Generates the pdf paper
@editorialbot generate pdf

# Generates a LaTeX preprint file
@editorialbot generate preprint

# Get a link to the complete list of reviewers
@editorialbot list reviewers
yguclu commented 1 year ago

> @danielskatz @boegel We have just merged the aforementioned PR and we consider the benchmark suite as final. This is now our checklist for completing the article:
>
> * [x]  Re-run the benchmarks (without the `-ffast-math` flag -- see [here](https://github.com/pyccel/pyccel-benchmarks/pull/17)) on a single dedicated node of our HPC facility
>
> * [x]  Update the images in the manuscript
>
> * [x]  Describe the architecture of the computing node in the manuscript (fixes [pyccel/pyccel/#1295](https://github.com/pyccel/pyccel/issues/1295))
>
> * [x]  Update Pyccel version to 1.7.2 (and pin this version in the manuscript)
>
> * [x]  Retrigger PDF build
>
> We will check the items as we make progress. Please let me know if you see anything missing.

Our checklist is complete. But the editorial header should now point to Pyccel version 1.7.2 instead of 1.7.0.

danielskatz commented 1 year ago

@editorialbot set v1.7.2 as version

editorialbot commented 1 year ago

Done! version is now v1.7.2

yguclu commented 1 year ago

> Thanks for the extra work on the benchmarks. I won't have time to redo the review this week, but (early) next week should be possible...

@boegel Have you had a chance to take another look?

danielskatz commented 1 year ago

👋 @boegel - can you take another look at this?

danielskatz commented 1 year ago

👋 @boegel - can you take another look at this?

boegel commented 1 year ago

Sorry for the lack of a response here, I've been quite busy the last couple of weeks...

I'll get back to it ASAP and finish my review

danielskatz commented 1 year ago

👋 @boegel - ping!