editorialbot opened 7 months ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.90 T=0.09 s (424.4 files/s, 602629.2 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
CSV                              5              1              0          37679
Jupyter Notebook                18              0          10687           1974
TOML                             2            254              1           1156
TeX                              1             18              0            261
Julia                            2             43             20            201
Markdown                         2             91              0            121
YAML                             1              1              4             18
JSON                             6              0              0              6
-------------------------------------------------------------------------------
SUM:                            37            408          10712          41416
-------------------------------------------------------------------------------
Commit count by author:
98 AndreasKuhn-ak
15 Sabine Fischer
6 Kilian Volmer
2 HackMD
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1038/d41586-019-02310-3 is OK
- 10.1137/141000671 is OK
- 10.48550/arXiv.1209.5145 is OK
- 10.1016/j.cosrev.2020.100254 is OK
- 10.48550/arXiv.2109.09973 is OK
- 10.1016/j.cpc.2018.02.004 is OK
- 10.21105/joss.03349 is OK
- 10.1002/mrm.28792 is OK
- 10.1007/s10614-020-09983-3 is OK
- 10.1371/journal.pone.0209358 is OK
- 10.48550/arXiv.2211.02740 is OK
- 10.48550/arXiv.2212.07293 is OK
- 10.21105/joss.03349 is OK
MISSING DOIs
- No DOI given, and none found for title: How to solve the same numerical Problem in 7 diffe...
- No DOI given, and none found for title: Julia Language Delivers Petascale HPC Performance
- No DOI given, and none found for title: Julia Micro-Benchmarks
INVALID DOIs
- None
Paper file info:
Wordcount for paper.md is 1166
The paper includes a Statement of need section
License info:
License found: Other (Check here for OSI approval)
Download article proof | View article proof on GitHub
It looks to me that the license for the code is MIT and the license for the material is CC-SA-4.0. Can you confirm this @AndreasKuhn-ak?
Okay, we are ready to roll!
@jarvist and @gcdeshpande, thanks for agreeing to review this exciting work! If you work through the checklist and there are any problems/comments about the material, I recommend opening an issue on the material repository, and the authors can sort them out. More information about the review guidelines can be found on the Open Journals documentation pages: https://openjournals.readthedocs.io/en/jose/reviewer_guidelines.html
If anyone has any questions, ping me on here!
It looks to me that the license for the code is MIT and the license for the material is CC-SA-4.0. Can you confirm this @AndreasKuhn-ak?
Yes, that is correct.
Hey @jarvist and @gcdeshpande, I noticed that there has been no change on this review for a few months. Any chance of getting it going?
Does the paper.md file include a list of authors with their affiliations?

Both the Release and the Paper are written targeting the Version 1.0 (Aug 2023) release. This is fine, but you may also want to update both, as it's been a while and there are some edits made in the Autumn of 2023, @AndreasKuhn-ak.
In terms of the paper, I think one thing that would be useful to add is some idea of what level of programming background is ideal. You say that any scientific programmer can use it, but would the content still be useful for an intermediate or more advanced programmer? Similarly, in the 'experience of use' section it would be good to know what the background of the 13 self-study people was. PhD students? Undergraduates? People in industry? etc.
Also, is the intent for this material to be a self-study only course, or do you envisage that it would be useful / adaptable for classroom teaching? Some kind of more clear guide in the documentation would be really useful.
Does the paper.md file include a list of authors with their affiliations?

Both the Release and the Paper are written targeting the Version 1.0 (Aug 2023) release. This is fine, but you may also want to update both, as it's been a while and there are some edits made in the Autumn of 2023, @AndreasKuhn-ak.
Sorry for the late response. I was on vacation when you wrote, and I didn't check my emails thoroughly afterward. I can create a new release that incorporates the changes I made in Autumn 2023. I don't think the paper needs an update regarding the new release, as these changes were only corrections of some typos.
In terms of the paper, I think one thing that would be useful to add is some idea of what level of programming background is ideal. You say that any scientific programmer can use it, but would the content still be useful for an intermediate or more advanced programmer? Similarly, in the 'experience of use' section it would be good to know what the background of the 13 self-study people was. PhD students? Undergraduates? People in industry? etc.
I will expand this part in the paper and add the requested information.
Also, is the intent for this material to be a self-study only course, or do you envisage that it would be useful / adaptable for classroom teaching? Some kind of more clear guide in the documentation would be really useful.
I would say the primary intent is for this to be a self-study course, and we have used it in this way. However, I believe the course can also be effectively taught in a classroom setting for undergraduate students of any subject and graduate students without prior experience in programming, without any modifications. For example, a suitable format could be a block course where, in the morning, there is a teaching block of one or two lessons (depending on the size), and in the afternoon, students work alone or in groups on the exercises. In the evening or the next day, there could be a session where the solutions to the exercises are presented, and any questions regarding the exercises are answered.
I am very confident that this approach would work without major problems, as we teach a very similar Python course in exactly this format every semester for undergraduate biology students.
I will expand the target audience section accordingly.
@jarvist Thank you for taking the time to review our paper.
@jarvist, Thank you for your suggestions. I have updated the paper and README file based on your feedback. I believe it would be best to only create a new release once the review process is complete. This will ensure that the final, polished version is shared, rather than potentially releasing multiple updates during the review stage. Please let me know if you have any other suggestions or feedback. I appreciate you taking the time to review the materials and help me improve the quality of the work.
@editorialbot generate my checklist
I am investigating why this didn't work. Sorry, I didn't notice.
@gcdeshpande, could you please try again to generate the checklist? There was a space at the start of your comment that might have caused the editorial bot an issue.
@gcdeshpande just checking in. Could you try again with the checklist generation?
@jarvist, Thank you for your suggestions. I have updated the paper and README file based on your feedback. I believe it would be best to only create a new release once the review process is complete. This will ensure that the final, polished version is shared, rather than potentially releasing multiple updates during the review stage. Please let me know if you have any other suggestions or feedback. I appreciate you taking the time to review the materials and help me improve the quality of the work.
A pleasure! Please give me a poke once @gcdeshpande has been able to have a look, and I'll re-review.
I believe it's absolutely fine to 'stamp' the final release as part of the post-acceptance checklist and make sure all the different versions are synced up, etc.
@gcdeshpande, I've edited your comment manually and copied over a review checklist for you. I don't know why it failed when you issued the command, but we should be ready to proceed now. Thank you for your contributions!
@gcdeshpande, I have tried contacting you by email with no reply. Are you still willing to review this work? If not, we will find another reviewer.
Submitting author: @AndreasKuhn-ak (Andreas Kuhn)
Repository: https://github.com/AndreasKuhn-ak/WS2022_Julia
Branch with paper.md (empty if default branch):
Version: V1.0.0
Editor: @arm61
Reviewers: @jarvist, @gcdeshpande
Archive: Pending
Paper kind: learning module
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@jarvist & @gcdeshpande, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
@editorialbot generate my checklist
The reviewer guidelines are available here: https://openjournals.readthedocs.io/en/jose/reviewer_guidelines.html. Any questions/concerns please let @arm61 know.
Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.
Checklists
Checklist for @jarvist