editorialbot opened 3 months ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1088/1367-2630/acce5a is OK
- 10.1103/PhysRevA.108.063512 is OK
MISSING DOIs
- No DOI given, and none found for title: Full Optical Control of Quantum Fluids of Light in...
INVALID DOIs
- None
Software report:
github.com/AlDanial/cloc v 1.90  T=0.03 s (836.3 files/s, 121132.8 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          22            242            485           2796
Markdown                         2             94              0            215
YAML                             2              3              4             37
TeX                              1              3              0             32
-------------------------------------------------------------------------------
SUM:                            27            342            489           3080
-------------------------------------------------------------------------------
Commit count by author:
144 Tangui Aladjidi
27 taladjidi
4 clpiekarski
3 ruggiamp@gmail.com
1 TANGUI ALADJIDI
Paper file info:
📄 Wordcount for paper.md is 495
✅ The paper includes a Statement of need section
License info:
✅ License found: MIT License (Valid open source OSI approved license)
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot commands
Hello @Abinashbunty, here are the things you can ask me to do:
# List all available commands
@editorialbot commands
# Get a list of all editors' GitHub handles
@editorialbot list editors
# Add a checklist for the reviewer using this command
@editorialbot generate my checklist
# Set a value for branch
@editorialbot set joss-paper as branch
# Run checks and provide information on the repository and the paper file
@editorialbot check repository
# Check the references of the paper for missing DOIs
@editorialbot check references
# Generate the paper PDF
@editorialbot generate pdf
# Generate a LaTeX preprint file
@editorialbot generate preprint
# Get a link to the complete list of reviewers
@editorialbot list reviewers
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1088/1367-2630/acce5a is OK
- 10.1103/PhysRevA.108.063512 is OK
MISSING DOIs
- No DOI given, and none found for title: Full Optical Control of Quantum Fluids of Light in...
INVALID DOIs
- None
@taladjidi I just noticed from the number of DOIs that there are only three references, and they are self-references. Is it correct that there is nothing else to cite from other authors?
Hi @RMeli, sorry for the late reply, I was on leave. I will update the manuscript to add a more representative bibliography, as well as clarify the statement of need and what this package brings compared to other projects.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hello @obliviateandsurrender and @Abinashbunty. How are you? How is the review coming along? Please let me know if you need any help or guidance.
@RMeli I'm almost done with my review. I need to run some tests; I'll most likely be done by Friday.
Hey @RMeli! I should be able to finish my review by next week.
Thanks for the update! No pressure, I was just checking in to see if everything is going alright.
Hey all! I'm really sorry for the delay, but I just finished my first iteration of the review. Overall, excellent work @taladjidi! I am leaving some comments for your consideration, after which I will be happy to give this a green light:

1. Consider including a short note on the installation of the `FFTW` libraries to ease the installation of `pyFFTW`. For example, on my Mac machine I had to install it by building from source.
2. Tests can be further improved:
   * I might be mistaken, but `test/run_all_tests.py` doesn't actually run the tests but rather just calls the `main()` methods of all the `test_{}.py` files. I couldn't see any `assert` statements in these methods.
   * Make sure all the tests run. For example, I could not run `tests/test_broadcasting.py`.
   * Optional: using `pytest` for the tests to have better control over automation and assessments like coverage.
3. Consider adding a `requirement.txt` for the library, especially for building the documentation: I had to install many of the sub-packages required by `mkdocs` one by one.
4. Add clearer instructions for building the docs; additionally, you may try hosting them via Read the Docs. Note that the `Install` section is currently not displayed in the docs.
5. It would be really helpful if you could add some more tutorials/examples for using the library. Currently there is just one basic-usage example in the README, which I feel is insufficient.
   * One of these could be the code for the figures in the paper, which could help me tick off the "reproducibility" section of my review.
6. The basic usage example doesn't define the variable `backend`. Additionally, I could not find any docs for the visualizations that can be produced from the simulation output. This seems like a major strong point of the library, which could be shown via examples.
7. Optional: the inheritance diagram might be adapted for dark mode on GitHub.
8. Suggestions for the paper:
   * In the summary, change "... equations is a crucial need to model realistic experimental scenarii." to "... equations is crucial to modelling realistic experimental scenarios."
   * It might be nice to have a benchmark that reflects the mentioned "Statement of need", comparing the computational resources for `FourierGPU.jl` and `NLSE`.
   * In Functionality, change "Graphical Processing Units" to "graphics processing units" and "Cupy(Okuta)" to "Cupy (Okuta)".
   * I couldn't find any reference to `Figure 1` in the text, so you may reorder the figures accordingly. Additionally, it might be helpful for readers if you renamed the sub-figure titles according to the caption.

@editorialbot generate pdf
Thank you for the detailed list of comments @obliviateandsurrender. @taladjidi can you please have a look and address them?
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@RMeli I'm still finishing the review. Unfortunately I was sick for a few days. I'll be done with the review soon. Sorry for the delay!
Hi @obliviateandsurrender, thanks for the review and your thoughtful remarks. I recently gave a tutorial on this piece of software to my team, so I have a Jupyter notebook on hand; it should not take too much back and forth! @RMeli I'll try to address everything by the end of the week. Thank you both again for your time.
@Abinashbunty no worries and get well soon!
Hey all! I'm really sorry for the delay, but I just finished my first iteration of the review. Overall, excellent work @taladjidi! I am leaving some comments for your consideration, after which I will be happy to give this a green light -
1. Consider including a short note on the installation of `FFTW` libraries to ease the installation for `pyFFTW`. For example for my Mac machine, I had to install it via building from source.
This is now covered in the PyFFTW subsection of the README. In my case I just had to install it with brew so I didn't think of it, but that was indeed not the case for some members of my team! Thanks for the heads up.
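For readers following along, the install sequence described might look like the following sketch; the exact package names are assumptions here, and the PyFFTW subsection of the README remains authoritative:

```shell
# Hypothetical install sequence -- adjust package names per platform.
# macOS (Homebrew):
brew install fftw
# Debian/Ubuntu equivalent:
# sudo apt-get install libfftw3-dev
# Then install the Python wrapper, which links against the system FFTW:
pip install pyfftw
```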
2. Tests can be further improved - * I might be mistaken, but `test/run_all_tests.py` doesn’t actually run the tests but rather just calls `main()` methods for all the `test_{}.py`. I couldn't see any `assert` statements in these methods. * Make sure all the tests run. For example, I could not run `tests/test_broadcasting.py`. * Optional - using `pytest` for the tests to have a better control over automation and assessments like coverage.
I deleted the `run_all_tests` script, which was just for testing the tests. There is a GitHub action that actually uses pytest to run all the tests, so this part is already covered. I improved test coverage by including array contiguity checks and memory alignment checks. However, I didn't manage to install the CUDA dependencies on the GitHub action, so for now it only checks the CPU backend.
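For reference, a pytest-style contiguity and alignment check of the kind mentioned might look like the following minimal sketch; the array shape, dtype, and 16-byte alignment figure are illustrative assumptions, not the package's actual tests:

```python
import numpy as np

def test_field_contiguity_and_alignment():
    # Illustrative stand-in for a simulation field array.
    field = np.zeros((256, 256), dtype=np.complex64)
    # FFT backends generally require C-contiguous input.
    assert field.flags["C_CONTIGUOUS"]
    # Check the buffer's byte alignment (16 bytes here, illustrative;
    # pyFFTW can exploit wider SIMD alignment when available).
    assert field.ctypes.data % 16 == 0
```

Run under pytest, such a test fails loudly on mis-allocated arrays instead of silently degrading FFT performance.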
3. Consider adding `requirement.txt` for the library. Especially for building the documentation - I had to install many of the required sub-packages by `mkdocs` one-by-one.
Done!
4. Add more clear instructions to build the docs, and additionally, you may try hosting them via `Read The Docs`. Note, currently, the `Install` section is not displayed in the docs.
The docs now have a dedicated README file detailing the build instructions, and the site should be a bit more polished now. Since it is a late addition, the site is not perfect yet, but we are slowly getting there.
5. It would be really helpful if you could add some more `tutorials/examples` for using the library. Currently, there’s just one example in the README as a basic usage, which I feel is insufficient.
There is now a Jupyter notebook in the "Tutorial" section covering three different use cases and introducing the physical situation.
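To give a flavour of the physics such tutorials cover, here is a minimal, self-contained split-step sketch of 1D NLSE propagation in plain NumPy. This is illustrative only: the function name, parameters, and equation normalisation are assumptions for the sketch, not the package's API.

```python
import numpy as np

def split_step_1d(psi, dz, dx, k0=1.0, g=0.01, steps=100):
    """Sketch of split-step Fourier propagation for the 1D NLSE:

        i dpsi/dz = -(1/2 k0) d^2 psi/dx^2 + g |psi|^2 psi

    Both sub-steps are pure phase multiplications, so the L2 norm
    of the field is conserved up to FFT round-off.
    """
    kx = 2 * np.pi * np.fft.fftfreq(psi.size, d=dx)
    lin = np.exp(-1j * kx**2 * dz / (2 * k0))  # kinetic propagator in k-space
    for _ in range(steps):
        psi = np.fft.ifft(lin * np.fft.fft(psi))          # linear step
        psi = psi * np.exp(-1j * g * np.abs(psi)**2 * dz)  # nonlinear step
    return psi
```

A quick usage check: propagate a Gaussian and verify that the total intensity is unchanged, which is a natural first sanity test for any such solver.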
* One of these could be the code for the figures in the paper, which could help me tick off the "reproducibility" section of my review.

6. The basic usage example doesn't define the variable `backend`. Additionally, I could not find any docs for the visualizations that can be produced from the simulation output. This seems like a major strong point of the library, which could be shown via examples.

7. Optional: the inheritance diagram might be adapted for dark mode on GitHub.
Done!
8. Suggestions for the paper:
   * In the summary, change "... equations is a crucial need to model realistic experimental scenarii." to "... equations is crucial to modelling realistic experimental scenarios."
   * It might be nice to have a benchmark that reflects the mentioned "Statement of need", comparing the computational resources for `FourierGPU.jl` and `NLSE`.
   * In Functionality, change "Graphical Processing Units" to "graphics processing units" and "Cupy(Okuta)" to "Cupy (Okuta)".
   * I couldn't find any reference to `Figure 1` in the text, so you may reorder the figures accordingly. Additionally, it might be helpful for readers if you renamed the sub-figure titles according to the caption.
I will try to address the other points today, but we are currently moving labs so this might be a bit tricky!
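The runtime benchmark suggested in point 8 could be sketched with a small timing harness like the one below. This is a hypothetical sketch, not the paper's actual benchmark; the FFT size and repeat count are arbitrary, and reporting the minimum alongside the mean makes run-to-run variability visible:

```python
import time
import numpy as np

def bench(fn, repeats=5):
    """Return (min, mean) wall-clock time of fn() over several repeats."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    return min(samples), sum(samples) / len(samples)

# Example: time a 2D FFT of the kind used in a propagation step.
a = (np.random.rand(256, 256)).astype(np.complex64)
t_min, t_mean = bench(lambda: np.fft.fft2(a))
```

The same harness could wrap one propagation step of each backend (or of a comparable Julia call timed separately) to produce the comparison the reviewer asks for.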
👋 @taladjidi, @obliviateandsurrender, and @Abinashbunty
Could you all provide a short few sentences/bullet points on how things are going with this review?
Thanks and keep up the great work!
Hi @RMeli @obliviateandsurrender, to follow up on your comments:
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Thanks for the update @taladjidi. Looking at Figure 2 again, one concern I have is the variability of the GPU implementation, which spans several orders of magnitude. How do you perform the benchmark and do you know why there is such a high variability? I think it would be interesting and good to increase the total runtime (i.e. use more steps), to see if the variability decreases.
Ah yes, sorry, this was a mistake on my side! I did not read the matplotlib docs carefully enough :) I plotted the min and max times as the error, instead of plotting the difference between the min/max and the average! The min and max are now within 10% of the average.
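For the record, the fix described comes down to matplotlib's convention that `yerr` holds bar *lengths* measured from the plotted value, not absolute min/max values. A minimal sketch with made-up timing numbers:

```python
import numpy as np

# Illustrative timings in seconds (3 repeats per problem size); made up.
times = np.array([
    [1.02e-3, 0.98e-3, 1.10e-3],
    [4.10e-3, 3.90e-3, 4.40e-3],
])
mean = times.mean(axis=1)
# Asymmetric error bars: distances from the mean down to the min
# and up to the max, as a (2, N) array for matplotlib's errorbar.
yerr = np.vstack([mean - times.min(axis=1), times.max(axis=1) - mean])
# plt.errorbar(sizes, mean, yerr=yerr)  # <- correct; passing raw min/max
#                                       #    values would hugely inflate the bars
```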
@RMeli @obliviateandsurrender just pushed the new figs.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi @RMeli, I just finished my review. Excellent package; I had a great time checking several of its features and doing a thorough walkthrough. 😄 My earlier comment about the DOIs is fixed. Feel free to suggest otherwise. Even the tests ran on my system. 🚀 I just had a few suggestions:
Once the related changes are implemented, the review from my side will be complete (feedback-wise).
Hi @Abinashbunty thanks for your comments.
I also added the full reference for all the classes and fixed the docstrings, so now everything gets parsed correctly.
Thank you @taladjidi for the update. I checked and approve the changes.
@RMeli My review is complete now. All the necessary changes have been implemented. Feel free to proceed further. 🚀
Thank you @Abinashbunty for completing your review, much appreciated!
@obliviateandsurrender how are you doing with your review? Please let me know if there is anything I can help with.
Thanks @RMeli! I will probably do it by Friday.
Hi all, is there any news on this? To be completely transparent, I will soon be leaving the lab for good as I am switching jobs, and I would really like to complete this review before then 👼
Hey @taladjidi! Sorry for the delay! Thank you for adding all the changes. @RMeli It looks good to go from my end. :rocket:
Thank you very much @Abinashbunty and @obliviateandsurrender for the review!
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot check references
Submitting author: <!--author-handle-->@taladjidi<!--end-author-handle--> (Tangui Aladjidi)
Repository: https://github.com/Quantum-Optics-LKB/NLSE
Branch with paper.md (empty if default branch):
Version: 2.3.0
Editor: <!--editor-->@RMeli<!--end-editor-->
Reviewers: @Abinashbunty, @obliviateandsurrender
Archive: 10.5281/zenodo.12795395
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@Abinashbunty & @obliviateandsurrender, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @RMeli know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @Abinashbunty
📝 Checklist for @obliviateandsurrender