Open editorialbot opened 1 month ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
✅ OK DOIs
- 10.1016/j.prro.2021.02.003 is OK
- 10.48550/arXiv.2211.02701 is OK
- 10.1016/j.mri.2012.05.001 is OK
- 10.1158/0008-5472.can-18-0125 is OK
- 10.1016/j.cmpb.2021.106236 is OK
- 10.1038/s41598-023-41475-w is OK
- 10.1016/j.jmir.2024.101745 is OK
🟡 SKIP DOIs
- None
❌ MISSING DOIs
- None
❌ INVALID DOIs
- 10.21105/joss.04675 is INVALID
Software report:
github.com/AlDanial/cloc v 1.90 T=0.02 s (1016.4 files/s, 95061.3 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          13            294            130            925
Markdown                         3             67              0            218
TeX                              1              7              0             93
YAML                            2              6              0             37
-------------------------------------------------------------------------------
SUM:                            19            374            130           1273
-------------------------------------------------------------------------------
Commit count by author:
119 Asim Shrestha
33 Fereshteh Yousefirizi
27 Adam Watkins
9 Adam
5 Zhack47
4 ThomasBudd
4 zhack47
3 Adam Zyzik
3 Pedro Esquinas
2 Carlos F. Uribe
2 Pedro
2 Robin Hegering
2 asim-shrestha
2 igorhlx
1 Maxence Larose
1 Phillip Chlap
1 Samuel Ouellet
1 Tom Roberts
Paper file info:
📄 Wordcount for paper.md is 1722
✅ The paper includes a Statement of need section
License info:
✅ License found: MIT License (Valid open source OSI approved license)
👉 📄 Download article proof · 📄 View article proof on GitHub 👈
@pchlap, @suyashkumar, @xtie97, this is the review thread for the paper. All of our communications will happen here from now on.
As a reviewer, the first step is to create a checklist for your review by entering:
@editorialbot generate my checklist
at the top of a new comment in this thread.
These checklists contain the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. The first comment in this thread also contains links to the JOSS reviewer guidelines.
The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention openjournals/joss-reviews#7361 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.
Submissions like this can be tricky to review, as not all reviewers will necessarily be able to review all functionality. I hope with three reviewers we can cover everything. However, if there are aspects of the submission you can't review, please let me know.
We aim for reviews to be completed within about 2-4 weeks. Please let me know if any of you require some more time. We can also use EditorialBot (our bot) to set automatic reminders if you know you'll be away for a known period of time.
Please feel free to ping me (@adamltyson) if you have any questions/concerns.
@qurit-frizi just so that all outstanding comments are in the same thread, could you take a look at the invalid DOI, and see if the word count can be cut a bit? Thanks!
Hi @pchlap, @suyashkumar, @xtie97 👋
How is the review going? Let me know if you have any questions.
Hi @adamltyson, @qurit-frizi,
Thanks for having me review the JOSS submission for this great library. I have used RT-Utils often in the past and think it's a really useful and important tool within Radiation Oncology research.
I do think a bit of work is required to bring this submission in line with JOSS requirements. I have left my comments specific to each JOSS criterion below. In particular, I think the JOSS paper needs to be shortened, omitting certain technical details (moving these to the project documentation) and focusing on the JOSS requirements. I recommend referring to what a JOSS paper should contain and making amendments as necessary. I'd also recommend referring to other JOSS papers to see what they include/exclude (e.g. PyMedPhys: A community effort to develop an open, Python-based standard library for medical physics applications).
I'm here for any questions or clarification needed. I look forward to reviewing the next iteration of this submission!
Contribution and authorship: Has the submitting author (@qurit-frizi) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
Could you provide a breakdown of each author's contributions (where it's not already clear from the Git commit history)? @qurit-frizi seems to have made changes only to the README.md file, and I cannot determine Arman Rahmim's GitHub user ID to assess their contribution.
Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
It would be useful if example data were provided so the examples could easily be run. This is optional but would be great for someone looking for a quickstart with the tool. Consider hosting some test data on Zenodo which is downloaded as part of the example. Ideally the example would also include an example mask that can then be used in the add_roi function (rather than requiring the user to specify this mask to run the example). The example should be runnable from start to finish without requiring the user to make changes to use their own data.
Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
The API description in the documentation is incomplete: some details around the add_roi function are included, but other available functions, and details around their inputs and functionality, are missing.
Community guidelines: Are there clear guidelines for third parties wishing to 1. Contribute to the software 2. Report issues or problems with the software 3. Seek support
No community guidelines for contribution to the open-source library are available.
Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
I feel the summary assumes knowledge of the medical imaging domain, such as what the DICOM standard is and how structures are represented in the RTSTRUCT modality. Providing some additional details for non-specialist audience would be useful.
A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
I would like clarification on the statement ...integrating auto-segmentation solutions using deep learning in clinical environments is rare due to the lack of open-source frameworks that handle DICOM RT-Structure sets effectively.
This has not necessarily been the primary reason why clinical translation of DL auto-segmentation is challenging. Could you provide a reference? Furthermore, you then list other packages available to achieve this, but do not describe their shortcomings (why not just use them)?
It's also not clear how this tool 'optimizes workflows' as claimed in the statement of need. Further clarification of what is meant by this is needed.
State of the field: Do the authors describe how this software compares to other commonly-used packages?
As mentioned above, it isn't clear what this library offers in comparison to others available. The real-world example section also doesn't sufficiently describe the quality of results between this tool and the others tested. Some quantitative metrics to confirm this would be good, and could be presented in a table or plot. The graphic in Figure 1 doesn't support the claims made in the text (for me at least).
Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
The paper is well written overall, however I feel it contains too much specific detail for a JOSS paper. A lot of this detail would be better suited in the tool documentation, where an example could be provided to better understand a specific functionality. I think the content in the Available Manipulations, Handling of DICOM Headers, Incorporating an ROI Mask and Practical Applications would be better included in the documentation.
I also do not recommend describing that the tool is hosted on GitHub with the number of stars, and the installation procedure in the JOSS paper. This can easily be found on the GitHub page and documentation.
The use of the term "module" throughout the paper is confusing. Usually this would refer to a specific module within the tool or library. Consider rewording these parts.
Also note the bullet list in the Available Manipulations section isn't rendering properly.
References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
Consider adding a reference, as mentioned above (to support the claim that clinical translation of DL models is rare due to a lack of open-source DICOM RTSTRUCT conversion tools). Also check the citation syntax, since some references don't seem to render properly in the text.
Hi @suyashkumar, @xtie97, how are the reviews going? Let me know if I can clarify anything.
@qurit-frizi, not sure if you've started addressing @pchlap's comments, but addressing these (and posting here) may help speed up the other reviews.
Surely
Submitting author: @qurit-frizi (Fereshteh Yousefi Rizi)
Repository: https://github.com/qurit/rt-utils
Branch with paper.md (empty if default branch): development
Version: V1.2.0
Editor: @adamltyson
Reviewers: @pchlap, @suyashkumar, @xtie97
Archive: Pending
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@pchlap & @suyashkumar & @xtie97, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @adamltyson know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @pchlap
📝 Checklist for @xtie97
📝 Checklist for @suyashkumar