Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
```
github.com/AlDanial/cloc v 1.88  T=0.10 s (532.5 files/s, 156966.2 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Jupyter Notebook                20              0          11341           1323
Python                          16            352            304            963
Markdown                         6            149              0            312
TeX                              1             13              0            115
YAML                             6              6              4             79
INI                              1              6              0             43
CSS                              1              4              0             18
-------------------------------------------------------------------------------
SUM:                            51            530          11649           2853
-------------------------------------------------------------------------------
```
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 2289
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.3390/info11020108 is OK
- 10.5281/zenodo.6639453 is OK
- 10.1016/j.cmpb.2021.106236 is OK
- 10.48550/arXiv.2301.00808 is OK
- 10.1093/comjnl/27.2.97 is OK
- 10.1038/s41597-022-01721-8 is OK
- 10.26044/ecr2020/C-05555 is OK
- 10.9781/ijimai.2021.05.002 is OK
- 10.3390/cancers14102372 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Hello again! 👋 @patrickraoulphilipp @NMontanaBrown @fepegar FYI @skaliy
This is the review thread for the paper. All of our higher-level communications will happen here from now on; review comments and discussion can happen in the repository of the project (details below).
📓 Please read the "Reviewer instructions & questions" in the comment from our editorialbot (above).
✅ All reviewers get their own checklist with the JOSS requirements; you can generate them as described in the editorialbot comment above. As you go over the submission, please check any items that you feel have been satisfied.
💻 The JOSS review is different from most other journals: The reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention the link to https://github.com/openjournals/joss-reviews/issues/5297 (this issue) so that a link is created to this thread. That will also help me to keep track!
❓ Please also feel free to comment and ask questions on this thread.
🎯 We aim for the review process to be completed within about 4-6 weeks, but please make a start well ahead of this: JOSS reviews are iterative by nature, and any early feedback you can provide to the author will be very helpful in meeting this schedule.
ping @fepegar and @patrickraoulphilipp: would you mind generating your checklists (see the information above) and working on at least the first two items? This is to make sure everything works well and the reviewing process is underway. Thanks! 🙏
@NMontanaBrown would you mind also working on the first item (COI policy)? Thanks!
Hi @britta-wstnr and @skaliy, I have a question about the "Substantial Scholarly Effort" part of the checklist.
"In addition, JOSS requires that software should be feature-complete (i.e., no half-baked solutions), packaged appropriately according to common community standards for the programming language being used (e.g., Python, R), and designed for maintainable extension (not one-off modifications of existing tools). “Minor utility” packages, including “thin” API clients, and single-function packages are not acceptable."
From my knowledge of MONAI, I am not convinced at the moment that this package fulfils the latter criterion. MONAI already provides all of the functionality that this package claims to offer, including tutorials in the form of Jupyter notebooks, and the integration of TorchIO has already been achieved within MONAI, so this is not a specific novelty as far as I'm aware.
@skaliy, could you comment on this? And @britta-wstnr, could you advise?
Dear @NMontanaBrown, thank you for the feedback and for inviting our response! 👍
We are great fans of MONAI and their enormous efforts to support the entire “value chain” of AI in medical imaging, from data labeling to deployment in hospital infrastructure.
We naturally believe that there are several significant scholarly contributions in our work. Here are some of the main points:
fastMONAI is not only tightly integrated with MONAI and TorchIO but also fastai. The fastai library has a nice and flexible design, with a layered API, enabling both a low-code approach and the option to dig deep into the underlying PyTorch code. By building on fastai, we can bring these features into the medical imaging domain.
In a sense, we provide a wrapper. Still, it is a wrapper of three very powerful libraries, MONAI, fastai, and TorchIO (and they, of course, are, in a sense, wrappers of other libraries, like PyTorch, SimpleITK, NiBabel, PyDicom, and so on). But we’ve put significant effort into constructing a practical, robust wrapper, which is also well-documented (with the documentation automatically synchronized with code updates).
In general, even if the functionality and objectives of Project MONAI, nnU-Net, DeepMedic, and others overlap with what we’re trying to do in fastMONAI, we believe that the community benefits from a plethora of approaches.
We have found fastMONAI very useful for our research, enabling quick prototyping while retaining lower-level access, and for teaching in university courses and workshops.
We think that the development process of fastMONAI (and fastai) using nbdev, and the fact that our paper manuscript is executable as a Jupyter Notebook, could be of interest to the broader open-source community served by JOSS.
Finally, I’m handing in my thesis in a week, so the past two months have been very hectic. But once some time frees up, we have multiple plans for the further development of fastMONAI, including new tutorials. We will also continue our work on integrating fastMONAI with our hospital’s (open-source) research PACS system (see https://bora.uib.no/bora-xmlui/handle/11250/3021987 and https://github.com/mmiv-center/Research-Information-System).
Again, we absolutely see where you are coming from, given how great the MONAI framework is, and we greatly appreciate that you allowed us to argue for our work.🙂
Thank you for the feedback @NMontanaBrown and thank you for your response @skaliy!
It would be very useful if @fepegar and @patrickraoulphilipp could give their opinion/assessment of this as well (and @NMontanaBrown of course feel free to further chime in or comment on @skaliy's reply).
Based on @skaliy's reply, I would say the question to answer is whether this statement from our documentation is applicable (see added emphasis): "Your software should be a significant contribution to the available open source software that either enables some new research challenges to be addressed or makes addressing research challenges significantly better (e.g., faster, easier, simpler)." (from https://joss.readthedocs.io/en/latest/submitting.html#substantial-scholarly-effort)
Hi, all. I've performed a thorough pass on the paper and repository. I got an error while running the notebook/paper, which I reported, as you might have seen above.
I've been waiting for your replies to @NMontanaBrown's comment, with which I strongly agree.
I recommend that the paper be rejected.
From my point of view, fastMONAI comprises a small set of convenience tools and thin wrappers that have been useful for some of the author's research projects, but I do not think it addresses research challenges for a general audience of researchers in the field.
I will reply to some of @skaliy's comments below.
> In a sense, we provide a wrapper. Still, it is a wrapper of three very powerful libraries, MONAI, fastai, and TorchIO (and they, of course, are, in a sense, wrappers of other libraries, like PyTorch, SimpleITK, NiBabel, PyDicom, and so on). But we’ve put significant effort into constructing a practical, robust wrapper, which is also well-documented (with the documentation automatically synchronized with code updates).
I think that most of the features of e.g. MONAI and TorchIO use or are built on top of PyTorch, SimpleITK, etc., as opposed to being wrappers of those lower-level libraries.
Here are some samples from the documentation:
Here is the code for the second screenshot:
```python
# %% ../nbs/03_vision_augment.ipynb 8
def do_pad_or_crop(o, target_shape, padding_mode, mask_name, dtype=torch.Tensor):
    pad_or_crop = tio.CropOrPad(target_shape=target_shape, padding_mode=padding_mode, mask_name=mask_name)
    return dtype(pad_or_crop(o))

# %% ../nbs/03_vision_augment.ipynb 9
class PadOrCrop(DisplayedTransform):
    '''Resize image using TorchIO `CropOrPad`.'''
    order=0

    def __init__(self, size, padding_mode=0, mask_name=None):
        if not is_listy(size): size=[size,size,size]
        self.size, self.padding_mode, self.mask_name = size, padding_mode, mask_name

    def encodes(self, o:(MedImage, MedMask)):
        return do_pad_or_crop(o, target_shape=self.size, padding_mode=self.padding_mode, mask_name=self.mask_name, dtype=type(o))
```
I would argue that many of these transforms are very thin wrappers, with hard-coded arguments that might not generalize to many applications. Moreover, they are not "well-documented" as claimed.
> In general, even if the functionality and objectives of Project MONAI, nnU-Net, DeepMedic, and others overlap with what we’re trying to do in fastMONAI, we believe that the community benefits from a plethora of approaches.

> We have found fastMONAI very useful for our research, enabling quick prototyping while having lower-level access and for teaching in university courses and workshops.
It's not clear whether the community is benefiting from fastMONAI.
It seems that most of the stars on the GitHub repo (27/38) were obtained within the first month of its existence (September 2022), and the count has been growing very slowly since then, suggesting little adoption from the community.
There are no dependents of the library on GitHub.
Five papers are cited in the "Research projects using fastMONAI" section of the paper. I believe that 1) @skaliy is an author in all of them, and 2) fastMONAI is not mentioned in the papers.
> We think that the development process of fastMONAI (and fastai) using nbdev, and the fact that our paper manuscript is executable as a Jupyter Notebook, could be of interest to the broader open-source community served by JOSS.
nbdev is not a standard framework in widely used libraries in the field. Writing code in notebooks and generating the Python library programmatically from them makes the repository harder to navigate and understand, from my point of view. Moreover, there is not a clear or standard structure for unit tests.
I am not sure that the fact that the paper is executable as a Jupyter notebook is of interest to the broader open-source community.
I suppose many of these issues are inherited from https://github.com/fastai/nbdev-template, of which fastMONAI is a fork.
The name fastMONAI (fastai + MONAI?) suggests that the library is a faster version of MONAI, which is not the case. I suggest using a different name.
Consider the following snippet from the library.
```python
# %% ../nbs/03_vision_augment.ipynb 37
class RandomMotion(DisplayedTransform):
    '''Apply TorchIO `RandomMotion`.'''
    split_idx,order=0,1

    def __init__(self, degrees=10, translation=10, num_transforms=2, image_interpolation='linear', p=0.5):
        self.degrees,self.translation, self.num_transforms, self.image_interpolation, self.p = degrees,translation, num_transforms, image_interpolation, p

    def encodes(self, o:(MedImage)): return MedImage.create(_do_rand_motion(o, degrees=self.degrees,translation=self.translation, num_transforms=self.num_transforms, image_interpolation=self.image_interpolation, p=self.p))
    def encodes(self, o:(MedMask)):return o
```
It would greatly help to follow a common coding style, such as PEP 8, to improve readability and maintainability (and reviewing). I would recommend a combination of `pre-commit`, `black`, and `flake8` to enforce a consistent, standard coding style.
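For what it's worth, a minimal `.pre-commit-config.yaml` along these lines would wire those tools together (the `rev` values below are illustrative placeholders; they should be pinned to current releases):

```yaml
# Illustrative .pre-commit-config.yaml -- rev values are placeholders,
# pin them to the latest released tags of each tool.
repos:
  - repo: https://github.com/psf/black
    rev: 23.3.0
    hooks:
      - id: black
  - repo: https://github.com/pycqa/flake8
    rev: 6.0.0
    hooks:
      - id: flake8
```

After adding the file, running `pre-commit install` once per clone activates the hooks for every subsequent commit.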
Certain typing annotations are invalid, as in the following snippet.

```python
class PadOrCrop(DisplayedTransform):
    '''Resize image using TorchIO `CropOrPad`.'''
    order=0

    def __init__(self, size, padding_mode=0, mask_name=None):
        if not is_listy(size): size=[size,size,size]
        self.size, self.padding_mode, self.mask_name = size, padding_mode, mask_name

    def encodes(self, o:(MedImage, MedMask)):
        return do_pad_or_crop(o, target_shape=self.size, padding_mode=self.padding_mode, mask_name=self.mask_name, dtype=type(o))
```
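To illustrate: the tuple form `o:(MedImage, MedMask)` is a fastai type-dispatch convention rather than valid standard typing, so type checkers and `typing` utilities cannot consume it. A sketch of the standard equivalent, using stand-in classes (not the fastMONAI originals), could look like this:

```python
# Sketch: the standard-typing equivalent of fastai's tuple annotation.
# MedImage and MedMask are stand-in classes here, not the fastMONAI originals.
from typing import Union, get_type_hints

class MedImage: ...
class MedMask: ...

# fastai writes `o: (MedImage, MedMask)`; the tool-friendly form is a Union:
def encodes(o: Union[MedImage, MedMask]):
    return o

# Standard tooling can now introspect the annotation.
hints = get_type_hints(encodes)
print(hints["o"])
```

On Python 3.10+ the same can be written as `o: MedImage | MedMask`, which static analyzers such as mypy also understand.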
The following line, which I believe is very typical in fastai, also makes reviewing quite difficult:

```python
from fastai.data.all import *
```
It is also used in the paper/notebook/tutorial, making it difficult to understand where some functions or classes come from.
Here is one of the many resources explaining why this is typically considered bad practice: https://www.geeksforgeeks.org/why-import-star-in-python-is-a-bad-idea/
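A small stdlib-only sketch makes the reviewing problem concrete: a wildcard import floods the namespace, so a reader cannot tell from the call site where a name came from, whereas explicit imports keep provenance obvious.

```python
# Illustrative, stdlib-only: compare the namespaces produced by a wildcard
# import and by explicit imports of the same module.
ns_star, ns_explicit = {}, {}
exec("from os.path import *", ns_star)
exec("from os.path import join, basename", ns_explicit)

star_names = {k for k in ns_star if not k.startswith("_")}
explicit_names = {k for k in ns_explicit if not k.startswith("_")}

print(len(star_names))         # dozens of names, origin unclear at the call site
print(sorted(explicit_names))  # ['basename', 'join'] -- provenance is obvious
```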
The paper does not clearly justify the need for this library.
Most of the content is a tutorial, which would belong in the documentation instead.
The "IXI Tiny" dataset is a TorchIO class, which is not referenced in the paper.
From the paper:
"To ease further extensions of our library through contributions, we have added a short guide on how to contribute to the project"
However, the CONTRIBUTING.md file only contains the following text:

> How to contribute
>
> fastMONAI follows the same contribution policy as fastai: https://github.com/fastai/nbdev/blob/master/CONTRIBUTING.md
Thank you for your thorough feedback @fepegar! I am, of course, sad that you didn't find the work sufficiently valuable, but I appreciate the time and effort you've put into examining the paper and the code repository.
I understand the concerns you've raised and will absolutely consider them. I already planned to refactor the code to improve readability and maintainability, and I will follow common coding styles such as PEP8.
I'm also aware that `import *` is considered bad practice, and its usage here was intended for non-technical users (as highlighted in the link, "*All of this does not mean that using `import *` is always bad*"). I had included an explanation in an earlier cell, which I later removed, but I will revisit this decision.
I agree that parts of the library can be considered thin wrappers around TorchIO. However, the hard-coded arguments you mentioned are default values that users can change to suit their needs.
Again, I appreciate your input and suggestions. I will address these issues and try to improve the project's overall quality.
Thank you for your very detailed review @fepegar! And thank you for your answer, @skaliy.
@skaliy, you write that you would like to improve the "project's overall quality", but given the point about the thin wrappers that @fepegar mentioned (which you seem to agree with in your reply) and the points that @NMontanaBrown raised, I wonder if that can realistically be done within the review timeline?
Hi all,
first of all, thank you for your efforts on the fastMONAI library; it is very interesting and promising! Here are some initial thoughts and questions of mine after reading the manuscript and resources and testing the framework.
I agree with prior mentioned points, especially:
(1) No clear distinction/explanation of the added value of the project with respect to MONAI and TorchIO. It would be highly beneficial to add clear cases where fastMONAI eases prototyping and development. It would also help address the raised question of whether sufficient added value is provided for the journal. Which additional model architectures / trained models can I use now? How much glue code would be necessary otherwise? Where does fastMONAI go beyond the existing possibility of using MONAI and TorchIO together?
(2) Use of `*` imports as in fastai / inconsistent code formatting. As said before, this makes understanding and debugging code difficult. But I am glad to hear that upcoming refactoring might get rid of this.
I started testing the software on Colab and locally (on Windows). I had no problems going through the example notebooks via Colab, but ran into problems locally on Windows (when installing from source and when training a model). Maybe I overlooked it, but which Python versions are supported/recommended? I will try again.
Thank you very much for your feedback @patrickraoulphilipp ! @skaliy can you have a look and also address my question from above:
> @skaliy, you write that you would like to improve the "project's overall quality", but given the point about the thin wrappers that @fepegar mentioned (which you seem to agree with in your reply) and the points that @NMontanaBrown raised, I wonder if that can realistically be done within the review timeline?
Thanks, everyone! 🙏
pinging @skaliy - can you have a look? Thanks! 🙏
Thank you for your valuable feedback @patrickraoulphilipp!
To address your concerns:
@britta-wstnr, we regret to inform you that, given the points raised and the bar set by @fepegar and @NMontanaBrown, we don't believe it will be possible to sufficiently improve the project's overall quality within the review timeline.
Once again, thank you all for your valuable feedback. We will continue working on improving the fastMONAI library and addressing your concerns.
@skaliy, thank you for your reply. Do I interpret this correctly that you withdraw your submission to improve it outside the scope of the review process?
Yes, that's correct, @britta-wstnr. Thank you for your time.
Thank you for confirming @skaliy. Thank you everybody for your interactions here and especially @NMontanaBrown @fepegar @patrickraoulphilipp for your time and effort in reviewing this! 🙏 🌱
@oliviaguest can you take over and close this? Thanks! 🙏
@editorialbot withdraw
Paper withdrawn.
Submitting author: @skaliy (Satheshkumar Kaliyugarasan)
Repository: https://github.com/MMIV-ML/fastMONAI
Branch with paper.md (empty if default branch):
Version: v0.3.0
Editor: @britta-wstnr
Reviewers: @fepegar, @NMontanaBrown, @patrickraoulphilipp
Archive: Pending
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@fepegar & @NMontanaBrown & @patrickraoulphilipp, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @britta-wstnr know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @fepegar
📝 Checklist for @NMontanaBrown
📝 Checklist for @patrickraoulphilipp