Closed by editorialbot 2 months ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.07 s (204.5 files/s, 56018.4 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
Python 4 271 283 1888
CSS 3 139 174 472
HTML 1 60 0 267
TeX 1 19 0 197
Markdown 3 41 0 164
YAML 2 27 4 101
JSON 1 0 0 1
-------------------------------------------------------------------------------
SUM: 15 557 461 3090
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1038/ncomms14748 is OK
- 10.1038/nn.4500 is OK
- 10.1371/journal.pone.0200562 is OK
- 10.1002/jmri.23572 is OK
- 10.1016/j.dcn.2020.100803 is OK
- 10.1080/2326263X.2017.1307096 is OK
- 10.1002/hbm.24683 is OK
- 10.1016/j.neuroimage.2021.118527 is OK
- 10.1016/j.neuroimage.2019.03.046 is OK
- 10.34973/cwja-hc66 is OK
- 10.3389/fninf.2021.770608 is OK
- 10.1038/sdata.2016.44 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Wordcount for paper.md is 515
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@jsheunis @neuroccino, please start your reviews here when you have a chance (see first comment in this issue about creating your checklist) or otherwise let me know if you have any questions!
@jsheunis @neuroccino can you get started please? To do so just call @editorialbot generate my checklist
@cMadan please check in here and try to spur things along.
@neuroccino, it would be great if you could start your review checklist. If you have any questions before you get started, do let me know.
@jsheunis, how are things going? It would be great if you could continue with this soon.
Sorry for my long delay with this. Life...
I'm busy reviewing now.
I went through all checklist items and created relevant issues. I can check off the remaining open boxes once those issues are addressed. There's one remaining point that I would like to comment on, and that is the question of "Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?"
I find it difficult to pass clear judgement on this item, i.e. to give a YES/NO answer. Looking at the specific items mentioned in the JOSS guidelines:
Age of software (is this a well-established software project) / length of commit history
This is fine, the tool is several years in development
Number of commits.
No problem here.
Number of authors.
Might be slightly concerning because it's two authors, but actually is not a problem in and of itself.
Total lines of code (LOC). Submissions under 1000 LOC will usually be flagged, those under 300 LOC will be desk rejected.
No specific reason for concern
Whether the software has already been cited in academic papers.
It has, but by the authors themselves in their other work.
Whether the software is sufficiently useful that it is likely to be cited by your peer group.
I am uncertain about this. The main reason is that ScanSessionTool seems like a useful package (no doubt) developed by and for the authors and their close collaborators. There isn't clear evidence to me that the package has taken hold in the wider open source community or that it will necessarily expand much beyond the existing user group. In my experience, each MRI data collection site or group has their own approach to organising DICOM data (which might be problematic or extremely streamlined/intuitive, or somewhere in between), and there is no standard way of doing this. ScanSessionTool proposes a standard hierarchy (if I understood the documentation correctly?), in essence implying the adoption of a data organization standard and not merely the use of a tool. The adoption of a standard is a much greater challenge than using a tool built around a specific standard that has already been adopted. My suspicion is that groups might choose not to use the tool because they do not want to change their standard approach (because they have status quo bias, or changing will affect too many downstream processes, or whatever reason).
My viewpoint is also somewhat related to the JOSS point that
software should be feature-complete (i.e., no half-baked solutions), packaged appropriately according to common community standards [...], and designed for maintainable extension. “Minor utility” packages, including “thin” API clients, and single-function packages are not acceptable.
It would not be fair to call ScanSessionTool a minor utility package (and I'm not doing that) but some aspects of its current implementation do lean towards that direction. I'm thinking specifically about the 2.4k-line single module, the absence of unit-tests and continuous integration, and the package's (currently) limited contributor community. However, these can all be addressed.
I'm happy to receive arguments/evidence against my point of view. In any event, I don't think my being on the fence on this particular point should constitute any form of rejection (in the traditional journal-centric sense of the word). IMO this is a useful package that people should know about and can benefit from. I would appreciate thoughts on this from the other reviewer(s), the editor, and the authors.
Thanks for reviewing and for your suggestions and opened issues!
Regarding your concern about substantial scholarly effort: We of course went through the checklist in detail before making the decision to submit to JOSS, and we were confident that we fulfill all of the requirements. You seem to agree with this in general, but have a slight concern about one point specifically ("Whether the software is sufficiently useful that it is likely to be cited by your peer group.").
I am uncertain about this. The main reason is that ScanSessionTool seems like a useful package (no doubt) developed by and for the authors and their close collaborators. There isn't clear evidence to me that the package has taken hold in the wider open source community or that it will necessarily expand much beyond the existing user group. In my experience, each MRI data collection site or group has their own approach to organising DICOM data (which might be problematic or extremely streamlined/intuitive, or somewhere in between), and there is no standard way of doing this. ScanSessionTool proposes a standard hierarchy (if I understood the documentation correctly?), in essence implying the adoption of a data organization standard and not merely the use of a tool. The adoption of a standard is a much greater challenge than using a tool built around a specific standard that has already been adopted. My suspicion is that groups might choose not to use the tool because they do not want to change their standard approach (because they have status quo bias, or changing will affect too many downstream processes, or whatever reason).
While it is true that we initially created this software for our own usage (which is probably the case for most research software), we don't see any reason why it shouldn't be useful to others as well. Not only is the scan session documentation and protocol logging part useful in itself (i.e. without the archiving), but I also don't think that the archiving hierarchy we use here has much impact on downstream pipelines, given that most of these pipelines nowadays revolve around the BIDS format, to which our structure can easily be converted (e.g. using BIDScoin, as mentioned in the paper).
When thinking about the perceived lack of current adoption of the tool, there are multiple things to consider:
It would not be fair to call ScanSessionTool a minor utility package (and I'm not doing that) but some aspects of its current implementation do lean towards that direction. I'm thinking specifically about the 2.4k-line single module, the absence of unit-tests and continuous integration, and the package's (currently) limited contributor community. However, these can all be addressed.
We are happy to address all of these points in due time.
Hi all,
apologies for the late reply.
I only have two comments/suggestions:
Thanks and best,
David
(Dr. med. David Mehler, M.D., Ph.D., Head of the Applied Computational Neuroscience (ACN) group, Department of Psychiatry, Psychotherapy and Psychosomatics, Uniklinik RWTH Aachen, Germany; lab website: https://acn-lab.owlstown.net/)
On Tue, Mar 19, 2024 at 11:04 AM Florian Krause wrote:
When thinking about the perceived lack of current adoption of the tool, there are multiple things to consider:
- Scholars tend to cite software only when they can refer to a journal article (hence this submission).
- Accurately tracking usage in ways other than citations is rather difficult. We don't do any web traffic analytics ourselves, and the potentially best indicator might be the number of downloads from PyPI, which shows >20 downloads per month.
- While the package is publicly available on GitHub, we did not put any effort into actively advertising it. A publication will also help with visibility.
Thanks for the reply @neuroccino!
Thanks for clarifying @fladd!
Happy to endorse the publication.
Florian Krause wrote on Tue, 19 March 2024, 14:26:
- Good point, we will add a sentence or two to the manuscript to clarify what this special support is about. (It is already explained in detail in the documentation).
- There is a link to "Repository" in the left column in the paper (autogenerated). The content of the "Name" dropdown menu is fully modifiable on a per-project basis, with different sets of names possible for each of the three available "Types" (anat, func, misc). All of this can be customized as part of study-specific templates in a YAML configuration file (see here https://github.com/fladd/ScanSessionTool/blob/master/sst.yaml for an example). (Of course, it is also possible to just type in a name that is not one of the predefined options in the dropdown menu.)
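To make the template mechanism described above more concrete, here is a purely illustrative sketch of what such a study-specific YAML template might look like. All keys and values below are assumptions for illustration only; the actual schema is documented in the linked sst.yaml in the repository.

```yaml
# Purely illustrative sketch -- NOT the actual sst.yaml schema.
# See the sst.yaml example in the repository for the real format.
study: ExampleStudy
measurements:
  - type: anat          # one of the three available types: anat, func, misc
    names: [T1w, T2w]   # predefined options for the "Name" dropdown
  - type: func
    names: [localizer, task_run1, task_run2]
  - type: misc
    names: [fieldmap]
```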
@cMadan could you help keep this one going?
@neuroccino thanks for your help here. Would you be able to call @editorialbot generate my checklist
here and formally tick the boxes to complete the review? Thanks!
@neuroccino @cMadan :wave:
@editorialbot remind @cMadan in two weeks
@cMadan doesn't seem to be a reviewer or author for this submission.
@neuroccino thanks for your help here. I see you "endorsed the paper"; however, our reviews strictly require reviewers to go through the checklist. Would you please call @editorialbot generate my checklist
in a comment here to generate your checklist and complete it? Thanks!
@Kevin-Mattheus-Moerman thanks, done!
Thanks @neuroccino !
@jsheunis could you please resume your review as well? Thanks!
Thanks for picking this up again @Kevin-Mattheus-Moerman.
I am also happy to report that we, in the meantime, have implemented all changes and suggestions raised by the reviewers. For a full overview of all changes, please see https://github.com/fladd/ScanSessionTool/pull/16 and the corresponding (linked) issues.
Many thanks to the reviewers again for their valuable input!
Sorry everyone for my absence. I've had some on-going health issues and was unable to keep up with this review.
@jsheunis, can you take a look at the recent changes and see if your comments have been addressed? Are you now able to check off more of the reviewer checklist?
Thanks for the ping. I've gone through the list again and ticked off the parts that I could confirm have been addressed. Nice work @fladd.
The only remaining aspect is the testing. I have 2 points:
@jsheunis It seems that the test required Python 3.12. I have now rewritten it so that it should work with previous versions as well. I just confirmed that it runs on macOS with Python 3.10, so please try again. Sorry for the inconvenience.
Regarding a GitHub Action for the tests: I thought about this, too. It could be nice, but the tests require the GUI to be shown, while GitHub's CI is of course headless, so this is not straightforward, if possible at all. Given that (1) the new automated tests I implemented (which are arguably much nicer) can easily be run locally with a single command, and (2) a GitHub Action is not required by JOSS (technically, even the manual testing procedure we had before would have fulfilled the requirements), I feel that the work needed to figure this out would at this point be disproportionate to the rather small improvement a GitHub Action would bring over the current situation.
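(For future reference, a common workaround for headless runners is to start a virtual framebuffer such as Xvfb, so that GUI windows can be created without a physical display. A hypothetical GitHub Actions sketch follows; the Python version, test command, and system packages are all assumptions, not this project's actual setup.)

```yaml
# Hypothetical workflow sketch: running GUI tests under a virtual display.
# Test command, Python version, and apt packages are assumptions.
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      # Xvfb provides a virtual X server so windows can be created headlessly
      - run: sudo apt-get update && sudo apt-get install -y xvfb
      - run: pip install .
      # xvfb-run starts the virtual display and runs the command inside it
      - run: xvfb-run -a python -m pytest
```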
I didn't consider the headless requirement, your argument makes sense. I would add that whether or not an automated CI setup is a requirement from JOSS or not, it's still a good idea in general. So worth keeping in mind for future, but IMO not a dealbreaker for the article itself.
@cMadan @Kevin-Mattheus-Moerman All boxes ticked
If there are no further comments, then I would release a 1.0.0 version that incorporates all the latest changes. (A pre-release of this is already available here: https://github.com/fladd/ScanSessionTool/releases/tag/v1.0.0.dev5).
I went ahead and released the new final version with the latest changes (plus a new application logo). I also updated Figure 1 in the paper with a screenshot showing the actual version.
@cMadan given the two positive reviews, I'll take it from here.
@editorialbot set v1.0.0 as version
Done! version is now v1.0.0
@editorialbot assign me as editor
Assigned! @Kevin-Mattheus-Moerman is now the editor
@editorialbot generate post-review checklist
I'm sorry human, I don't understand that. You can see what commands I support by typing:
@editorialbot commands
@editorialbot set <DOI here> as archive
@editorialbot set <version here> as version
@editorialbot generate pdf
@editorialbot check references
and ask author(s) to update as needed
@editorialbot recommend-accept
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@fladd apologies for the delays encountered here so far. It should be smooth sailing from here on.
Please have a look at the above checklist and address these points. I've just set the version tag to v1.0.0, so if that release is the one we'll go with, please use it for the Zenodo archive too (be sure to include the v in the version tag so that it fully matches the GitHub release tag).
On the paper:
- reproducibilty -> reproducibility
- succesfully -> successfully
- standardized -> standardised
- automatized -> automatised (but also consider "automated" instead)
- organize -> organise
- recognized -> recognised
- standardization -> standardisation
@Kevin-Mattheus-Moerman I archived version 1.0.0 on Zenodo. The DOI is 10.5281/zenodo.12784180.
I also fixed the typos and changed to British English spelling in the paper (see https://github.com/fladd/ScanSessionTool/commit/86a143839a3f55186a6ac532db0c438664864114)
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot set Scan Session Tool 1.0.0 as version
Submitting author: @fladd (Florian Krause)
Repository: https://github.com/fladd/ScanSessionTool
Branch with paper.md (empty if default branch):
Version: v1.0.0
Editor: @Kevin-Mattheus-Moerman
Reviewers: @jsheunis, @neuroccino
Archive: 10.5281/zenodo.12784181
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@jsheunis & @neuroccino, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @cMadan know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @jsheunis
📝 Checklist for @neuroccino