
[REVIEW]: Scan Session Tool: (f)MRI scan session documentation and archiving #6126

Closed (editorialbot closed this issue 2 months ago)

editorialbot commented 10 months ago

Submitting author: @fladd (Florian Krause)
Repository: https://github.com/fladd/ScanSessionTool
Branch with paper.md (empty if default branch):
Version: v1.0.0
Editor: @Kevin-Mattheus-Moerman
Reviewers: @jsheunis, @neuroccino
Archive: 10.5281/zenodo.12784181

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/63221c937df96399499f3a837d3e3a94"><img src="https://joss.theoj.org/papers/63221c937df96399499f3a837d3e3a94/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/63221c937df96399499f3a837d3e3a94/status.svg)](https://joss.theoj.org/papers/63221c937df96399499f3a837d3e3a94)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@jsheunis & @neuroccino, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @cMadan know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Checklists

📝 Checklist for @jsheunis

📝 Checklist for @neuroccino

editorialbot commented 10 months ago

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf
editorialbot commented 10 months ago
Software report:

github.com/AlDanial/cloc v 1.88  T=0.07 s (204.5 files/s, 56018.4 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                           4            271            283           1888
CSS                              3            139            174            472
HTML                             1             60              0            267
TeX                              1             19              0            197
Markdown                         3             41              0            164
YAML                             2             27              4            101
JSON                             1              0              0              1
-------------------------------------------------------------------------------
SUM:                            15            557            461           3090
-------------------------------------------------------------------------------

gitinspector failed to run statistical information for the repository
editorialbot commented 10 months ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1038/ncomms14748 is OK
- 10.1038/nn.4500 is OK
- 10.1371/journal.pone.0200562 is OK
- 10.1002/jmri.23572 is OK
- 10.1016/j.dcn.2020.100803 is OK
- 10.1080/2326263X.2017.1307096 is OK
- 10.1002/hbm.24683 is OK
- 10.1016/j.neuroimage.2021.118527 is OK
- 10.1016/j.neuroimage.2019.03.046 is OK
- 10.34973/cwja-hc66 is OK
- 10.3389/fninf.2021.770608 is OK
- 10.1038/sdata.2016.44 is OK

MISSING DOIs

- None

INVALID DOIs

- None
editorialbot commented 10 months ago

Wordcount for paper.md is 515

editorialbot commented 10 months ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

cMadan commented 10 months ago

@jsheunis @neuroccino, please start your reviews here when you have a chance (see first comment in this issue about creating your checklist) or otherwise let me know if you have any questions!

Kevin-Mattheus-Moerman commented 8 months ago

@jsheunis @neuroccino can you get started please? To do so just call @editorialbot generate my checklist

jsheunis commented 8 months ago

Review checklist for @jsheunis

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

Kevin-Mattheus-Moerman commented 7 months ago

@cMadan please check in here and try to spur things along.

cMadan commented 7 months ago

@neuroccino, it would be great if you could start your review checklist. If you have any questions before you get started, do let me know.

@jsheunis, how are things going? It would be great if you could continue with this soon.

jsheunis commented 6 months ago

Sorry for my long delay with this. Life...

I'm busy reviewing now.

jsheunis commented 6 months ago

I went through all checklist items and created relevant issues. I can check off the remaining open boxes once those issues are addressed. There's one remaining point that I would like to comment on, and that is the question of "Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?"

I find it difficult to pass clear judgement on this item, i.e. to give a YES/NO answer. Looking at the specific items mentioned in the JOSS guidelines:

> Age of software (is this a well-established software project) / length of commit history

This is fine; the tool has been in development for several years.

> Number of commits.

No problem here.

> Number of authors.

Two authors might seem slightly concerning, but this is actually not a problem in and of itself.

> Total lines of code (LOC). Submissions under 1000 LOC will usually be flagged, those under 300 LOC will be desk rejected.

No specific reason for concern here.

> Whether the software has already been cited in academic papers.

It has been, though only by the authors themselves in their other work.

> Whether the software is sufficiently useful that it is likely to be cited by your peer group.

I am uncertain about this. The main reason is that ScanSessionTool seems like a useful package (no doubt) developed by and for the authors and their close collaborators. There isn't clear evidence to me that the package has taken hold in the wider open source community or that it will necessarily expand much beyond the existing user group. In my experience, each MRI data collection site or group has its own approach to organising DICOM data (which might be problematic, extremely streamlined/intuitive, or somewhere in between), and there is no standard way of doing this. ScanSessionTool proposes a standard hierarchy (if I understood the documentation correctly?), in essence implying the adoption of a data organization standard and not merely the use of a tool. The adoption of a standard is a much greater challenge than the use of a tool built around a specific standard that has already been adopted. My suspicion is that groups might choose not to use the tool because they do not want to change their standard approach (because of status quo bias, because changing would affect too many downstream processes, or for whatever other reason).

My viewpoint is also somewhat related to the JOSS point that

> software should be feature-complete (i.e., no half-baked solutions), packaged appropriately according to common community standards [...], and designed for maintainable extension. “Minor utility” packages, including “thin” API clients, and single-function packages are not acceptable.

It would not be fair to call ScanSessionTool a minor utility package (and I'm not doing that), but some aspects of its current implementation do lean in that direction. I'm thinking specifically of the 2.4k-line single module, the absence of unit tests and continuous integration, and the package's (currently) limited contributor community. However, these can all be addressed.

I'm happy to receive arguments/evidence against my point of view. In any event, I don't think my being on the fence on this particular point should constitute any form of rejection (in the traditional journal-centric sense of the word). IMO this is a useful package that people should know about and can benefit from. I would appreciate thoughts on this from the other reviewer(s), the editor, and the authors.

fladd commented 6 months ago

Thanks for reviewing and for your suggestions and opened issues!

Regarding your concern about substantial scholarly effort: we of course went through the checklist in detail before making the decision to submit to JOSS, and we were confident that we fulfill all of the requirements. You seem to agree with this in general, but have a slight concern about one point specifically ("Whether the software is sufficiently useful that it is likely to be cited by your peer group.").

> I am uncertain about this. The main reason is that ScanSessionTool seems like a useful package (no doubt) developed by and for the authors and their close collaborators. There isn't clear evidence to me that the package has taken hold in the wider open source community or that it will necessarily expand much beyond the existing user group. In my experience, each MRI data collection site or group has its own approach to organising DICOM data (which might be problematic, extremely streamlined/intuitive, or somewhere in between), and there is no standard way of doing this. ScanSessionTool proposes a standard hierarchy (if I understood the documentation correctly?), in essence implying the adoption of a data organization standard and not merely the use of a tool. The adoption of a standard is a much greater challenge than the use of a tool built around a specific standard that has already been adopted. My suspicion is that groups might choose not to use the tool because they do not want to change their standard approach (because of status quo bias, because changing would affect too many downstream processes, or for whatever other reason).

While it is true that we initially created this software for our own usage (which is probably the case for most research software), we don't see any reason why it shouldn't be useful to others as well. Not only is the scan session documentation and protocol logging part useful in itself (i.e. without the archiving), but I also don't think that the archiving hierarchy we use here has much impact on downstream pipelines, given that most of these pipelines nowadays revolve around the BIDS format, to which our structure can easily be converted (e.g. using BIDScoin, as mentioned in the paper).

When thinking about the perceived lack of current adoption of the tool, there are multiple things to consider:

  1. Scholars tend to cite software only when they can refer to a journal article (hence this submission).
  2. Accurately tracking usage in ways other than citations is rather difficult. We don't do any web traffic analytics ourselves, and the best available indicator is probably the number of downloads from PyPI, which shows >20 downloads per month.
  3. While the package is publicly available on GitHub, we did not put any effort into actively advertising it. A publication will also help with visibility.

> It would not be fair to call ScanSessionTool a minor utility package (and I'm not doing that), but some aspects of its current implementation do lean in that direction. I'm thinking specifically of the 2.4k-line single module, the absence of unit tests and continuous integration, and the package's (currently) limited contributor community. However, these can all be addressed.

We are happy to address all of these points in due time.

neuroccino commented 6 months ago

Hi all,

apologies for the late reply.

I only have two comments/suggestions:

  1. The mentioned "special support" for TBV requires further explanation in the manuscript.
  2. Maybe I missed it, but I could not find a link to the repository in the manuscript. I wanted to install the latest version to test it out regarding practicality. One thing I would like to see is how modifiable the drop-down menu for "name" is, and which types of sequences are covered under "type" (and whether this is also customizable). Is it possible to save a set of sequences as a template (e.g. to simply load it for a given study without having to add each sequence individually)?

Thanks and best, David


fladd commented 6 months ago

Thanks for the reply @neuroccino!

  1. Good point, we will add a sentence or two to the manuscript to clarify what this special support is about. (It is already explained in detail in the documentation.)
  2. There is a link to "Repository" in the left column of the paper (autogenerated). The content of the "Name" dropdown menu is fully modifiable on a per-project basis, with different sets of names possible for each of the three available "Types" (anat, func, misc). All of this can be customized as part of study-specific templates in a YAML configuration file (see https://github.com/fladd/ScanSessionTool/blob/master/sst.yaml for an example; a rough sketch of such a template follows below). (Of course, it is also possible to just type in a name that is not one of the predefined options in the dropdown menu.)
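
To give a rough idea, a study-specific template in the YAML configuration file could look something like this. This is a minimal sketch; the field names below are illustrative assumptions, not the actual schema, so please refer to the linked sst.yaml for the real structure:

```yaml
# Minimal sketch of a hypothetical study template; the field names are
# illustrative assumptions, not the actual sst.yaml schema.
MyStudy:
  measurements:
    - type: anat          # one of the three available types: anat, func, misc
      name: T1w_MPRAGE    # appears as a predefined option in the "Name" dropdown
    - type: func
      name: task-rest_bold
    - type: misc
      name: field_map
```
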
neuroccino commented 6 months ago

Thanks for clarifying @fladd!

Happy to endorse the publication.


Kevin-Mattheus-Moerman commented 5 months ago

@cMadan could you help keep this one going?

Kevin-Mattheus-Moerman commented 5 months ago

@neuroccino thanks for your help here. Would you be able to call @editorialbot generate my checklist here and formally tick the boxes to complete the review? Thanks!

Kevin-Mattheus-Moerman commented 4 months ago

@neuroccino @cMadan :wave:

Kevin-Mattheus-Moerman commented 4 months ago

@editorialbot remind @cMadan in two weeks

editorialbot commented 4 months ago

@cMadan doesn't seem to be a reviewer or author for this submission.

Kevin-Mattheus-Moerman commented 4 months ago

@neuroccino thanks for your help here. I see you "endorsed the paper"; however, our reviews strictly require reviewers to go through the checklist. Would you please call @editorialbot generate my checklist in a comment here to generate your checklist and complete it? Thanks!

neuroccino commented 4 months ago

Review checklist for @neuroccino

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

neuroccino commented 4 months ago

@Kevin-Mattheus-Moerman thanks, done!

Kevin-Mattheus-Moerman commented 4 months ago

Thanks @neuroccino !

Kevin-Mattheus-Moerman commented 4 months ago

@jsheunis could you please resume your review as well? Thanks!

fladd commented 4 months ago

Thanks for picking this up again @Kevin-Mattheus-Moerman.

I am also happy to report that we, in the meantime, have implemented all changes and suggestions raised by the reviewers. For a full overview of all changes, please see https://github.com/fladd/ScanSessionTool/pull/16 and the corresponding (linked) issues.

Many thanks to the reviewers again for their valuable input!

cMadan commented 4 months ago

Sorry everyone for my absence. I've had some on-going health issues and was unable to keep up with this review.

cMadan commented 4 months ago

@jsheunis, can you take a look at the recent changes and see if your comments have been addressed? Are you now able to check off more of the reviewer checklist?

jsheunis commented 4 months ago

Thanks for the ping. I've gone through the list again and ticked off the parts that I could confirm have been addressed. Nice work @fladd.

The only remaining aspect is the testing. I have two points:

fladd commented 4 months ago

@jsheunis It seems that the test required Python 3.12. I have now rewritten it so that it should also work with earlier versions. I just confirmed that it now runs on macOS with Python 3.10, so please try again. Sorry for the inconvenience.

Regarding a GitHub Action for the tests: I thought about this too. It could be nice, but the tests require the GUI to be shown, while GitHub's CI runners are of course headless...so this is not straightforward, if possible at all. Given that (1) the new automated tests I implemented (which are arguably much nicer) can easily be run locally with a single command, and (2) a GitHub Action is not required by JOSS (technically, even the manual testing procedure we had before would have fulfilled the requirements), I feel that the work we would need to put into figuring this out would at this point be disproportionate to the rather small improvement a GitHub Action would bring over the current situation.

jsheunis commented 3 months ago

I didn't consider the headless requirement; your argument makes sense. I would add that, whether or not an automated CI setup is required by JOSS, it's still a good idea in general. So it's worth keeping in mind for the future (a rough sketch of one possible approach follows below), but IMO not a dealbreaker for the article itself.
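
For future reference, GUI tests can often be run on headless Linux runners under a virtual display via xvfb. A minimal, untested sketch; the install and test commands here are assumptions for illustration, not the project's actual invocations:

```yaml
# Hypothetical workflow sketch: run GUI tests under a virtual X display.
# The pip install and test commands are assumptions, not the project's own.
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - run: sudo apt-get update && sudo apt-get install -y xvfb
      - run: pip install .
      # xvfb-run starts an in-memory X server so Tk windows can be created
      - run: xvfb-run -a python -m pytest
```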

jsheunis commented 3 months ago

@cMadan @Kevin-Mattheus-Moerman All boxes ticked

fladd commented 3 months ago

If there are no further comments, then I would release a 1.0.0 version that incorporates all the latest changes. (A pre-release of this is already available here: https://github.com/fladd/ScanSessionTool/releases/tag/v1.0.0.dev5).

fladd commented 3 months ago

I went ahead and released the new final version with the latest changes (plus a new application logo). I also updated Figure 1 in the paper with a screenshot of this released version.

Kevin-Mattheus-Moerman commented 2 months ago

@cMadan given the two positive reviews, I'll take it from here.

Kevin-Mattheus-Moerman commented 2 months ago

@editorialbot set v1.0.0 as version

editorialbot commented 2 months ago

Done! version is now v1.0.0

Kevin-Mattheus-Moerman commented 2 months ago

@editorialbot assign me as editor

editorialbot commented 2 months ago

Assigned! @Kevin-Mattheus-Moerman is now the editor

Kevin-Mattheus-Moerman commented 2 months ago

@editorialbot generate post-review checklist

editorialbot commented 2 months ago

I'm sorry human, I don't understand that. You can see what commands I support by typing:

@editorialbot commands

Kevin-Mattheus-Moerman commented 2 months ago

Post-Review Checklist for Editor and Authors

Additional Author Tasks After Review is Complete

Editor Tasks Prior to Acceptance

Kevin-Mattheus-Moerman commented 2 months ago

@editorialbot generate pdf

editorialbot commented 2 months ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

Kevin-Mattheus-Moerman commented 2 months ago

@fladd apologies for the delays encountered here so far. It should be smooth sailing from here on. Please have a look at the above checklist and address these points. I've just set the version tag to v1.0.0, so if that release is the one we'll go with, please use it for the Zenodo archive too (be sure to include the v in the version tag, as it should fully match the GitHub release tag).

On the paper:

fladd commented 2 months ago

@Kevin-Mattheus-Moerman I archived version 1.0.0 on Zenodo. The DOI is 10.5281/zenodo.12784180.

I also fixed the typos and British English spelling in the paper (see https://github.com/fladd/ScanSessionTool/commit/86a143839a3f55186a6ac532db0c438664864114)

Kevin-Mattheus-Moerman commented 2 months ago

@editorialbot generate pdf

editorialbot commented 2 months ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

Kevin-Mattheus-Moerman commented 2 months ago

@editorialbot set Scan Session Tool 1.0.0 as version