Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.75 s (341.3 files/s, 352289.9 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
JavaScript                      73           5206           4766         141225
JSON                             8              0              0          38595
HTML                            23            641            276          25454
CSS                             27           3360            376          22457
Python                          81           2873           1970           9478
SVG                             13              4              4           3095
Jupyter Notebook                 2              0           1607           1656
Markdown                         8            429              0           1068
YAML                            13             67             47            414
TeX                              1             10              0            102
XML                              4              0              0             55
Bourne Shell                     4              4              0             12
-------------------------------------------------------------------------------
SUM:                           257          12594           9046         243611
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 1435
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1038/sdata.2016.18 is OK
- 10.3389/fphys.2021.693735 is OK
- 10.1038/s41597-023-02463-x is OK
- 10.12688/f1000research.73492.1 is OK
- 10.1101/2021.02.10.430563 is OK
- 10.1096/fasebj.2020.34.s1.02483 is OK
- 10.12688/f1000research.75071.1 is OK
MISSING DOIs
- None
INVALID DOIs
- None
👉 📄 Download article proof 📄 View article proof on GitHub 📄 👈
👋🏼 @bvhpatel @mattsouth, @yarikoptic, @abhishektiwari - this is the review thread for the paper. All of our communications will happen here from now on.
For @mattsouth, @yarikoptic and @abhishektiwari - As a reviewer, the first step is to create a checklist for your review by entering
@editorialbot generate my checklist
at the top of a new comment in this thread. Please do this soon to confirm that you've seen the review starting.
These checklists contain the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. The first comment in this thread also contains links to the JOSS reviewer guidelines.
The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. Summary conversation is great on this thread, but try to avoid substantial discussion about the repository here; that should take place in issues on the source repository.
When discussing the submission on an issue thread, please mention https://github.com/openjournals/joss-reviews/issues/6140 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.
We aim for reviews to be completed within about 2-4 weeks. Please let me know if any of you require some more time. We can also use EditorialBot (our bot) to set automatic reminders if you know you'll be away for a known period of time.
Please feel free to ping me (@AJQuinn) if you have any questions/concerns.
@bvhpatel Given that the software has a hard dependency on the Pennsieve platform for most of its features, including login, the use of an OSI-approved license does not seem applicable here. To me this is more of a hosted web view just wrapped in a desktop app. Can you please clarify the data architecture for this app?
@AJQuinn This software is not standalone OSI software. It needs registration, login, and other details at a third-party website, Pennsieve, which requires its own T&Cs to be accepted. What is the JOSS guideline here (the submission requirements are not explicit about this kind of use case)?
Thanks for the input @abhishektiwari - I'll clarify this with the editor team and let you know.
In the meantime @bvhpatel - could you clarify the point above? My reading is that the JOSS submission is for SODA, which is mainly used for SPARC/Pennsieve at the moment but can/will be extended to other platforms. Is this correct? And is there a way to run stand-alone tests of the platform?
Thanks all
@abhishektiwari @AJQuinn I would not say SODA has a hard dependency on Pennsieve. The main purpose of SODA is to allow researchers to format their research data according to the SPARC Dataset Structure (SDS). If the data is funded by the NIH SPARC Program, researchers are required to upload the data to the Pennsieve platform after it is formatted according to the SDS. Since the development of SODA is funded by SPARC, many of our workflows end with telling users to upload their data to Pennsieve, and this is why we make sure to remind them about connecting their Pennsieve account on start. We use the Pennsieve Python client (https://pypi.org/project/pennsieve2/) in the backend of SODA to upload the SDS-formatted data to Pennsieve if the user chooses to do so. Now that the SDS is progressively being adopted outside of the SPARC Program, we anticipate SODA being used by users who do not necessarily have a Pennsieve account. Many features already allow that (for example, all features under Free Form Mode --> Prepare Metadata), and all of the remaining workflows that require uploading to Pennsieve (e.g., End-to-End Curation) will also have the option to just save the data locally on the user's computer. I hope that clarifies your point.
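To illustrate the branching described above - a minimal sketch only, assuming the SDS top-level folder names; this is not SODA's actual code, and the upload call is a placeholder rather than the real pennsieve2 API:

```python
# Illustrative sketch of the workflow split: SDS formatting is local and
# Pennsieve-independent; uploading is an optional final step.
from pathlib import Path
import shutil

SDS_TOP_LEVEL = ["primary", "source", "derivative", "docs", "protocol", "code"]

def format_to_sds(raw_files: dict[str, list[Path]], target: Path) -> Path:
    """Arrange raw files into the SDS top-level folder structure."""
    for folder, files in raw_files.items():
        if folder not in SDS_TOP_LEVEL:
            raise ValueError(f"{folder!r} is not an SDS top-level folder")
        dest = target / folder
        dest.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy2(f, dest / f.name)
    return target

def upload_with_pennsieve_client(dataset: Path) -> None:
    # Placeholder for the pennsieve2 client call; the real client API is
    # deliberately not reproduced here.
    raise NotImplementedError("requires a Pennsieve account and the pennsieve2 client")

def finish(dataset: Path, upload_to_pennsieve: bool) -> None:
    if upload_to_pennsieve:
        upload_with_pennsieve_client(dataset)  # optional Pennsieve branch
    else:
        print(f"SDS-formatted dataset saved locally at {dataset}")
```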
@AJQuinn will you spur this one on again?
I've had a look at the software and worked my way through the checklist. There are a few points that prevent me from checking off all items, all of which I assume can be resolved.
authorship
Two of the largest contributors to the associated GitHub project (Aaron M and Tram Ngo) aren't listed as authors on the paper, and the first author doesn't seem to have contributed to the code. What is the contribution of Christopher Marroquin, and should the named developers be added as authors?
installation
There are 65 vulnerabilities (including 2 marked as critical) highlighted by npm on install. There is an existing issue in the GitHub project (#206) regarding updating dependencies that might help resolve this, though it's now over a year old. I suggest the authors review and ideally address these vulnerabilities. Furthermore, it may be worth introducing a mechanism for reviewing these in the build process before acceptance.
testing
I can't find any manual or automated tests to run and, similarly, I can't find an example project that I can use to check the functionality of the application. Could the authors describe their testing procedure, and would it be possible to provide an example that can be used to test the functionality provided?
API documentation
There is good documentation for this project that is end-user focussed, but not API focussed. It seems to me that the API consists mainly of endpoints and functions within a Python Flask app, and comment coverage seems pretty good within the codebase. It may be possible to automatically generate some useful API documentation with a tool like https://github.com/acoomans/flask-autodoc (or similar).
A note on the paper
The paper makes a general statement that I suggest needs to be qualified somewhat:
To our knowledge, SODA is the first researcher-oriented tool for making data FAIR.
As a counter-example, Gorgolewski, K., Auer, T., Calhoun, V. et al. (2016) - https://doi.org/10.1038/sdata.2016.44 - describe a workflow similar to the one SODA supports with their BIDS validator. This paper was published a year before the SPARC project started, is referenced in the SDS paper, and, I would argue, describes a researcher-oriented tool for making data FAIR.
Thanks @AJQuinn - I will revisit the paper and software and provide feedback/comments by end of next week.
Thanks @mattsouth & @abhishektiwari - do let me know if you need further input
Thanks for your comments @mattsouth! Please see our responses below:
Authorship Christopher Marroquin is actually Aaron M. (he goes by both Christopher and Aaron). Tram Ngo left the team about 2-3 years ago and we have no way of reaching out to her to get her agreement to be listed as a co-author (which I understand is required as per JOSS’s authorship guidelines).
Installation The existing issue #206 is rather old (we have now closed it) and some results are no longer applicable; however, we have looked at the recent results from running npm audit. We have now addressed both 'Critical'-level security vulnerabilities and will include these updates in the next release. The 'High' vulnerabilities stem from 'Electron', 'url-regex', 'xlsx', 'diff', and 'glob-parent'. Here is further information regarding these vulnerabilities:
Overall, most of the remaining vulnerabilities will be addressed around mid-2024 as part of a milestone in our current funding from SPARC specifically for that purpose. As for mechanisms for reviewing vulnerabilities, we currently use Dependabot and Renovate to get pull requests with dependency updates. However, going forward we will likely add an action to run npm audit on PRs and add Snyk integration to create additional PRs for any security issues our other methods may have missed. In addition, we also have several code-level vulnerability detection tools: Sourcery and SonarCloud.
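For the npm audit gate specifically, here is a minimal sketch of the kind of check that could run on PRs, assuming the metadata.vulnerabilities severity counts that npm audit --json emits (not our actual CI configuration):

```python
# Fail a CI job when `npm audit` reports vulnerabilities at or above a
# chosen severity. Relies only on the JSON summary from `npm audit --json`.
import json
import subprocess
import sys

def audit_gate(blocking_severity: str = "critical") -> int:
    # npm audit exits non-zero when any vulnerabilities exist, so we do not
    # use check=True; we inspect the JSON report instead.
    result = subprocess.run(["npm", "audit", "--json"],
                            capture_output=True, text=True)
    report = json.loads(result.stdout)
    counts = report.get("metadata", {}).get("vulnerabilities", {})
    blocked = counts.get(blocking_severity, 0)
    if blocked:
        print(f"npm audit: {blocked} {blocking_severity} vulnerabilities found")
        return 1
    print(f"npm audit gate passed: {counts}")
    return 0

if __name__ == "__main__":
    sys.exit(audit_gate())
```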
Testing Our testing procedure consists of manually checking the functionalities that have been touched in a given new release. We have a milestone as part of our current funding from SPARC for implementing automated testing approaches, as manual testing has become very time-consuming with the expansion of the app. Our plan under that milestone is to use the Jest testing framework for testing app-wide web components. Additionally, Postman will be used for end-to-end testing of our Flask backend. We will also leverage Chromatic for UI regression testing of pages. For now, we have added a new page here in our user documentation to provide instructions for testing some features of SODA that work without requiring access to the Pennsieve platform. You may want to download and use the latest version of SODA (available here) for that.
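As one lightweight complement to those plans, backend endpoints can also be exercised with Flask's built-in test client - a minimal sketch with a hypothetical /api/version route, not SODA's actual endpoints:

```python
# Minimal automated check of a Flask endpoint using the built-in test client
# (runnable with pytest). The route shown is hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/api/version")
def version():
    return jsonify({"version": "15.1.0"})

def test_version_endpoint():
    client = app.test_client()
    response = client.get("/api/version")
    assert response.status_code == 200
    assert "version" in response.get_json()
```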
API documentation To view the API documentation, run the application and then enter http://localhost:4242 in your browser to see the Swagger documentation. This has been added to the SODA for SPARC Docs here so it is clearer and available going forward.
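For anyone following along: flask-restx generates that Swagger page automatically from the declared resources. A minimal sketch with an illustrative endpoint, not SODA's actual routes:

```python
# flask-restx serves interactive Swagger documentation for registered
# resources; docstrings appear in the UI. The endpoint is illustrative only.
from flask import Flask
from flask_restx import Api, Resource

app = Flask(__name__)
api = Api(app, doc="/", title="Example API")  # Swagger UI served at "/"

@api.route("/ping")
class Ping(Resource):
    def get(self):
        """Health check."""
        return {"status": "ok"}

if __name__ == "__main__":
    app.run(port=4242)  # then browse to http://localhost:4242
```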
Note on the paper By researcher-oriented tool, we mean a tool that guides users through all the steps to make data FAIR via intuitive user interfaces and doesn't require any coding knowledge. To our knowledge SODA is the first to do that, as it guides users through all the steps for converting their raw data into SDS-compliant format with all required metadata, validating the dataset, and uploading it to the SPARC-mandated repository. At the time we established SODA, creating a BIDS-compliant dataset required manual effort (e.g., Step 2 described in the paper required manually creating the BIDS folder structure, etc.). The BIDS validator was indeed a researcher-oriented tool, but it was useful only in the final step, for validating the dataset, and not for creating a FAIR dataset. We have now edited the sentence in the paper to state “To our knowledge, SODA is the first researcher-oriented tool that guides step-by-step through all the requirements for making data FAIR from organizing files, creating metadata, validating, and uploading to a data repository.”
Hi @abhishektiwari - thanks for your time on this review. Can you let us know when you'll be able to finalise a review? This will be useful so that the authors can plan when they'll need to make a response. Please let me know if you need more input from my side
I should be done by this Friday. Apologies for being slow on this.
@bvhpatel First of all, great job on putting together a curator-friendly workflow software and comprehensive documentation for SODA. I think most of the functionality works as documented and described in the paper. Just make sure the documentation, particularly images and videos, stays in sync with the latest changes; otherwise the images and videos will regress from the text version.
Now coming back to the software itself, I see a lot of Python and JS anti-patterns in the implementation. I created a few GitHub issues around (1) inconsistent use of HTTP status codes, (2) duplicating already-declared constants throughout the code base, (3) missing authentication guards for flask_restx APIs, (4) unused imports throughout the codebase, (5) outdated and vulnerable NPM packages, etc. Out of these, 3 and 5 really worry me; I would not install software that is full of vulnerabilities and missing safeguards on access control. Again, these are suggested improvements; none of them is blocking at this point.
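To make (3) concrete, here is a minimal sketch of one common guard pattern for flask_restx - a token-check decorator; the token scheme is hypothetical, not a claim about how SODA should implement it:

```python
# Reject requests lacking a shared token before the handler runs.
from functools import wraps
import secrets

from flask import Flask, request, abort
from flask_restx import Api, Resource

APP_TOKEN = secrets.token_urlsafe(32)  # generated at startup, shared with the UI

def require_token(view):
    @wraps(view)
    def guarded(*args, **kwargs):
        if request.headers.get("X-App-Token") != APP_TOKEN:
            abort(401, "missing or invalid token")
        return view(*args, **kwargs)
    return guarded

app = Flask(__name__)
api = Api(app)

@api.route("/datasets")
class Datasets(Resource):
    method_decorators = [require_token]  # applied to every HTTP method

    def get(self):
        return {"datasets": []}
```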
There are no automated tests - bare-minimum API tests and e2e UI tests are desirable. I saw your previous response and plans to add these at a later point, and I am fine with that, but my concern is that the code may not be testable unless you undertake a major refactoring.
@AJQuinn Thanks for the nudge. Completed review from my end. Let me know if any questions. I will wait to hear from the authors on the feedback I provided - none of which is blocking.
@AJQuinn please check in here when you can. Once a week at least would be good.
@mattsouth, @yarikoptic Are you able to continue your review at this point? If you still have issues the authors should work on, please let us know. Thanks for your help here!!
NB: I somewhat lost enthusiasm / got a negative vibe from the project, in that my trivial PR was never reviewed/merged despite my multiple pings (and is now full of conflicts).
I never found quality time to do the review and will be busy/traveling for the next 2 weeks. Do you think you could proceed with two reviewers, or should I strive to actually complete the review? ;)
Hi @mattsouth @abhishektiwari @Kevin-Mattheus-Moerman @yarikoptic! We are really sorry for the slow responses. We are working on some of the issues opened by @abhishektiwari that we think are addressable. We are also working on merging @yarikoptic's suggestion (so sorry, this got lost as we got distracted with our milestones and deliverables on this and other projects). We think that some of the remaining recommendations made by @abhishektiwari, especially around safeguards and access control, would require more time from us. We do have a milestone to address these sorts of things, and we do believe they need to be addressed, but it's getting pushed back due to other priorities. @mattsouth @Kevin-Mattheus-Moerman to be respectful of everyone's time, would it make sense for us to address the issues I mentioned are addressable, withdraw the paper, and resubmit when we are done addressing the remaining issues (likely later this year)?
Edit: Easy-to-fix issues reported by @abhishektiwari have been addressed. We have also merged @yarikoptic's PR. Thanks!
I think this question is one for the editors, who may have something to say about timelines, but from my perspective I don't see that you need to withdraw and resubmit. For me that would unnecessarily complicate things.
... and I see I still have a couple of things to check based on your responses from Jan 31 - apologies for that - I shall resolve or comment further within the week.
@editorialbot remove editor
Editor removed!
@editorialbot assign @sappelhoff as editor
Assigned! @sappelhoff is now the editor
@bvhpatel @mattsouth, @yarikoptic, @abhishektiwari just to inform you, the new editor for this issue is @sappelhoff; please address all queries to him from this point on. Thanks!
Hi all, I will be picking this up next week.
@yarikoptic
I never found quality time to do the review and will be busy/traveling for the next 2 weeks. Do you think you could proceed with two reviewers, or should I strive to actually complete the review? ;)
Given the size of this project, I would very much appreciate it if you could complete your review. From @bvhpatel's comment in https://github.com/openjournals/joss-reviews/issues/6140#issuecomment-2078101641 I assume that they will try their best to be more responsive to your inquiries and suggestions.
Please let me know whether you can commit to continuing this review or whether we should unassign you 🙏
@abhishektiwari
Can I assume, based on your completed reviewer checklist and your comment in https://github.com/openjournals/joss-reviews/issues/6140#issuecomment-1928723929, that you recommend this paper for acceptance? If so, please let me know with a short note.
@mattsouth could you kindly provide a status update of your review for me, now that I have taken over from @AJQuinn as the new editor?
@bvhpatel

We think that some of the remaining recommendations made by @abhishektiwari, especially around safeguards and access control, would require more time from us. We do have a milestone to address these sorts of things, and we do believe they need to be addressed, but it's getting pushed back due to other priorities. @mattsouth @Kevin-Mattheus-Moerman to be respectful of everyone's time, would it make sense for us to address the issues I mentioned are addressable, withdraw the paper, and resubmit when we are done addressing the remaining issues (likely later this year)?
From my perspective (and if the other reviewers agree), a withdrawal and resubmission would unnecessarily complicate things, so I agree with @mattsouth in https://github.com/openjournals/joss-reviews/issues/6140#issuecomment-2088534640.
I suggest you address all issues that are reasonably fixable within the next few weeks (let's say around 3-4 weeks), and furthermore document (and continue to create) milestones and plans to address all remaining issues within longer time frames. You could then briefly discuss the current lack of these features in the paper, together with a commitment and timetable for completion.
I am making this suggestion based on @abhishektiwari's comment:
I will wait to hear from the authors on the feedback I provided - none of which is blocking.
and based on @mattsouth's comment: https://github.com/openjournals/joss-reviews/issues/6140#issuecomment-2088534640
If @yarikoptic agrees to review this and recommends against accepting this paper BEFORE the "milestones-that-may-take-longer" are implemented, we will have to extend this review process.
Can I assume, based on your completed reviewer checklist and your comment in https://github.com/openjournals/joss-reviews/issues/6140#issuecomment-1928723929, that you recommend this paper for acceptance? If so, please let me know with a short note.
I have completed my checklist and I recommend this paper for acceptance. Some of the access control issues raised are not blocking, as mentioned in my feedback.
@sappelhoff my comment ^^
Thanks @sappelhoff! I believe we have already addressed all the issues that are reasonably addressable, but we will go over the remaining ones again to double check. For those we deem will need more time/support to address, we will mention those in the paper along with a tentative plan for addressing them.
we will go over the remaining ones again to double check. For those we deem will need more time/support to address, we will mention those in the paper along with a tentative plan for addressing them.
@bvhpatel do you have an update on whether you completed this?
@yarikoptic I haven't heard back from you, which I am interpreting as your wish to be unassigned. I will do that right away. Thanks a lot for your efforts up to this point and I hope you will consider reviewing again for JOSS in the future!
@mattsouth could you please reply to my comment above? --> https://github.com/openjournals/joss-reviews/issues/6140#issuecomment-2108655780
@editorialbot remove @yarikoptic from reviewers
@yarikoptic removed from the reviewers list!
@sappelhoff thanks for your patience! We have addressed the addressable issues. I believe others will require more time. I will mention them in the paper (and include a tentative timeline) by the end of next week.
Hi @sappelhoff! There are two main suggestions we cannot address at this moment: adding an authentication guard for the Flask server and removing duplicate constants. As per your suggestion, I have included this paragraph in the paper: "The development of SODA is still on-going to further simplify the process of formatting datasets according to the SDS and publishing them on the SPARC data portal. Additionally, there are several software engineering improvements that need to be made, such as adding an authentication guard for the Flask server and removing duplicate constants. They are anticipated to be completed by the end of the next SODA grant cycle in August 2025.". I hope this works.
I also took the time to track down Tram Ngo and added her as an author after receiving her permission (as per @mattsouth's comment discussed here).
Thanks!
Thanks @bvhpatel, that looks good to me. I have tried to reach @mattsouth via the contact form on his website as well. We'd definitely need at least one other finished review to proceed with this, so let's see whether he is still willing to finish his review for JOSS :-)
Hi @sappelhoff I have completed my checklist and I recommend this paper for acceptance.
@editorialbot set <DOI here> as archive
@editorialbot set <version here> as version
@editorialbot generate pdf
@editorialbot check references
and ask author(s) to update as needed
@editorialbot recommend-accept
Thanks a lot @mattsouth and @abhishektiwari for your reviews! 👏
@bvhpatel I created a "final checklist" for you and me to go through. Could you please have a look at the upper part in here: https://github.com/openjournals/joss-reviews/issues/6140#issuecomment-2214041482 ... and let me know once you are finished.
I will do several checks on my own as well and might comment in this thread if I find something that needs to be changed.
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1038/sdata.2016.18 is OK
- 10.3389/fphys.2021.693735 is OK
- 10.1038/s41597-023-02463-x is OK
- 10.12688/f1000research.73492.1 is OK
- 10.1101/2021.02.10.430563 is OK
- 10.1096/fasebj.2020.34.s1.02483 is OK
- 10.12688/f1000research.75071.1 is OK
MISSING DOIs
- No DOI given, and none found for title: SODA (Software to Organize Data Automatically) for...
- No DOI given, and none found for title: sparc-curation
- No DOI given, and none found for title: FAIRshare: FAIR data and software sharing made eas...
- No DOI given, and none found for title: FAIRhub: Easily manage, curate, and share FAIR and...
INVALID DOIs
- None
Submitting author: @bvhpatel (Bhavesh Patel)
Repository: https://github.com/fairdataihub/SODA-for-SPARC
Branch with paper.md (empty if default branch): paper
Version: v15.1.0
Editor: @sappelhoff
Reviewers: @mattsouth, @abhishektiwari
Archive: 10.5281/zenodo.12774826
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@mattsouth & @yarikoptic & @abhishektiwari, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:
@editorialbot generate my checklist
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @AJQuinn know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @mattsouth
📝 Checklist for @abhishektiwari