Closed: editorialbot closed this issue 1 year ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=15.51 s (46.4 files/s, 35098.9 lines/s)
| Language | files | blank | comment | code |
|------------------|------:|------:|--------:|-------:|
| JavaScript | 525 | 70322 | 35297 | 199995 |
| CSS | 48 | 103 | 2476 | 159243 |
| TypeScript | 4 | 487 | 12 | 48783 |
| LESS | 81 | 1472 | 610 | 14630 |
| SVG | 8 | 0 | 2 | 3056 |
| JSON | 20 | 1 | 0 | 1978 |
| Sass | 14 | 34 | 34 | 1768 |
| HTML | 3 | 477 | 30 | 1712 |
| Markdown | 8 | 487 | 0 | 868 |
| Python | 4 | 41 | 19 | 268 |
| TeX | 1 | 13 | 0 | 68 |
| YAML | 1 | 1 | 4 | 22 |
| Jupyter Notebook | 1 | 0 | 51 | 21 |
| reStructuredText | 1 | 7 | 11 | 7 |
| **SUM** | 719 | 73445 | 38546 | 432419 |
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 851
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1109/IJCNN.2017.7966333 is OK
- 10.48550/arXiv.1511.07122 is OK
- 10.48550/arXiv.1901.05350 is OK
- 10.5281/zenodo.6430433 is OK
- 10.1016/j.neuroimage.2020.117012 is OK
MISSING DOIs
- None
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi @mwegrzyn, @62442katieb, @richford, thanks for agreeing to review. This is our review thread. Please generate your checklist with the above, and raise any issues about the software directly in the issues tab of the Brainchop repository, linking back to here. Any wider issues you'd like to discuss, please flag here, of course, and let me know if there's anything I can help with. Thanks again!
Dear @samhforbes, thank you for your service, and thanks to the respected reviewers for taking the time to review this paper. Could you please let us know when to expect your initial decision?
Hi @Mmasoud1 Remember that reviews take time to complete and the initial decision will follow that, as I think Arfon has already discussed with you.
@mwegrzyn, @62442katieb, @richford how are things going? Let me know if I can help.
Dear @samhforbes, regarding your last inquiry, I am afraid our respected reviewers seem to be unavailable.
Hello, apologies for the delay, I was out of office. Taking a look at this later today!
@62442katieb appreciate your time, thank you!
Hi @samhforbes, my apologies for the delayed response. I'll take a look at this tomorrow.
Dear @samhforbes, thank you for the reminder. I will finish reviewing at the end of this week (March 5). Best, Martin
Went through the paper and checklist and I have a few notes: The docs have a lot of information, with snippets of example code throughout, and the README + paper have a nice schematic of how Brainchop works, but it might be helpful to have a start-to-finish example solving a real-world problem.
The paper mentions that being browser-based avoids privacy issues. As it seems the authors intend this tool to be used by clinicians, it would be helpful to more explicitly spell out in the paper and docs how this avoids privacy issues (e.g., nothing is saved to an external server, the tool can be installed and run locally, etc.) as not all clinicians are familiar with the implications of browser-based vs. server-side processing.
I'm testing out Brainchop in Firefox (v109.0.1) on an M1 MacBook Pro, and many of the segmentation options won't run, with error messages such as "This model needs texture size of minimum 16384, while current browser supports only 8192" and "This option needs a dedicated graphics card". The system requirements are noted in the wiki, but it would be helpful to present those limitations more clearly in the paper and/or by denoting which segmentation options have specific system requirements on the browser client. Some options suggest I should use Safari instead, which would be helpful information to have up front (esp. as Figure 1 suggests Chrome and Firefox are the browsers of choice). As the authors intend this tool to increase accessibility, making these requirements/limitations clearer in the tool and in the paper would be helpful for users, especially those with limited access to computational resources.
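For context on the first error: the limit in question is the browser's WebGL `MAX_TEXTURE_SIZE`, which a page can query before attempting to load a model. A minimal, hypothetical sketch (the helper name is illustrative, and the 16384 threshold comes from the error message quoted above, not from Brainchop's actual code):

```javascript
// Hypothetical helper (not part of Brainchop) that checks whether a model's
// texture requirement fits within the browser's reported WebGL limit.
function modelFitsTextureLimit(maxTextureSize, requiredTextureSize) {
  return maxTextureSize >= requiredTextureSize;
}

// In a browser, the limit would be obtained like this:
// const gl = document.createElement('canvas').getContext('webgl2');
// const maxTex = gl.getParameter(gl.MAX_TEXTURE_SIZE);
// modelFitsTextureLimit(maxTex, 16384);
// On the reviewer's setup (limit 8192), this returns false, matching the error.
```

`MAX_TEXTURE_SIZE` is a standard WebGL query, so a check like this lets the UI warn the user up front instead of failing mid-inference.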
The paper would benefit from explicitly comparing Brainchop to existing automated segmentation tools in addressing the state of the field. The authors make it clear that the main benefit of this tool over others is its lack of installation requirements and ease of use, but that point would be more compelling if the authors compared it to the installation/usability of the tools it aims to supersede.
Very cool tool, overall! I think a few tweaks and it's good to go!
@emdupre, I went through the paper and source code. Overall, it is very impressive work. I opened a few issues on the source repository related to UX and the paper itself. They are also linked in this thread.
I cannot verify the performance claims in the documentation because I don't have that exact hardware. But they are very plausible given the performance I saw on my M1 MacBook.
@62442katieb also brought up some good points that I'd like to echo. Namely,
I enjoyed reading the paper and test driving the tool. With a few minor modifications, I will give this submission an enthusiastic approval!
Many thanks to our respected reviewers for their valuable feedback. We are actively working to address their comments and respond to the open issues.
Thanks both for your reviews! @Mmasoud1 do update us here when these issues have been addressed! @mwegrzyn please let us know your thoughts as well when you've had a chance to look.
Dear @Mmasoud1,
thank you for your work on brainchop. The following are some comments I have regarding the package and the manuscript:
[ ] I was wondering how well the different brain extraction algorithms perform and how they were pre-trained. More information on this would be helpful. In the acknowledgements I read “The authors would like to thank Kevin Wang and Alex Fedorov for discussions and pre-trained Meshnet models.”. Could you elaborate further? Thank you
[ ] I was not able to use all Segmentation Options of the tool, due to my insufficient hardware and/or software setup, but when using “Extract the Brain (FAST)” on one of my own images (https://openneuro.org/datasets/ds001419/versions/1.0.1/file-display/sub-01:anat:sub-01_T1w.nii.gz) I was not satisfied with the result. It contains both false-positives (a substantial part of the skull labelled as brain) and false-negatives (a substantial part of orbitofrontal brain areas not labelled as brain). This is shown below. I am used to imperfect brain extraction, for example from the FSL brain extraction tools, but there I can manually tweak the parameters to get better results. How can these issues be addressed in brainchop? Can I change certain settings to improve results?
[ ] How will the pre-trained models perform for brains with atrophy, lesions or for developing brains?
I hope you find these comments helpful. Please let me know if you have any questions regarding the points I raised. Thank you for your time and consideration.
Best, Martin Wegrzyn
@mwegrzyn Thank you for your time and valuable comments. We will submit our full responses shortly, hopefully by tomorrow, after addressing all the comments. For now, please double-check the Mocha comment again, since the error doesn't appear for me or for my collaborator, and it also wasn't reported by the other respected reviewers. This is what I have after running the test now:
For now, I may suggest making sure that the WebGL context is not lost and is working before re-testing Mocha, or please try another browser if you hit the same issue again. Thank you.
Dear @Mmasoud1,
thank you for your message. I am looking forward to your revision. I followed the link in the wiki to https://neuroneural.github.io/brainchop/test/runner.html and get the following message (on Debian with Firefox and on Windows with Firefox/Chrome/Edge):
I hope this helps.
Best, Martin Wegrzyn
Hi @mwegrzyn thanks for flagging these, and I know @Mmasoud1 will be keen to address the points you raise. It sounds like a couple of these issues should be raised as issues on the software repo since other users may run into the same sort of problems.
Dear @62442katieb , thank you for your time and valuable comments.
Example usage: We added a showcase example to the doc.
Data privacy statement: In the statement of need section, we added the suggested phrase in line 28; we also added a similar phrase to the wiki home page and to the brainchop welcome screen.
Performance & functionality: Despite the high diversity of H/W and S/W resources on the users' side, we did our best to give them feasible recommendations and to ensure that every end-user can run at least one model successfully. Since software and hardware are renewed periodically, improving in cost, speed, and computational power, we expect the resource issues raised here to decline over time.
A verified list of S/W and H/W that run each brainchop model successfully is provided at this link, which is also mentioned in the UI and wiki to give end-users more information about the resources each model was tested on. We also modified the UI to pop up warning messages when a dedicated graphics card or a minimum texture size is needed, pointing the end-user to the browser resources window (which shows the end-user's browser resources) and to the aforementioned list of verified S/W and H/W for each model.
Also, we updated the System Requirements page on the wiki with more information and recommendations for our end-users.
On our end, a GeForce GTX 1050 Ti was able to run all the models. In general, internal or built-in graphics chips (e.g., HD, UHD) can run most light models, such as Full Brain GWM (light), Compute Brain Mask (FAST), or Extract the Brain (FAST), but they are not able to run larger models such as Full Brain GWM (large), FS aparc+aseg Atlas 104 (failsafe), or Cortical Atlas 50. We also noted that some users, despite the dedicated GPUs they may have, had not configured them for use with their browsers, so their browsers could only detect the built-in graphics chip on the motherboard, and they didn't notice. That is why we added the Browser Resources icon to the taskbar, to point this issue out to them.
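The "browser only sees the integrated chip" situation can be surfaced from the page itself via the standard `WEBGL_debug_renderer_info` extension, which exposes the renderer string the browser is actually using. A hypothetical sketch (the helper and its heuristic are illustrative, not Brainchop's actual code):

```javascript
// Illustrative heuristic (not Brainchop's actual code): classify a WebGL
// renderer string as an integrated chip or a software rasterizer, so the UI
// can warn users whose browser is not using their dedicated GPU.
function looksIntegrated(rendererString) {
  // Intel HD/UHD strings indicate integrated graphics; llvmpipe and
  // SwiftShader are software renderers (no GPU acceleration at all).
  return /\b(intel.*(hd|uhd)|llvmpipe|swiftshader)\b/i.test(rendererString);
}

// In the browser:
// const gl = document.createElement('canvas').getContext('webgl');
// const ext = gl.getExtension('WEBGL_debug_renderer_info');
// const renderer = ext
//   ? gl.getParameter(ext.UNMASKED_RENDERER_WEBGL)
//   : gl.getParameter(gl.RENDERER);
// looksIntegrated(renderer);
```

A check like this is only a heuristic (renderer strings vary across vendors and browsers), but it is enough to prompt the user to look at the Browser Resources window.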
State of the field: In the statement of need section, we added the state of the field paragraph in lines 29-34.
Brainchop, as the first volumetric brain segmentation tool that runs completely on the user side, provides a strong proof of concept of the browser's capability to process volumetric neuroimaging. It also satisfies several important needs: accuracy, due to volumetric segmentation; privacy and data residency, due to user-side processing; and ease of use, due to zero installation.
Data: We shared our loading sample and also mentioned our showcase open dataset. End-users can reproduce our segmentation results in no time and with a few clicks if they meet the System Requirements or follow the verified H/W and S/W configurations, a flexible list of multiple H/W and S/W options for each model.
Functionality documentation: In addition to the information about the brainchop architecture, pre-training and inference pipelines, etc., we also added more details about the segmentation models, model browsing options, and System Requirements.
Best,
Dear @richford, thank you for your time, valuable comments and opening bug issues.
Add author contributions: Contributions section added (lines 74-78).
Add to acknowledgments: Acknowledgments section modified.
Sidebar does not scroll on home page (issue 18): We added a scroll bar to the left mini forms of tool options (e951fbd).
Segmentation model info popup cuts off content (issue 19): We pushed fixes for the bug (9f5cae7).
State of the field: In the statement of need section, we added the state of the field paragraph in lines 29-34.
Example usage: We added a showcase example to the docs.
Best,
Dear @mwegrzyn, thank you for your time, and valuable comments.
I tried to mediate this styling conflict between Webix and Papaya by adjusting the padding. This makes the MRI planes visible for screen widths > 1350 px, and it is subject to further improvements to fit smaller sizes.
Performance: For verified S/W and H/W, please check this link.
In our case, a GeForce GTX 1050 Ti was able to run all the models.
Internal or built-in graphics chips (e.g., HD, UHD) can run most light models, such as Full Brain GWM (light), Compute Brain Mask (FAST), or Extract the Brain (FAST), but in general they are not able to run larger models such as Full Brain GWM (large), FS aparc+aseg Atlas 104 (failsafe), or Cortical Atlas 50.
We tried to make sure every end-user is able to run at least one model successfully. Please note that we are performing volumetric segmentation in the browser for the first time; this is normally done in desktop applications. Also, taking into consideration the yearly improvements in H/W and S/W (i.e., browsers) in terms of speed and memory, the resource issues will decline with time. In our case we use a GeForce GTX 1050 Ti, which is an average GPU compared to state-of-the-art GPUs, and it still performs very well with all models. However, we also noted that some users, despite the dedicated GPUs they may have, had not configured them for use with their browsers, so their browsers could only detect the built-in graphics chip on the motherboard, and they didn't notice. That is why we added the Browser Resources icon to the taskbar, to point this issue out to them.
I added two deeper Meshnet models (11 filters per layer) for better brain extraction and masking accuracy: Extract the Brain (High Acc) and Compute Brain Mask (High Acc). Both models were tested with an Apple M1 and a GeForce GTX 980.
In the UI, we also provided end-users with a browser resources window showing their browser's resources and a link to the verified S/W and H/W for each model.
For more info about Meshnet training, please refer to this previous work of our lab here; it is also referenced in our wiki. We also plan to integrate a training pipeline into the brainchop repo for advanced users in version 3.0.0.
Although the fast brain extraction model (i.e., Extract the Brain (FAST)) is less accurate than the aforementioned models, it is more suitable for machines with internal or built-in graphics chips (e.g., HD, UHD) that are normally unable to run 11 filters per layer.
State of the field: In the statement of need section, we added the state of the field paragraph in lines 29-34.
Typos: The suggested typo corrections were applied for Papaya and three.js. Thank you!
Load tfjs Models: A description section has been added to the wiki, and we also added a brief functionality description of each model currently in use with the UI. These are also accessible within the UI by clicking the info icon next to the model list.
Mocha test: We fixed it, taking into consideration that the issue may arise due to slightly different floating-point calculations between machines.
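The standard remedy for cross-machine floating-point drift in a test suite is to assert with a tolerance rather than exact equality. A hedged sketch of the pattern (not necessarily the exact fix applied in brainchop's tests):

```javascript
// Compare floats with an absolute tolerance instead of strict equality,
// so tiny per-machine rounding differences don't fail the test suite.
function closeTo(actual, expected, epsilon = 1e-6) {
  return Math.abs(actual - expected) <= epsilon;
}

// Example: 0.1 + 0.2 is not exactly 0.3 in IEEE-754 doubles,
// but it is equal within tolerance:
closeTo(0.1 + 0.2, 0.3); // true, even though (0.1 + 0.2 === 0.3) is false
```

Chai, commonly paired with Mocha, ships the same idea built in as `expect(actual).to.be.closeTo(expected, delta)`.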
Best,
@Mmasoud1, thanks for your revisions.
@emdupre, the authors have responded to all of the issues that I raised and I recommend this submission for publication!
Thank you @richford.
Dear @Mmasoud1, thank you for your response and the thoughtful revision!
Dear @samhforbes, I have now checked all the boxes in my reviewer checklist and recommend the submission for publication.
Best, Martin Wegrzyn
Thank you @mwegrzyn
@62442katieb, do you have any questions or comments on our corrections? I will be away traveling starting next week, and I hope you can kindly review them before then, if possible.
Looks good to me!
Thank you @62442katieb!
@samhforbes, we are done! We appreciate the dedicated time and helpful comments from all the respected reviewers, which increased the quality of our work. Please let me know if I can help with anything next. Thank you!
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1109/IJCNN.2017.7966333 is OK
- 10.48550/arXiv.1511.07122 is OK
- 10.48550/arXiv.1901.05350 is OK
- 10.5281/zenodo.6430433 is OK
- 10.1016/j.neuroimage.2020.117012 is OK
- 10.1087/20150211 is OK
- 10.1002/hbm.460020402 is OK
MISSING DOIs
- None
INVALID DOIs
- https://doi.org/10.1006/nimg.1998.0395 is INVALID because of 'https://doi.org/' prefix
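The fix here is mechanical: the reference checker expects bare DOIs, so the resolver prefix must be stripped from the `.bib` entry. A small, hypothetical helper illustrating the normalization (not editorialbot's actual code):

```javascript
// Strip a DOI resolver prefix so only the bare DOI remains, e.g.
// "https://doi.org/10.1006/nimg.1998.0395" -> "10.1006/nimg.1998.0395".
// Already-bare DOIs pass through unchanged.
function bareDoi(doi) {
  return doi.replace(/^https?:\/\/(dx\.)?doi\.org\//i, '');
}
```

As the next check run below shows, once the prefix was removed from the bibliography entry, the same reference validated as OK.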
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1109/IJCNN.2017.7966333 is OK
- 10.48550/arXiv.1511.07122 is OK
- 10.48550/arXiv.1901.05350 is OK
- 10.5281/zenodo.6430433 is OK
- 10.1016/j.neuroimage.2020.117012 is OK
- 10.1087/20150211 is OK
- 10.1002/hbm.460020402 is OK
- 10.1006/nimg.1998.0395 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Thanks @richford for the review you completed, that's really helpful.
Thanks also @mwegrzyn and @62442katieb. @62442katieb, can you please confirm that you're happy with the changes made and how they addressed the issues you raised, by completing your reviewer checklist?
Dear @samhforbes, please kindly note that @62442katieb has completed her reviewer checklist. I see all of the items checked on my end.
Hi @Mmasoud1 thanks for flagging this - I'm not sure what's up with my browser, so I apologise.
Great! Looks like everyone is really pleased, and having had a little play with it, so am I. Can you please check that the version number is accurate, confirm the author list, and then archive the software somewhere with a stable DOI (Zenodo or figshare, for example), making sure the authors and title match those of the paper? Then could you please post the DOI here.
Hi @samhforbes , thank you for the suggested changes.
I reviewed the release version number against the latest changes from the review and checked the paper's author list (including ORCIDs and affiliations). I also archived the latest version on Zenodo with the same title, author list (including ORCIDs), and MIT license:
Version: 2.1.0 DOI: 10.5281/zenodo.7735848
Submitting author: @Mmasoud1 (Mohamed Masoud)
Repository: https://github.com/neuroneural/brainchop
Branch with paper.md (empty if default branch):
Version: v2.1.0
Editor: @samhforbes
Reviewers: @mwegrzyn, @62442katieb, @richford
Archive: 10.5281/zenodo.7735848
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@mwegrzyn & @62442katieb & @richford, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @samhforbes know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @mwegrzyn
📝 Checklist for @richford
📝 Checklist for @62442katieb