whedon closed this issue 1 year ago
We thank @travisbthomp for the review and the new suggestions. Please find below our answers to the remaining points to be resolved:
The Overview page of the documentation clearly states two goals for Frites. No target audience is stated in the documentation.
We included the target audience in the first page of the documentation.
Resolved (with a suggestion). I have marked this as resolved as the authors have included enough examples (in the examples section) to meet the very barest of minimal requirements, as the examples do contain the API objects if one digs through them.

I do have a suggestion, though. In the "API" section of the documentation you list several API objects/items (Dataset, Workflows, Connectivity, etc.). Clicking on these items (e.g. the frites.workflow link) takes me to a separate page. It would be useful to, at the very least, include links to the examples that use each cited API object (e.g. WfMi, WfMiCombine, WfConnComod, etc.) on this landing page. In this way, users can at least see a demonstration of these various APIs and get some minimal context for how they are used. The API documentation pages are currently as sparse as possible while still being minimally feasible for publication. If the authors want others to actually use their software post-publication, the API documentation could be made more verbose. This is especially true given the authors' explicit claim to be targeting "researchers and students with little programming knowledge".
We thank the reviewer for suggesting that examples be added after each object description in the API. However, almost all of the objects, methods and functions described in the API section already link to examples illustrating them. For example, clicking on the WfMi object, the user can see the methods of this object, such as WfMi.fit; examples using this method are listed right after the description and in the top-right part of the page. Finally, all of the examples using WfMi are listed at the bottom of the page. We believe these links already address the issue raised by the reviewer.
Are there automated tests or manual steps described so that the functionality of the software can be verified?
The Frites documentation currently ships with several examples. As far as I can see, these currently serve as the "automated tests" that allow for verification of the software functionality. The "Goals" page lists two goals of Frites: (A) Extract task-related cognitive brain networks and (B) Group-level statistical inferences. As a reviewer, I expect the main goals for the functionality to have verifiable examples / resources available. One can argue that (B) is verifiable as there are seven examples of statistical tests in the "Examples" section (for two variables only). However, I cannot find anywhere where a task-related cognitive brain network is extracted / produced. I do see that the authors have included "Autoregressive Models" examples. I looked through these and they do show how Granger causality can be used to infer connectivity but they use synthetic networks a priori. From the documentation goals, one would expect a set of a-priori signal data to be used to create a graph from the GC analysis. I suspect that this would be one of the main tasks that a user of Frites would want to carry out.
To resolve: To resolve this issue, a clear verification of the authors' stated goal (A) seems to be needed. One possibility would be to create and release a data set (using, e.g., StimSpecAR) of node-wise signals, reflecting, say, an 8-10 node graph, and create an example that imports the nodal stimulus data set, re-creates the connectivity, and outputs the weighted graph (or correlation matrix, or both) from those signals. The authors may have a better idea.
The examples are not the only source of testing. Concerning automated testing, we wrote unit tests (both smoke and functional tests) covering Frites' functionality on Python versions 3.7, 3.8 and 3.9. The tests run on Travis CI and through GitHub Actions and, as the reviewer knows, are launched at every commit. 84% of the code lines are covered by the tests. Note that we test not only the functionality of Frites, but also code quality. Taken together, we believe the testing requirement is resolved, as automated tests are included.
Regarding the issue raised by the reviewer concerning the use of real brain datasets as benchmarks and example scripts, we believe this is not the correct approach, because brain data cannot serve as a "ground truth" for testing. Unfortunately, as far as we are aware, there is no such thing as an example dataset for a task-related cognitive brain network. Nevertheless, we are open to any suggestions from the reviewer. Therefore, the only possibility for testing the code is to use simulations of FC networks with an underlying causal graph, for example using AR models. As the reviewer points out, we have already implemented several examples using AR models that demonstrate and test Frites' functionality.
To resolve: Add an explicit mention of this item in line with the review checklist requirements.
We thank the reviewer for pointing out the lack of instructions for developers. We added a page to the documentation entitled "Report, contribute and get help" describing how developers can submit pull requests, how to report issues with the software and, finally, how and where to get help from us. We also included templates for pull requests and for issues (i.e. bug reports, documentation improvements and feature requests).
To resolve: please include "a clear description of the high-level functionality and purpose of the software" in the summary section as requested by the checklist.
Regarding the description of the "high-level functionality", we followed the example provided on JOSS's website, where the "Summary" section introduces the research context and the functionalities of the software are described further in the "Statement of need" section.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Dear @meg-simula, @travisbthomp, @madvn, could we have an update on the review status?
@meg-simula I've checked the remaining boxes.
Thank you @travisbthomp for all of the comments. The documentation and the clarity of the manuscript improved a lot. We will wait for feedback from @madvn and @meg-simula.
Thank you very much @EtienneCmb and @travisbthomp for your feedback and responses, your work is much appreciated!
@madvn We are now waiting for the finalization of your review. Could you let us know when you might have a chance to look at the revised package?
Dear @meg-simula,
We have not heard from @madvn. I also tried to reach him by email, without success. To summarize, he proposed several modifications and we modified the manuscript accordingly. How can we finalize the revisions?
Thank you in advance,
Hi everyone, so sorry about the delay. I'm on vacation this week and will be back on Monday. I'll get to this Monday evening. Hope that's not too late.
No problem, welcome back.
@EtienneCmb It is inappropriate for you to contact a reviewer of your submission privately. Please, do not do this again.
Dear @kthyng and @meg-simula. Sorry to press again, but could we have an update on the review, please? It has been 20 days since the last messages.
Best,
Apologies for the delay. I'll look at it today.
Thank you for the responses and changes with regards to my comments, @EtienneCmb. All items I have noted have been resolved here.
I have been able to run some of the examples on my Ubuntu machine with Python 3.6. The examples and documentation look great!
Looks like some tests are failing on the master branch. The Travis build seems to be successful, though. Could you please resolve that?
I've checked all boxes except the one on automated testing to capture this last item. Otherwise looks great to me!
Thanks for the patience everyone and sorry again about the delay.
Hi @madvn,
Thank you for your response,
> Looks like some tests are failing on the master branch. The Travis build seems to be successful, though. Could you please resolve that?
Yes, we made some changes to the connectivity metrics. I have fixed the failing tests related to this.
Thank you,
Perfect, checks all boxes. LGTM @meg-simula
Great! I'll revisit by the end of the week.
-- Marie
@editorialbot check references
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1002/hbm.23471 is OK
- 10.3389/fnins.2013.00267 is OK
- 10.1145/2833157.2833162 is OK
- 10.1371/journal.pcbi.1008302 is OK
- 10.1016/j.neuroimage.2022.119347 is OK
- 10.21105/joss.01081 is OK
- 10.1523/JNEUROSCI.1672-16.2016 is OK
- 10.1016/j.jneumeth.2007.03.024 is OK
MISSING DOIs
- 10.5334/jors.148 may be a valid DOI for title: xarray: ND labeled arrays and datasets in Python
- 10.7551/mitpress/11442.003.0080 may be a valid DOI for title: Functional connectivity and neuronal dynamics: insights from computational methods
- 10.1523/jneurosci.4892-14.2015 may be a valid DOI for title: Characterization of Cortical Networks and Corticocortical Functional Connectivity Mediating Arbitrary Visuomotor Mapping
- 10.1016/j.neuroimage.2009.10.003 may be a valid DOI for title: Complex network measures of brain connectivity: uses and interpretations
- 10.21105/joss.01609 may be a valid DOI for title: infotheory: A c++/python package for multivariate information theoretic analysis
INVALID DOIs
- None
@EtienneCmb I've read through the revised manuscript and find it very accessible, readable and interesting. Thank you for all of your work with the revisions, and @travisbthomp @madvn for the reviews.
@EtienneCmb Could you comment on the references missing DOIs above? Are these expected or addressable?
> @EtienneCmb I've read through the revised manuscript and find it very accessible, readable and interesting. Thank you for all of your work with the revisions, and @travisbthomp @madvn for the reviews.
Thank you @meg-simula, indeed, the comments of the reviewers helped a lot in clarifying the manuscript.
> @EtienneCmb Could you comment on the references missing DOIs above? Are these expected or addressable?
I checked: all five DOIs are correct. Please find below the hyperlinks to the papers if you want to double-check:
Otherwise, is there anything else I should do to check?
@editorialbot check references
@editorialbot generate pdf
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1002/hbm.23471 is OK
- 10.3389/fnins.2013.00267 is OK
- 10.1145/2833157.2833162 is OK
- 10.1371/journal.pcbi.1008302 is OK
- 10.1016/j.neuroimage.2022.119347 is OK
- 10.21105/joss.01081 is OK
- 10.1523/JNEUROSCI.1672-16.2016 is OK
- 10.1016/j.jneumeth.2007.03.024 is OK
MISSING DOIs
- 10.5334/jors.148 may be a valid DOI for title: xarray: ND labeled arrays and datasets in Python
- 10.7551/mitpress/11442.003.0080 may be a valid DOI for title: Functional connectivity and neuronal dynamics: insights from computational methods
- 10.1523/jneurosci.4892-14.2015 may be a valid DOI for title: Characterization of Cortical Networks and Corticocortical Functional Connectivity Mediating Arbitrary Visuomotor Mapping
- 10.1016/j.neuroimage.2009.10.003 may be a valid DOI for title: Complex network measures of brain connectivity: uses and interpretations
- 10.21105/joss.01609 may be a valid DOI for title: infotheory: A c++/python package for multivariate information theoretic analysis
INVALID DOIs
- None
@EtienneCmb Thanks for checking - could you please also add the DOIs to the paper references? (So that they are not missing DOIs anymore.)
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@meg-simula ah yes, exactly, the DOIs are missing in the BibTeX file. I'll update them right now.
@editorialbot check references
@editorialbot generate pdf
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1002/hbm.23471 is OK
- 10.3389/fnins.2013.00267 is OK
- 10.5334/jors.148 is OK
- 10.1145/2833157.2833162 is OK
- 10.7551/mitpress/11442.001.0001 is OK
- 10.1523/jneurosci.4892-14.2015 is OK
- 10.1371/journal.pcbi.1008302 is OK
- 10.1016/j.neuroimage.2022.119347 is OK
- 10.21105/joss.01081 is OK
- 10.1523/JNEUROSCI.1672-16.2016 is OK
- 10.1016/j.jneumeth.2007.03.024 is OK
- 10.1016/j.neuroimage.2009.10.003 is OK
- 10.21105/joss.01609 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@meg-simula I think it's good now
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Very nice, thank you @EtienneCmb.
At this point could you:
I can then move forward with recommending acceptance of the submission.
@meg-simula
@editorialbot set 10.5281/zenodo.7278278 as archive
Done! Archive is now 10.5281/zenodo.7278278
@editorialbot set v0.4.3 as version
Done! version is now v0.4.3
@editorialbot recommend-accept
Attempting dry run of processing paper acceptance...
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1002/hbm.23471 is OK
- 10.3389/fnins.2013.00267 is OK
- 10.5334/jors.148 is OK
- 10.1145/2833157.2833162 is OK
- 10.7551/mitpress/11442.001.0001 is OK
- 10.1523/jneurosci.4892-14.2015 is OK
- 10.1371/journal.pcbi.1008302 is OK
- 10.1016/j.neuroimage.2022.119347 is OK
- 10.21105/joss.01081 is OK
- 10.1523/JNEUROSCI.1672-16.2016 is OK
- 10.1016/j.jneumeth.2007.03.024 is OK
- 10.1016/j.neuroimage.2009.10.003 is OK
- 10.21105/joss.01609 is OK
MISSING DOIs
- None
INVALID DOIs
- None
:wave: @openjournals/sbcs-eics, this paper is ready to be accepted and published.
Check final proof :point_right::page_facing_up: Download article
If the paper PDF and the deposit XML files look good in https://github.com/openjournals/joss-papers/pull/3676, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept
@meg-simula I checked the final proof, everything seems correct :)
Great! Now, we are just waiting for one of editors-in-chief to take a look at it - no need to take further action from your side.
@editorialbot recommend-accept
@editorialbot recommend-accept
@editorialbot recommend-accept
Submitting author: @EtienneCmb (Etienne Combrisson)
Repository: https://github.com/brainets/frites
Branch with paper.md (empty if default branch): paper
Version: v0.4.3
Editor: @meg-simula
Reviewers: @madvn, @travisbthomp
Archive: 10.5281/zenodo.7278278
:warning: JOSS reduced service mode :warning:
Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@madvn & @travisbthomp, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @meg-simula know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Review checklist for @madvn
✨ Important: Please do not use the Convert to issue functionality when working through this checklist, instead, please open any new issues associated with your review in the software repository associated with the submission. ✨
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Review checklist for @travisbthomp
✨ Important: Please do not use the Convert to issue functionality when working through this checklist, instead, please open any new issues associated with your review in the software repository associated with the submission. ✨
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper