Hello everyone,

Following up on @ctoroserey's and @PeerHerholz's resurrection of this project (thank you both!), I finally found the time to look into the things that are (in my opinion) still missing before we can submit this to JOSS.

Below is a to-do list of things I consider still open (happy to discuss):
## Minor revisions
- [ ] GitHub issues: before submitting, we need to make sure that all important issues are taken care of.
- [ ] Make sure the author list is up to date. This is especially relevant for the paper's authors, their affiliations, and their ORCID iDs.
- [ ] Finish writing the paper. @ctoroserey, I've added your text snippet from Slack. Additional motivation/inspiration could be taken from my other JOSS paper, atlasreader.
- [ ] With respect to the R plots for the paper: I would recommend (1) adding titles to the two matrix plots, (2) changing the column/row names so that they list something like "_roi0", "_roi1", "_roi2", ..., and (3) changing the x-axis of the line plots to log scale to better spread the "number of clusters" data points (I implemented something like this in `Benchmark.ipynb`).
- [ ] The paper is also still missing a `paper.bib` file with the references used in the text.
- [ ] The `data` folder contains `.gii` files in the `hcp` subfolder which are never used throughout the scripts or notebooks. Can these files be removed?
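Regarding point (3) on the line plots, a minimal matplotlib sketch of the log-scale change (the data here is made up purely for illustration; the real values live in `Benchmark.ipynb`):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this also runs headless
import matplotlib.pyplot as plt

# Hypothetical benchmark data: runtime vs. number of clusters
n_clusters = [2, 5, 10, 50, 100, 500, 1000]
runtime_s = [0.1, 0.2, 0.4, 1.5, 2.9, 13.0, 27.0]

fig, ax = plt.subplots()
ax.plot(n_clusters, runtime_s, marker="o")
ax.set_xscale("log")  # spreads the "number of clusters" data points evenly
ax.set_xlabel("Number of clusters (log scale)")
ax.set_ylabel("Runtime [s]")
fig.savefig("benchmark_lineplot.png")
```

With a linear x-axis, the points at 2–100 clusters would be squashed into the left margin; the log scale gives each decade equal visual weight.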
## Additional notes

As a note, I've removed the `docs` folder and any other documents that were related to potentially creating GitHub pages. The reason is that I don't think this is necessary, and I don't have the time to set it up myself :-)
## Major revisions

### Notebooks
- [ ] Currently, the `notebooks` folder contains two notebooks: one called `Demo` and one called `Benchmarking`. If we want to keep both, we should highlight how they differ and what kind of insights can be gained from each.
- [ ] I'm not yet sure how best to do it, but I think the `Benchmarking` notebook should be written in such a way that the simulation could be rerun if needed, e.g. guarded by something like `run_simulation=False`.
- [ ] I'm not yet 100% happy with the layout/structure of the two notebooks. The content is good, but I have the feeling that some intermediate steps are missing. I'm not sure whether it needs more visualizations or more step-by-step explanations. The example notebook from atlasreader might serve as inspiration or as an additional example.
- [ ] Additionally, not all functions in the code base are showcased and explored in the `Demo` notebook.
- [ ] `Demo.ipynb` also still contains a "Things We Learned" and a "Thanks to" section. Both of them should probably be removed.
- [ ] In the end, these notebooks should ideally be connected to a mybinder instance so that they can be run and explored in the cloud.
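For the `run_simulation` idea, one possible pattern (all names here are hypothetical; the actual simulation code lives in the `Benchmarking` notebook) is to guard the expensive cell behind a flag and cache its results to disk:

```python
import os
import pickle

RUN_SIMULATION = False  # flip to True to recompute everything from scratch
CACHE_FILE = "simulation_results.pkl"  # hypothetical cache location

def run_simulation():
    """Placeholder standing in for the expensive benchmarking loop."""
    return {"n_clusters": [2, 10, 100], "runtime_s": [0.1, 0.4, 2.9]}

# Recompute only when explicitly requested or when no cache exists yet;
# otherwise load the previously stored results.
if RUN_SIMULATION or not os.path.exists(CACHE_FILE):
    results = run_simulation()
    with open(CACHE_FILE, "wb") as f:
        pickle.dump(results, f)
else:
    with open(CACHE_FILE, "rb") as f:
        results = pickle.load(f)
```

That way readers can inspect the plots immediately, while anyone who wants to verify the numbers can flip the flag and wait for the full run.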
### Test suite

First, the test suite currently only tests the `Adjacency.py` functions. Ideally, we would have a test suite that also covers all the other functions.

Second, once these tests are ready, I would recommend setting up Travis to run the tests and the notebooks after each commit. The setup could again be inspired by the one from atlasreader.
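As a sketch of what such tests could look like in pytest style (the helper below is hypothetical — the real tests would import the project's own modules instead):

```python
import numpy as np

# Hypothetical stand-in for a function from the code base; the actual
# tests would exercise the non-Adjacency modules directly.
def make_adjacency(labels):
    """Return a boolean matrix marking pairs of vertices with equal labels."""
    labels = np.asarray(labels)
    return labels[:, None] == labels[None, :]

def test_make_adjacency_is_symmetric():
    adj = make_adjacency([0, 1, 1, 2])
    assert adj.shape == (4, 4)
    assert np.array_equal(adj, adj.T)  # adjacency must be symmetric
    assert adj.diagonal().all()        # every vertex matches itself
```

Small property checks like this (shape, symmetry, self-matching) are cheap to write and tend to catch most regressions; Travis would then simply run `pytest` on every commit.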
Best, Michael