Closed editorialbot closed 11 months ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.08 s (799.5 files/s, 80364.0 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          27            414            980           1377
Jupyter Notebook                 5              0           1831            494
reStructuredText                23            297            421            369
TeX                              1              8              0            106
YAML                             4             11              7             63
Markdown                         2             18              0             51
TOML                             1              3              0             29
DOS Batch                        1              8              1             26
make                             1              4              7              9
-------------------------------------------------------------------------------
SUM:                            65            763           3247           2524
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 773
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1038/s41593-020-0699-2 is OK
- 10.3389/fnsys.2016.00109 is OK
- 10.1017/CBO9781107447615 is OK
- 10.1038/nature07140 is OK
- 10.4249/scholarpedia.1343 is OK
MISSING DOIs
- 10.1088/0954-898x_15_4_002 may be a valid DOI for title: Maximum likelihood estimation of cascade point-process neural encoding models
INVALID DOIs
- 8.1101/463760 is INVALID
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Dear @clinssen and @Saran-nns
This is the review thread. Firstly, type
@editorialbot generate my checklist
to generate your own checklist. The checklist contains several items; please check each one off as you complete the corresponding task.
Please write your comments as separate posts and do not modify your checklist descriptions.
The review process is interactive, so you can always interact with the authors, the reviewers, and the editor. You can also create issues and pull requests in the target repository. Please mention this thread's URL in those issues so we can keep track of what is going on outside this thread.
Please do not hesitate to ask me about anything, anytime.
Thank you in advance!
@clinssen - this is the review thread of the JOSS submission to which you were previously assigned as a reviewer. Could you please generate your task list and start your review?
Thank you in advance.
Thanks to the authors for the development effort and to the editors of JOSS for their excellent work! Please find my initial review below.
Spikeometric is a Python package that has been in development since at least May 30th, 2022. Its total lines-of-code count is 667. It extends the PyTorch Geometric package, which in turn extends PyTorch, and provides functionality related to several classes of linear/nonlinear models in computational neuroscience. The software appears mature, with API documentation, user documentation, tutorials, CI, an issue tracker, etc. Although small in scale, the package is focused on a relatively simple framework (that of linear/nonlinear models) and includes several model variants and methods for providing stimulus to the network.
Unfortunately, the documentation does not contain any (comparative) benchmarks. Therefore, the claim of being "fast" cannot be verified; however, this claim is quite generic (perhaps "fast" refers to the low time investment required of end users?) and is not essential to the software. There does not seem to be a demonstration/example of a very large, "real world size" network, and there is no textual comparison to other (similar) tools in the field.
The stimulus generators seem rather basic and it would be interesting to provide, for example, input from files. The generators also seem to generate all spike times ahead of the simulation rather than generating spikes "live" while the simulation is ongoing.
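The distinction between precomputing all spike times and generating them "live" can be sketched generically in plain Python (this is not Spikeometric's actual API; `precomputed_stimulus` and `live_stimulus` are illustrative names, and the Bernoulli spike draw is just a stand-in for a real stimulus model):

```python
import random

def precomputed_stimulus(n_steps, rate=0.1, seed=0):
    """Draw all stimulus spikes ahead of the simulation (precomputed)."""
    rng = random.Random(seed)
    return [rng.random() < rate for _ in range(n_steps)]

def live_stimulus(rate=0.1, seed=0):
    """Yield stimulus spikes one step at a time while the simulation runs."""
    rng = random.Random(seed)
    while True:
        yield rng.random() < rate

# Precomputed: the full spike train exists before the simulation starts.
ahead = precomputed_stimulus(1000)

# Live: each spike is drawn only when the simulation asks for it,
# so memory use stays constant regardless of simulation length.
gen = live_stimulus()
first_steps = [next(gen) for _ in range(1000)]

# With the same seed, both approaches produce the same spike stream.
assert ahead == first_steps
```

The generator variant matters mainly for long simulations or closed-loop experiments, where the stimulus may depend on the network's ongoing activity and cannot be known ahead of time.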
There are two issues with the DOIs that were already observed by the editorial bot.
@lepmik - could you please update your status and let us know how your work is going?
Hi, a bit unsure of what you mean here. Should we start responding to reviews immediately? - I assumed we should wait until all reviews are done.
@lepmik - Sure, you can start to discuss and apply the changes. The review process of JOSS is interactive, so you don't need to wait until the whole process ends.
Thank you for the review @clinssen! I'm happy to begin implementing the suggested changes as soon as my exam season is over in a week.
@lepmik - Could you please update your status and let us know how your work is going? Thank you in advance.
@jbytecode I'm almost done addressing the points brought up by @clinssen. It is true that the 'fast' emphasis is more about friendliness than speed, so I will change the wording in the article to make this apparent. However, I'm also adding a benchmarking section in the documentation. Next, I have added an example where spikeometric is used to run simulations of a big network imported from the MICrONS synapse dataset from the Allen Institute. This will be merged into the main branch as soon as possible. For the stimulus, there is now a class supporting loading stimulus from a file, and an improved system for batching stimulus. Finally, I've fixed the DOI issues identified by the bot and am in the process of adding a discussion of other similar tools to the article. I hope to have everything done by the end of the week at the latest.
Hey again. I've released a new version of spikeometric (1.0.2) on PyPI with the new features. The paper is also updated and in the paper branch. The only thing missing is a benchmarking section of the documentation, which is taking a while because of limited GPU availability. It is, however, in the works. What are the next steps?
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Our reviewers @clinssen and @Saran-nns will share their thoughts with us as soon as they are available. Thank you.
@clinssen and @Saran-nns - Could you please update your status and let us know how your review is going? Thank you in advance.
Hi @jbytecode, I have been following up on the review by @clinssen. Considering the significant changes made, I will resume my review this weekend.
I think it makes the most sense to wait for the benchmarks. Please note that I will have reduced availability until September 18th. Cheers!
@Saran-nns - Could you please update your status? Thank you in advance.
Hi @JakobSonstebo, thanks for considering my suggestions. In reference to your earlier response regarding @clinssen's suggestion, you stated that you've implemented a class to facilitate stimulus loading from a file and enhanced the system for stimulus batching. Can these changes be found in the documentation or usage examples and tests?
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi @Saran-nns, thank you for your suggestions. An example of how to use the LoadedStimulus class can be found in the simulate_with_stimulus.ipynb file in the examples directory. I notice, however, that it doesn't show up in the overview of the available stimulus classes on the documentation page, and I will add it there now. For the batching of stimulus, the stimulus classes should be passed a batch_size parameter matching what is sent to the DataLoader class. I'll edit the examples and documentation to make sure this comes across more clearly.
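As a generic illustration of why the stimulus batch size must match the loader's (this sketch does not use Spikeometric's actual API; `batched`, `networks`, and `stimuli` are hypothetical names standing in for the real DataLoader and stimulus objects):

```python
def batched(seq, batch_size):
    """Split a sequence into consecutive batches of size batch_size."""
    return [seq[i:i + batch_size] for i in range(0, len(seq), batch_size)]

# Hypothetical example: 8 networks and one stimulus trace per network.
networks = list(range(8))
stimuli = [f"stim_{i}" for i in range(8)]

batch_size = 4  # must be the same on the data side and the stimulus side
network_batches = batched(networks, batch_size)
stimulus_batches = batched(stimuli, batch_size)

# With matching batch sizes, each network batch lines up one-to-one
# with the corresponding stimulus batch; mismatched sizes would pair
# networks with the wrong stimulus traces.
for nb, sb in zip(network_batches, stimulus_batches):
    assert len(nb) == len(sb)
```

If the two batch sizes diverge, the pairing silently drifts after the first batch, which is presumably why the documentation needs to state the constraint explicitly.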
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1038/s41593-020-0699-2 is OK
- 10.1101/463760 is OK
- 10.3389/fnsys.2016.00109 is OK
- 10.1017/CBO9781107447615 is OK
- 10.1038/nature07140 is OK
- 10.4249/scholarpedia.1343 is OK
- 10.1088/0954-898x_15_4_002 is OK
- 10.1017/CBO9780511541612 is OK
- 10.7554/eLife.47314 is OK
- 10.1007/978-1-4614-7320-6_255-1 is OK
- 10.3389/fninf.2022.883333 is OK
- 10.3389/fncom.2021.627620 is OK
- 10.3389/fninf.2018.00089 is OK
MISSING DOIs
- 10.4249/scholarpedia.1430 may be a valid DOI for title: NEST (NEural Simulation Tool)
- 10.1109/jproc.2023.3308088 may be a valid DOI for title: Training Spiking Neural Networks Using Lessons From Deep Learning
INVALID DOIs
- None
Hi @JakobSonstebo, thanks for the update. Please check the missing DOIs above; I also found a typo in line 62 of the paper. Once that is done, I am happy to recommend acceptance. PS: The benchmark looks good to me.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1038/s41593-020-0699-2 is OK
- 10.1101/463760 is OK
- 10.3389/fnsys.2016.00109 is OK
- 10.1017/CBO9781107447615 is OK
- 10.1038/nature07140 is OK
- 10.4249/scholarpedia.1343 is OK
- 10.1088/0954-898x_15_4_002 is OK
- 10.4249/scholarpedia.1430 is OK
- 10.1017/CBO9780511541612 is OK
- 10.7554/eLife.47314 is OK
- 10.1007/978-1-4614-7320-6_255-1 is OK
- 10.3389/fninf.2022.883333 is OK
- 10.3389/fncom.2021.627620 is OK
- 10.3389/fninf.2018.00089 is OK
- 10.48550/arXiv.2109.12894 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Hi again @Saran-nns!
I've fixed the typo and added the missing DOIs. It looks good now. Thank you for taking the time to go through our work and help us improve it. Greatly appreciated.
Jakob
Great, thanks @JakobSonstebo. @jbytecode, my checklist is complete, and I'm happy to recommend acceptance.
@Saran-nns - thank you!
@clinssen - Sorry for bothering you, but we are a little past the normal time-span for this submission. Could you please update your status? Thank you in advance.
Hi, thanks for all the updates. This looks very good and I'm happy to give it my thumbs up! I hereby consider all of my earlier points resolved, and the checklist to be fully completed.
Just a few small points:
@clinssen - When the corresponding items are all okay, could you please click the checkboxes in your task-list?
@lepmik - please consider applying the changes that our reviewer suggested and ping me when you are done with them. Thank you in advance.
@jbytecode Hi - I've now made changes following @clinssen's suggestions (thank you!).
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1038/s41593-020-0699-2 is OK
- 10.1101/463760 is OK
- 10.3389/fnsys.2016.00109 is OK
- 10.1017/CBO9781107447615 is OK
- 10.1038/nature07140 is OK
- 10.4249/scholarpedia.1343 is OK
- 10.1088/0954-898x_15_4_002 is OK
- 10.4249/scholarpedia.1430 is OK
- 10.1017/CBO9780511541612 is OK
- 10.7554/eLife.47314 is OK
- 10.1007/978-1-4614-7320-6_255-1 is OK
- 10.3389/fninf.2022.883333 is OK
- 10.3389/fncom.2021.627620 is OK
- 10.3389/fninf.2018.00089 is OK
- 10.48550/arXiv.2109.12894 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot set <DOI here> as archive
@editorialbot set <version here> as version
@editorialbot generate pdf
@editorialbot check references
and ask author(s) to update as needed
@editorialbot recommend-accept
@lepmik - The PR (https://github.com/bioAI-Oslo/Spikeometric/pull/28) includes some minor changes to the bibliography file. Please review the PR. If you agree with the changes, please merge. Thank you in advance.
@editorialbot check references
Submitting author: @lepmik (Mikkel Elle Lepperød)
Repository: https://github.com/bioAI-Oslo/Spikeometric
Branch with paper.md (empty if default branch): paper
Version: v1.0.3
Editor: @jbytecode
Reviewers: @clinssen, @Saran-nns
Archive: 10.5281/zenodo.8358903
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@clinssen & @Saran-nns, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:
@editorialbot generate my checklist
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @jbytecode know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @Saran-nns
📝 Checklist for @clinssen