Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
```
github.com/AlDanial/cloc v 1.90  T=0.01 s (560.1 files/s, 23662.6 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Markdown                         1             20              0             53
TeX                              1              4              0             46
YAML                             1              1              0             28
JSON                             1              0              0             17
-------------------------------------------------------------------------------
SUM:                             4             25              0            144
-------------------------------------------------------------------------------
```
Commit count by author:
5 Jeremy Magland
1 Cody Baker
Paper file info:
📄 Wordcount for paper.md is 977
✅ The paper includes a Statement of need section
License info:
🔴 Failed to discover a valid open source license
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- None
MISSING DOIs
- No DOI given, and none found for title: The neurodata without borders ecosystem for neurop...
- No DOI given, and none found for title: Neurodata without borders: creating a common data ...
- No DOI given, and none found for title: nwbwidgets: Explore the hierarchical structure of ...
- No DOI given, and none found for title: h5wasm: A WebAssembly HDF5 reader/writer library
- No DOI given, and none found for title: zarr-developers/zarr-python: v2.17.1
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
👋🏼 @magland @alexrockhill, @jodeleeuw, @sanjayankur31 this is the review thread for the paper. All of our communications will happen here from now on.
As a reviewer, the first step is to create a checklist for your review by entering
@editorialbot generate my checklist
at the top of a new comment in this thread.
There are additional guidelines in the message at the start of this issue.
Please feel free to ping me (@mstimberg) if you have any questions/concerns. Looking forward to a constructive review!
Please ignore the statement about the licence and the lines of code in the comments above. Those checks were run on the paper branch, which doesn't contain the code or the licence file.
As correctly detected in the pre-review, the licence is an OSI-approved Apache 2.0 licence. If you are curious, you can find the LOC count in the pre-review issue as well: https://github.com/openjournals/joss-reviews/issues/6512#issuecomment-2012685994
I'm reviewing the functionality by selecting a few random datasets. Am I supposed to be able to interactively visualize the time series for this dataset https://neurosift.app/?p=/nwb&dandisetId=000122&dandisetVersion=draft&url=https://api.dandiarchive.org/api/assets/37f5e3ce-e1b7-48a7-a47d-6745875eacaa/download/&tab=neurodata-item:/stimulus/presentation/auditory|TimeSeries? I don't see anything in the manuscript about ElectricalTimeSeries and Auditory TimeSeries, so this might fall under "not yet implemented", but it's somewhat similar to the Spike Raster Plot in Figure 3. So even if it is outside of the scope, it would be nice to have within the manuscript a table (or some such) of which kinds of data are implemented, which are on the roadmap, and which are perhaps not planned to be implemented.
Are the contents of .vscode cruft? (https://github.com/flatironinstitute/neurosift/blob/main/.vscode) Or do they serve some purpose for the software? I'm guessing these could maybe be removed.
I don't see any tests, and it looks like they are not integrated into the CI for pull requests (based on the merged pull requests). This is a requirement for the review:
Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support
Just following the letter of the review guidelines here: could you please add instructions that say specifically where to 3) seek support, whether that's through GitHub Discussions or some other channel?
Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support
Just following the letter of the review guidelines here: could you please add instructions that say specifically where to 3) seek support, whether that's through GitHub Discussions or some other channel?
There is a simple "Seeking Help" section in the main README.
I don't see any tests, and it looks like they are not integrated into the CI for pull requests (based on the merged pull requests). This is a requirement for the review:
Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
You are right, there are no automated tests at this point. But the way I read the instructions, this is not strictly necessary. It does have "manual steps described so that the functionality of the software can be verified". The main README contains several ways that you can verify that the software is functioning. If you want to test that the software works for visualizing a local .nwb file, you could download a relatively small example from here. If it would be helpful I can update the readme to point to that example.
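For reference, a quick manual check would look something like the sketch below (the download URL is only a placeholder; substitute any small .nwb asset):

```bash
# Rough manual check -- the asset URL is a placeholder, substitute any small .nwb file
pip install neurosift
curl -L -o example.nwb "https://api.dandiarchive.org/api/assets/<asset-id>/download/"
neurosift view-nwb example.nwb   # should open the viewer in your browser
```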
Are the contents of .vscode cruft? (https://github.com/flatironinstitute/neurosift/blob/main/.vscode) Or do they serve some purpose for the software? I'm guessing these could maybe be removed.
One of the tasks was not being used, and so I removed that just now. The other two commands are used for incrementing the version of the Python package and deploying to pypi -- so those are still needed.
I'm reviewing the functionality by selecting a few random datasets. Am I supposed to be able to interactively visualize the time series for this dataset https://neurosift.app/?p=/nwb&dandisetId=000122&dandisetVersion=draft&url=https://api.dandiarchive.org/api/assets/37f5e3ce-e1b7-48a7-a47d-6745875eacaa/download/&tab=neurodata-item:/stimulus/presentation/auditory|TimeSeries? I don't see anything in the manuscript about ElectricalTimeSeries and Auditory TimeSeries, so this might fall under "not yet implemented", but it's somewhat similar to the Spike Raster Plot in Figure 3. So even if it is outside of the scope, it would be nice to have within the manuscript a table (or some such) of which kinds of data are implemented, which are on the roadmap, and which are perhaps not planned to be implemented.
TimeSeries objects are usually supported in Neurosift, but the one you found is unusual in that it does not have numeric values (see screenshot below). That's why it does not display. I have opened an issue for this to provide a helpful message when this happens.
Thanks for the suggestion about the table of supported data types. I will work on that.
@alexrockhill I have created a table of neurodata types (linked from main README) that lists the available neurosift plugins. The goal is to eventually support all types with plugins.
I have also resolved the above issue.
There is a simple "Seeking Help" section in the main README.
I saw that and it looks great, except that if someone has a question like "does Neurosift do x?" or "how can I do x with Neurosift?", the community standard is not to have those types of questions as issues (issues being for bugs and enhancements). There's the newish GitHub Discussions feature, which is an easy option for this. I think ideally you would not want someone posting to issues seeking help, so it would be great to clarify. Other projects like jsPsych use GitHub Discussions (https://www.jspsych.org/7.3/support/support/), Discourse is another option (e.g. https://mne.discourse.group/), or a mailing list is the most classic.
There is a simple "Seeking Help" section in the main README.
I saw that and it looks great, except that if someone has a question like "does Neurosift do x?" or "how can I do x with Neurosift?", the community standard is not to have those types of questions as issues (issues being for bugs and enhancements). There's the newish GitHub Discussions feature, which is an easy option for this. I think ideally you would not want someone posting to issues seeking help, so it would be great to clarify. Other projects like jsPsych use GitHub Discussions (https://www.jspsych.org/7.3/support/support/), Discourse is another option (e.g. https://mne.discourse.group/), or a mailing list is the most classic.
Thanks! I have added a link to Discussions and made an initial welcome post there.
You are right, there are no automated tests at this point. But the way I read the instructions, this is not strictly necessary. It does have "manual steps described so that the functionality of the software can be verified". The main README contains several ways that you can verify that the software is functioning. If you want to test that the software works for visualizing a local .nwb file, you could download a relatively small example from here. If it would be helpful I can update the readme to point to that example.
I think following the letter here it would be nice to have specific standard instructions for how to run through a test. If I were to add a PR for a feature for instance, how would I know that I hadn't broken anything?
I think following the letter here it would be nice to have specific standard instructions for how to run through a test. If I were to add a PR for a feature for instance, how would I know that I hadn't broken anything?
Okay, I'll see what I can do!
I think following the letter here it would be nice to have specific standard instructions for how to run through a test. If I were to add a PR for a feature for instance, how would I know that I hadn't broken anything?
@alexrockhill I have added a table of tests for many of the view plugins. In the developer instructions there is now information on how to bring this table up for a local development server, which will allow you to see if you have broken anything as you improve the existing views or add new views.
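Roughly, bringing up a local development server looks something like the sketch below (the exact commands and script names are in the developer instructions, so treat this as an outline of a standard npm workflow rather than the authoritative steps):

```bash
# Sketch only -- see the developer instructions in the repo for the authoritative steps
git clone --recurse-submodules https://github.com/flatironinstitute/neurosift
cd neurosift/gui/fi-sci
npm install
npm run dev   # script name assumed; check package.json if it differs
```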
I don't see the table of tests when I follow the instructions, unfortunately
@alexrockhill I wonder if you have pulled the latest code in the submodule?
git pull --recurse-submodules
Otherwise, looks great with the tests. I think you should spell out that contributors should compare the output with that on the main branch in order to make sure that their changes didn't break anything, although I understand that this is fairly obvious.
One recommendation that I have is combining (probably copying, in addition to the user-facing table) the table of supported plugins with the tests, so that there is a test for each supported plugin. I am trying to check this point in the review: "Functionality: Have the functional claims of the software been confirmed?"
For this other point, "Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?", it would help to extend the table to have examples of datasets that demonstrate the use of each of the plugins. Perhaps this list is the same as that of the tests, and maybe it would stay that way, but they could also become different (e.g. smaller, lighter datasets in the tests that load quickly for quick verification).
Lastly, in checking a few other datasets, should you be able to view this dataset https://neurosift.app/?p=/nwb&dandisetId=000576&dandisetVersion=0.231010.1811&url=https://api.dandiarchive.org/api/assets/57485b5a-e259-4759-a17a-4bef1e7f3886/download/&tab=neurodata-item:/processing/ecephys/LFP/ecephys.ieeg|ElectricalSeries relative to the events like in Figure 2 of the paper or is that not part of the functionality claims? Either way, I think a bit of clarification in the paper as to whether you are supposed to be able to just view the hierarchical data and raw traces, or whether basic preliminary event-locked analyses are available for all supported data types, would be helpful.
That concludes my review, I am strongly in favor of publication in JOSS, excellent work and a very helpful software tool that I'm sure will be widely adopted and useful to many people. The comments I have made are minor and mostly relate to documentation so that people can better understand and appreciate the functionality of this tool (and also a small comment about making contributing easier). Cheers!
@alexrockhill I wonder if you have pulled the latest code in the submodule?
git pull --recurse-submodules
```
alexrockhill@james:~/Downloads/neurosift$ git pull --recurse-submodules
Fetching submodule gui/fi-sci
Already up to date.
```
Thanks so much @alexrockhill ! Your comments and suggestions have been really helpful for polishing the software and docs.
I fixed the issue with the tests page on the dev server. I had neglected to update the submodule pointer in the main repo.
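For the record, bumping the submodule pointer amounts to something like this (a sketch, assuming the gui/fi-sci submodule tracks its default branch):

```bash
# Sketch of updating the gui/fi-sci submodule pointer in the main repo
cd gui/fi-sci
git fetch origin && git checkout origin/main   # assumes the default branch is "main"
cd ../..
git add gui/fi-sci
git commit -m "Update fi-sci submodule pointer"
git push
```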
I think you should spell out that contributors should compare the output with that on the main branch in order to make sure that their changes didn't break anything although I understand that this is fairly obvious.
This is done.
Lastly, in checking a few other datasets, should you be able to view this dataset https://neurosift.app/?p=/nwb&dandisetId=000576&dandisetVersion=0.231010.1811&url=https://api.dandiarchive.org/api/assets/57485b5a-e259-4759-a17a-4bef1e7f3886/download/&tab=neurodata-item:/processing/ecephys/LFP/ecephys.ieeg|ElectricalSeries relative to the events like in Figure 2 of the paper or is that not part of the functionality claims?
If I am understanding this correctly... yes you can create a synchronized view like this https://neurosift.app/?p=/nwb&dandisetId=000576&dandisetVersion=0.231010.1811&url=https://api.dandiarchive.org/api/assets/57485b5a-e259-4759-a17a-4bef1e7f3886/download/&tab=neurodata-items:neurodata-item:/processing/ecephys/LFP/ecephys.ieeg|ElectricalSeries@neurodata-item:/intervals/trials|TimeIntervals&tab-time=0,360.80789424126243,95.9129592790732 I need to update the docs to clarify how you create this view -- you need to click two checkboxes and then click the "view 2 items" button on the left. I will continue to work on this.
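In case the URL structure is useful: judging from that link, a synchronized view simply joins two neurodata items with an "@" inside a single neurodata-items tab. The sketch below is reconstructed from the link above, not an official URL specification:

```bash
# Reconstructed from the link above; not an official URL specification
BASE="https://neurosift.app/?p=/nwb&dandisetId=000576&dandisetVersion=0.231010.1811"
ASSET="https://api.dandiarchive.org/api/assets/57485b5a-e259-4759-a17a-4bef1e7f3886/download/"
ITEM1="neurodata-item:/processing/ecephys/LFP/ecephys.ieeg|ElectricalSeries"
ITEM2="neurodata-item:/intervals/trials|TimeIntervals"
echo "${BASE}&url=${ASSET}&tab=neurodata-items:${ITEM1}@${ITEM2}"
```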
Yes, that's exactly what I was talking about, but, unfortunately, I don't see the data, it doesn't appear for some reason.
@alexrockhill When you click on my link you don't see what is in my screenshot? If you are not able to, then I'm not sure how to explain that. But if you do see it... notice the message "Zoom in" to see the data. You can also press Shift+O (documented behind the question mark in the left panel) to override.
Thought I would nudge this review since there hasn't been any activity for a couple weeks. Thanks! @mstimberg
Hi @magland , sorry, I didn't want to do one while Alex and you were going through things. I'll do mine this week.
@alexrockhill When you click on my link you don't see what is in my screenshot? If you are not able to, then I'm not sure how to explain that. But if you do see it... notice the message "Zoom in" to see the data. You can also press Shift+O (documented behind the question mark in the left panel) to override.
Ah yes, my bad, I was zooming in and not seeing the data, but I just had to zoom in more. It's a bit disconcerting not to have any data shown when you first load the view, although the message is helpful. Is there lazy loading, and would it be feasible to show a highly decimated version when zoomed out? This isn't necessary for functionality, but I think it would be nice for the design.
Let me know when you're able to add example links to https://github.com/flatironinstitute/neurosift/blob/main/doc/neurodata_types.md and I will check off all the items in my review. I just think the project needs this documentation in order for users to fully understand the functionality and for contributors to have a somewhat comprehensive list of examples to check that they didn't break when they add a PR.
Thanks for the nudge @magland, I have indeed not been following as closely as I should. @alexrockhill, thanks for all your comments so far, and @sanjayankur31, looking forward to your review. @jodeleeuw Do you have an estimate as to when you will be able to take a look at the paper/software?
Apologies for my delay. I will get to this by the end of this week!
Let me know when you're able to add example links to https://github.com/flatironinstitute/neurosift/blob/main/doc/neurodata_types.md and I will check off all the items in my review. I just think the project needs this documentation in order for users to fully understand the functionality and for contributors to have a somewhat comprehensive list of examples to check that they didn't break when they add a PR.
@alexrockhill I put in some links in cases where I could find a reasonable public example on DANDI. LMK if this is sufficient.
Excellent, thank you for adding those, I've marked all the items on my review.
You are right, there are no automated tests at this point. But the way I read the instructions, this is not strictly necessary. It does have "manual steps described so that the functionality of the software can be verified". The main README contains several ways that you can verify that the software is functioning. If you want to test that the software works for visualizing a local .nwb file, you could download a relatively small example from here. If it would be helpful I can update the readme to point to that example.
I think it would be helpful to add the link to the file. I'm not a user of NWB, so I'm not the target audience here, but this did trip me up when trying to get started. I could imagine it being useful for folks who just want to quickly try the local version out but don't have a small file handy.
For documentation, I think it would be helpful to have an entry point in the main README that describes what documentation is available and includes a suggestion about where to get started for undocumented features. I wasn't sure where to look to find documentation initially, and ultimately got there through the link to the developer setup instructions in the README.
You are right, there are no automated tests at this point. But the way I read the instructions, this is not strictly necessary. It does have "manual steps described so that the functionality of the software can be verified". The main README contains several ways that you can verify that the software is functioning. If you want to test that the software works for visualizing a local .nwb file, you could download a relatively small example from here. If it would be helpful I can update the readme to point to that example.
I think it would be helpful to add the link to the file. I'm not a user of NWB, so I'm not the target audience here, but this did trip me up when trying to get started. I could imagine it being useful for folks who just want to quickly try the local version out but don't have a small file handy.
@jodeleeuw I have added the following to the README
If you do not have an NWB file but want to try out this functionality, you can download one of these relatively small files from DANDI.
For documentation, I think it would be helpful to have an entry point in the main README that describes what documentation is available and includes a suggestion about where to get started for undocumented features. I wasn't sure where to look to find documentation initially, and ultimately got there through the link to the developer setup instructions in the README.
Thanks for the suggestion @jodeleeuw. I have reorganized the README a bit and there is a new section describing the available documentation.
Let me know if you notice anything missing from there.
Hi @magland , looks very good. I just had some trouble running a local instance to view local files. The rest looks good to me :+1:
I wasn't able to view local files, for example, this one:
```
$ python -m venv .venv
$ source .venv/bin/activate
$ python --version
Python 3.12.3
$ neurosift view-nwb ./FergusonEtAl2015.nwb
npm version: 10.5.0
node version: v20.12.2

added 64 packages, and audited 65 packages in 698ms
12 packages are looking for funding
  run `npm fund` for details
found 0 vulnerabilities

Opening https://flatironinstitute.github.io/neurosift/?p=/nwb&url=http://localhost:53235/files/FergusonEtAl2015.nwb

> local-file-access-js@0.0.1 start
> node src/index.js /tmp/view_nwb_0yAeWVax

Serving files in /tmp/view_nwb_0yAeWVax
Serving files in /tmp/view_nwb_0yAeWVax on port 53235.
15:16:52 INFO: Opening in existing instance
Error sending file: Error: Request aborted
    at onaborted (/home/asinha/dump/neurosift/.venv/lib/python3.12/site-packages/neurosift/local-file-access-js/node_modules/express/lib/response.js:1061:15)
    at onfinish (/home/asinha/dump/neurosift/.venv/lib/python3.12/site-packages/neurosift/local-file-access-js/node_modules/express/lib/response.js:1097:50)
    at AsyncResource.runInAsyncScope (node:async_hooks:206:9)
    at listener (/home/asinha/dump/neurosift/.venv/lib/python3.12/site-packages/neurosift/local-file-access-js/node_modules/on-finished/index.js:170:15)
    at onFinish (/home/asinha/dump/neurosift/.venv/lib/python3.12/site-packages/neurosift/local-file-access-js/node_modules/on-finished/index.js:101:5)
    at callback (/home/asinha/dump/neurosift/.venv/lib/python3.12/site-packages/neurosift/local-file-access-js/node_modules/ee-first/index.js:55:10)
    at Socket.onevent (/home/asinha/dump/neurosift/.venv/lib/python3.12/site-packages/neurosift/local-file-access-js/node_modules/ee-first/index.js:93:5)
    at Socket.emit (node:events:530:35)
    at emitErrorNT (node:internal/streams/destroy:169:8)
    at emitErrorCloseNT (node:internal/streams/destroy:128:3) {
  code: 'ECONNABORTED'
}
```
The app does open:
https://neurosift.app/?p=/nwb&url=http://localhost:53235/files/FergusonEtAl2015.nwb
This is on a Fedora Linux 40 system. I thought it may be firewall related, but stopping firewalld didn't do anything.
I wasn't able to view the file using the live site either. This didn't quite seem to work:
The page seems to stop at Loading https://github.com/OpenSourceBrain/NWBShowcase/raw/master/FergusonEtAl2015/FergusonEtAl2015.nwb
Devtools show errors of this form:
```
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://github.com/OpenSourceBrain/NWBShowcase/raw/master/FergusonEtAl2015/FergusonEtAl2015.nwb. (Reason: CORS header 'Access-Control-Allow-Origin' does not match '').
```
I haven't been able to check with NWB files hosted elsewhere yet, though, so this could be specific to GitHub.
The live site works well, but I didn't see a way of loading local files there. (Viewing local files is a little important because that'll allow users to use the tool during the research cycle, before they've published their data.)
In addition to NWB Widgets, it's also worth looking at NWB Explorer, which aims to do something similar. It does not include streaming support at the moment, though.
General:
- has there been an attempt to contribute the improvements to h5wasm back to upstream so that the benefits are available to the larger community too?
Hi @magland , looks very good. I just had some trouble running a local instance to view local files. The rest looks good to me 👍
Thanks @sanjayankur31
I wasn't able to view local files, for example, this one : ...
This was a CORS issue. It should be fixed now, with neurosift on PyPI at version 0.2.8. I verified that it worked in both Chrome and Firefox using the following command:
neurosift view-nwb FergusonEtAl2015.nwb
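In case it helps to reproduce the check, the full sequence is roughly the following (assuming pip is available; the file is the same one linked above):

```bash
# Upgrade to the fixed release and re-open the local file
pip install --upgrade "neurosift>=0.2.8"
neurosift view-nwb FergusonEtAl2015.nwb   # opens the viewer in the default browser
```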
I wasn't able to view the file using the live site either. This didn't quite seem to work:
The page seems to stop at
Loading https://github.com/OpenSourceBrain/NWBShowcase/raw/master/FergusonEtAl2015/FergusonEtAl2015.nwb
This is also a CORS issue. While there is no way to get that particular URL to work due to the github server configuration being beyond our control, this other URL to the same file appears to work
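(A quick way to check whether a given host will work is to look for the CORS header directly. The raw.githubusercontent.com URL below is only an illustrative guess at an alternative form of the same file, not necessarily the working link referred to above:)

```bash
# Check whether a host sends the CORS header that browser-based loading needs
# (the URL is an illustrative guess, not necessarily the working link mentioned above)
curl -sI "https://raw.githubusercontent.com/OpenSourceBrain/NWBShowcase/master/FergusonEtAl2015/FergusonEtAl2015.nwb" \
  | grep -i "access-control-allow-origin"
```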
In addition to NWB Widgets, it's also worth looking at NWB Explorer, which aims to do something similar. It does not include streaming support at the moment, though.
Thank you. I have added this to a new "See also" section in the readme.
General:
- has there been an attempt to contribute the improvements to h5wasm back to upstream so that the benefits are available to the larger community too?
I agree that this could be beneficial to the larger community, and I have explored this possibility. Unfortunately, my modifications were to the emscripten-generated code, so if this were to be done properly, the PR would need to go to emscripten, which would require a lot more effort than I am prepared to expend at this point. Very low-level modifications were required to achieve efficient reading of chunks from the remote file.
Thanks @magland , it all looks good to me now :+1:
Thanks everyone for the reviews! The software, especially the documentation, improved a lot during the process.
@mstimberg what's the next step?
@bendichter @CodyCBakerPhD @jsoules
Many thanks @alexrockhill, @jodeleeuw, and @sanjayankur31 for your detailed and constructive reviews. It seems that you all recommend the publication of the software/paper in its current form (if not, please let me know :smile: ). @sanjayankur31, one item in your checklist is still unchecked, but I take your earlier comment as indicating that you are happy with everything.
@magland The next steps are some final checks by myself and preparing everything for the actual publication process. I will create a checklist in the next comment with some action items for you (and some more for me).
- @editorialbot set <DOI here> as archive
- @editorialbot set <version here> as version
- @editorialbot generate pdf
- @editorialbot check references and ask author(s) to update as needed
- @editorialbot recommend-accept
Submitting author: @magland (Jeremy Magland)
Repository: https://github.com/flatironinstitute/neurosift
Branch with paper.md (empty if default branch): paper
Version: v0.2.8
Editor: @mstimberg
Reviewers: @alexrockhill, @jodeleeuw, @sanjayankur31
Archive: 10.5281/zenodo.11192864
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@alexrockhill & @jodeleeuw & @sanjayankur31, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:
@editorialbot generate my checklist
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @mstimberg know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @jodeleeuw
📝 Checklist for @sanjayankur31
📝 Checklist for @alexrockhill