AllenInstitute / openscope_databook

OpenScope databook: a collaborative, versioned, data-centric collection of foundational analyses for reproducible systems neuroscience 🐁🧠🔬🖥️📈
https://alleninstitute.github.io/openscope_databook

Review: suggestions for "Visualizing NWB Files" section #379

Closed: stephprince closed this issue 6 days ago

stephprince commented 2 months ago

Here is my feedback on the "Visualizing NWB Files" section. I thought all the figures looked very nice and were very informative. I will push any small markdown changes to the scientific_review branch, along with any from the "Basics" section.

### Visualizing Raw 2-Photon Images

### Visualizing Neuropixel Probe Locations
* Running this notebook, the `ccfwidget` import fails with the following error:

```
File ~/anaconda3/envs/openscope-databook-2/lib/python3.10/site-packages/ccfwidget/__init__.py:7
      3 __version__ = '0.5.3'
      5 __all__ = ['CCFWidget']
----> 7 from .widget_ccf import CCFWidget

File ~/anaconda3/envs/openscope-databook-2/lib/python3.10/site-packages/ccfwidget/widget_ccf.py:7
      5 from ipywidgets import HBox, VBox, register, link, RadioButtons, Checkbox, Output
      6 import itk
----> 7 from itkwidgets import view
      8 import numpy as np
      9 from traitlets import CFloat, CInt, List, Unicode, validate

File ~/anaconda3/envs/openscope-databook-2/lib/python3.10/site-packages/itkwidgets/__init__.py:13
      1 __all__ = ['__version_info__', '__version__',
      2            'Viewer', 'view',
      3            'checkerboard',
   (...)
      8            'lut',
      9            '_jupyter_nbextension_paths']
     11 from ._version import __version_info__, __version__
---> 13 from .widget_viewer import Viewer, view
     14 from .widget_compare import compare
     15 from .widget_checkerboard import checkerboard

File ~/anaconda3/envs/openscope-databook-2/lib/python3.10/site-packages/itkwidgets/widget_viewer.py:19
     17 from traitlets import CBool, CFloat, CInt, Unicode, CaselessStrEnum, List, validate, TraitError, Tuple
     18 from ipydatawidgets import NDArray, array_serialization, shape_constraints
---> 19 from .trait_types import ITKImage, ImagePointTrait, ImagePoint, PointSetList, PolyDataList, itkimage_serialization, image_point_serialization, polydata_list_serialization, Colormap, LookupTable, Camera
     21 try:
     22     import ipywebrtc

File ~/anaconda3/envs/openscope-databook-2/lib/python3.10/site-packages/itkwidgets/trait_types.py:25
     22 except ImportError:
     23     pass
---> 25 from ._transform_types import to_itk_image, to_point_set, to_geometry
     26 from ipydatawidgets import array_serialization, NDArray
     28 # from IPython.core.debugger import set_trace

File ~/anaconda3/envs/openscope-databook-2/lib/python3.10/site-packages/itkwidgets/_transform_types.py:117
    112     process_group(group, result)
    113     return result
    116 _itk_pixel_to_vtkjs_type_components = {
--> 117     itk.SC: ('Int8Array', 1),
    118     itk.UC: ('Uint8Array', 1),
    119     itk.SS: ('Int16Array', 1),
    120     itk.US: ('Uint16Array', 1),
    121     itk.SI: ('Int32Array', 1),
    122     itk.UI: ('Uint32Array', 1),
    123     itk.F: ('Float32Array', 1),
    124     itk.D: ('Float64Array', 1),
    125 }
    127 def _vtk_to_vtkjs(data_array):
    128     from vtk.util.numpy_support import vtk_to_numpy

AttributeError: module 'itk' has no attribute 'SC'
```



### Visualizing Unit Quality Metrics
* A brief statement on what "units" refer to at the very beginning of this section could be helpful before diving into the metrics.
* **Quality**: It would be helpful to add a sentence specifying where this "noise" classification comes from (straight from Kilosort or a custom pipeline?)
* **Amplitude Cutoff**: The plots illustrating what the amplitude cutoff metric means are very helpful! However, I don't think example unit 94's amplitude cutoff is that much lower than unit 3's. Unit 2348 is the one specified in the markdown text and looks like a good illustration of a low amplitude cutoff; maybe switch back to that one for the second example (see the sketch after this list)?
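
To make that comparison concrete, something like the side-by-side amplitude histograms below could show the difference between the two units. This is only a sketch: `unit_amplitudes` and the way per-spike amplitudes are pulled out of the NWB units table are placeholders, not the notebook's actual variables.

```python
import matplotlib.pyplot as plt

# `unit_amplitudes` is a hypothetical dict mapping unit ID -> 1D array of that unit's
# spike amplitudes; how these are extracted from the NWB file is assumed here.
fig, axes = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(8, 3))
for ax, unit_id in zip(axes, (3, 2348)):
    # a distribution truncated at the low-amplitude end indicates more missed spikes,
    # i.e. a higher amplitude cutoff
    ax.hist(unit_amplitudes[unit_id], bins=100)
    ax.set_title(f"unit {unit_id}")
    ax.set_xlabel("spike amplitude")
axes[0].set_ylabel("spike count")
plt.show()
```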

### Visualizing LFP Responses to Stimulus
* I suggest changing the title to "Visualizing LFP Responses to Stimuli" or "Visualizing LFP Responses to Stimulus Events"
* **LFP Interpolation**: I am a little confused by the motivation behind the LFP interpolation here. Since LFP is usually acquired at a regular sampling rate while stimuli are presented at irregularly spaced intervals, I would have expected the interpolation to happen after selecting the stimulus timestamps, within time windows relative to those irregular timestamps (see the sketch after this list). The sentence "After you have a valid list of stimulus timestamps, you can generate a linearly-spaced timestamp array called `time_axis`, and interpolate the LFP data along it, making interpolated LFP data called `interp_lfp`" seems to imply that approach, but the stimulus timestamps are not actually used until after the LFP has been interpolated from start to end. For the purposes of illustrating stimulus-evoked LFP responses this approach is fine, but in that case maybe remove the "After you have a valid list of stimulus timestamps," phrase from the interpolation section.
* **Visualizing LFP Traces**: It's not necessary, but it might be helpful for these plots to label the x-axis "time relative to stimulus onset (s)" and to add something like a dashed `axvline` in the background at t=0 to mark the stimulus onset (both shown in the sketch below).
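
To illustrate both points, here is a rough sketch of event-aligned interpolation together with the suggested axis labeling. The variable names (`lfp`, `lfp_timestamps`, `stim_times`), the channel choice, and the window size are placeholders, not the notebook's actual code.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical inputs -- names and shapes are assumptions, not the databook's variables:
#   lfp            : (n_samples, n_channels) array of LFP values
#   lfp_timestamps : (n_samples,) array of acquisition times (s), regularly sampled
#   stim_times     : (n_trials,) array of stimulus onset times (s), irregularly spaced
window = (-0.25, 0.75)                              # seconds around each stimulus onset
time_axis = np.linspace(window[0], window[1], 500)

# Interpolate the LFP only within each stimulus-aligned window,
# rather than interpolating the whole recording from start to end.
channel = 0
aligned = np.stack([
    np.interp(stim_time + time_axis, lfp_timestamps, lfp[:, channel])
    for stim_time in stim_times
])

# Plot the trial-averaged response with the suggested labeling.
plt.plot(time_axis, aligned.mean(axis=0))
plt.axvline(0, color="gray", linestyle="--")        # stimulus onset
plt.xlabel("time relative to stimulus onset (s)")
plt.ylabel("LFP (V)")
plt.show()
```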

### Visualizing Neuronal Unit Responses
* **Waveform Image**: "Because the channels are arranged into two rows along the length of a probe, typically a unit is only strongly detected by every other channel." It might be helpful to specify that this pattern is specific to Neuropixels and that data from other types of probes would have different detection patterns. However, since the OpenScope data is all from Neuropixels, I'm not sure this is necessary.
* **Average waveform across channels**: It's not clear to me when you would want the average waveform across all channels, especially if there are hundreds of channels and most of them could not detect the same unit. Could you add a sentence with a use case for this? (For contrast, see the peak-channel sketch below.)
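
To make the contrast concrete, one alternative is to show the waveform on the unit's peak channel rather than the mean over every channel. This is a sketch under an assumed variable name, not the notebook's implementation.

```python
import numpy as np

# `unit_waveform` is a hypothetical (n_channels, n_samples) array holding a unit's mean
# waveform on every channel; the name is a placeholder, not the notebook's variable.
peak_channel = np.argmax(np.ptp(unit_waveform, axis=1))  # channel with the largest peak-to-trough amplitude
peak_waveform = unit_waveform[peak_channel]              # waveform where the unit is best detected
all_channel_mean = unit_waveform.mean(axis=0)            # mean over all channels; mostly dilutes the signal
```
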
rcpeene commented 6 days ago

Looking for permission to close this issue @stephprince. Any additional feedback?

stephprince commented 6 days ago

all good with me!