AllenInstitute / openscope_databook

OpenScope databook: a collaborative, versioned, data-centric collection of foundational analyses for reproducible systems neuroscience 🐁🧠🔬🖥️📈
https://alleninstitute.github.io/openscope_databook

Review from jiaxx lab #394

Closed Laohusong closed 16 hours ago

Laohusong commented 1 month ago

**Review from jiaxx lab**

The databooks are nice and efficient! I made some small changes, which you can find on the committed "jiaxx review" branch.

(I ran the code locally on this machine: OS: Windows 10 Professional, version 22H2; CPU: 13th Gen Intel(R) Core(TM) i7-13700KF @ 3.40 GHz; RAM: 64.0 GB; GPU: NVIDIA RTX 3060; virtual environment: conda with Python 3.9, just as the instructions require.)

But when I tried this, the kernel raised an error:

The Kernel crashed while executing code in the current cell or a previous cell. 
Please review the code in the cell(s) to identify a possible cause of the failure. 
Click here for more info. 
View Jupyter log for further details.

and here is the portion of the log file where the error was raised:

10:38:26.397 [warn] Cell completed with errors hd [Error]: name 'HDF5IO' is not defined
    at r.execute (c:\Users\~\.vscode\extensions\ms-toolsai.jupyter-2024.4.0-win32-x64\dist\extension.node.js:311:4943) {
  ename: 'NameError',
  evalue: "name 'HDF5IO' is not defined",
  traceback: [
    '---------------------------------------------------------------------------',
    'NameError                                 Traceback (most recent call last)',
    'Cell In[6], line 1',
    '----> 1 HDF5IO(filepath, mode="r", load_namespaces=True)',
    '      2 nwb = io.read()',
    '      4 print(nwb)',
    "NameError: name 'HDF5IO' is not defined"
  ]
}

I haven't found the reason; it may be related to the NWB file itself, because when I tested our own NWB file in the same virtual environment, it worked fine.
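(For context, the `NameError` just means the `HDF5IO` class was never imported, or the import cell wasn't run, in that kernel; the failing cell also never assigns the reader to `io` before calling `io.read()`. A minimal self-contained sketch of the read cell, assuming pynwb's `NWBHDF5IO` is the intended reader; the databook's actual import may differ:)

```python
from pynwb import NWBHDF5IO

# filepath is assumed to already point at the downloaded NWB file
io = NWBHDF5IO(filepath, mode="r", load_namespaces=True)
nwb = io.read()

print(nwb)
```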

AttributeError                            Traceback (most recent call last)
File ~\AppData\Roaming\Python\Python39\site-packages\ipywidgets\widgets\widget.py:757, in Widget._handle_msg(self, msg)
    755         if 'buffer_paths' in data:
    756             _put_buffers(state, data['buffer_paths'], msg['buffers'])
--> 757         self.set_state(state)
    759 # Handle a state request.
    760 elif method == 'request_state':

File ~\AppData\Roaming\Python\Python39\site-packages\ipywidgets\widgets\widget.py:626, in Widget.set_state(self, sync_data)
    623 if name in self.keys:
    624     from_json = self.trait_metadata(name, 'from_json',
    625                                     self._trait_from_json)
--> 626     self.set_trait(name, from_json(sync_data[name], self))

File c:\Users\laohu\anaconda3\envs\databook\lib\contextlib.py:126, in _GeneratorContextManager.__exit__(self, typ, value, traceback)
    124 if typ is None:
    125     try:
--> 126         next(self.gen)
    127     except StopIteration:
    128         return False

File ~\AppData\Roaming\Python\Python39\site-packages\traitlets\traitlets.py:1371, in HasTraits.hold_trait_notifications(self)
   1369 for changes in cache.values():
   1370     for change in changes:
-> 1371         self.notify_change(change)

File ~\AppData\Roaming\Python\Python39\site-packages\ipywidgets\widgets\widget.py:687, in Widget.notify_change(self, change)
    684     if name in self.keys and self._should_send_property(name, getattr(self, name)):
    685         # Send new state to front-end
    686         self.send_state(key=name)
--> 687 super(Widget, self).notify_change(change)

File ~\AppData\Roaming\Python\Python39\site-packages\traitlets\traitlets.py:1386, in HasTraits.notify_change(self, change)
   1384 def notify_change(self, change):
   1385     """Notify observers of a change event"""
-> 1386     return self._notify_observers(change)

File ~\AppData\Roaming\Python\Python39\site-packages\traitlets\traitlets.py:1431, in HasTraits._notify_observers(self, event)
   1428 elif isinstance(c, EventHandler) and c.name is not None:
   1429     c = getattr(self, c.name)
-> 1431 c(event)

File ~\AppData\Roaming\Python\Python39\site-packages\nwbwidgets\base.py:161, in lazy_tabs.<locals>.on_selected_index(change)
    159 def on_selected_index(change):
    160     if isinstance(change.owner.children[change.new], widgets.HTML):
--> 161         children[change.new] = vis2widget(tabs_spec[change.new][1](node))
    162         change.owner.children = children

File ~\AppData\Roaming\Python\Python39\site-packages\nwbwidgets\misc.py:1188, in RasterGridWidget.__init__(self, units, trials, unit_index, units_trials_controller)
   1179     units_trials_controller = UnitsAndTrialsControllerWidget(
   1180         units=units,
   1181         trials=trials,
   1182         unit_index=unit_index
   1183     )
   1184     self.children = [units_trials_controller]
   1186 self.fig = interactive_output(
   1187     f=raster_grid,
   1188     controls=units_trials_controller.controls,
   1189     fixed=units_trials_controller.fixed
   1190 )
   1192 self.children += tuple([self.fig])

AttributeError: 'UnitsAndTrialsControllerWidget' object has no attribute 'controls'



I also tested with our own NWB file, and the same error was raised. One possible reason is that the NWB file doesn't include that information. So I changed the `default_neurodata_vis_spec` and displayed the nwbwidget with the 'Raster Grid', 'Tuning Curves', 'Combined', and 'Grouped PSTH' tabs hidden ('Grouped PSTH' is also hidden because the NWB file doesn't contain its information either). This may help in exploring the widget freely; see the sketch below.
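(For illustration, a minimal sketch of hiding those tabs, assuming the tab names above match the keys of nwbwidgets' spec for `Units`; this is not the exact change on the "jiaxx review" branch:)

```python
from pynwb.misc import Units
from nwbwidgets import nwb2widget
from nwbwidgets.view import default_neurodata_vis_spec

# Copy the spec so the library default stays untouched, then drop the tabs
# that fail when the file lacks the required trial/stimulus information.
vis_spec = dict(default_neurodata_vis_spec)
units_spec = dict(vis_spec[Units])
for tab in ("Raster Grid", "Tuning Curves", "Combined", "Grouped PSTH"):
    units_spec.pop(tab, None)
vis_spec[Units] = units_spec

nwb2widget(nwb, vis_spec)  # nwb: an NWBFile already read with NWBHDF5IO
```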

- first order - receptive fields
3. **Getting Fields**: the final output displays a list of `None` values before the image, which comes from the last two lines of code. I used for loops as a replacement to hide the `None` list; see the sketch below.
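(As a generic illustration, with hypothetical names rather than the notebook's actual variables: a bare list comprehension at the end of a cell echoes its list of return values, while an equivalent for loop draws the same figures silently.)

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical stand-ins for the notebook's receptive-field arrays.
fields = [np.random.rand(8, 8) for _ in range(4)]

def show_field(ax, rf):
    ax.imshow(rf)          # no return value, like the notebook's helper

fig, axes = plt.subplots(1, len(fields))

# Ending a cell with this comprehension echoes [None, None, None, None]:
# [show_field(ax, rf) for ax, rf in zip(axes, fields)]

# The for-loop form draws the same panels with no stray output:
for ax, rf in zip(axes, fields):
    show_field(ax, rf)
plt.show()
```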

- higher order - cebra time
4. **Create Environment**: when importing cebra, an error is raised that torch is not installed, even though I completely followed the environment setup instructions on the PC. Maybe torch should be added to the requirements list, with a suggestion to install the GPU version.
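(After installing torch, a quick sanity check that the import works and whether a CUDA build is active might look like the following; just an illustrative snippet, not part of the databook:)

```python
import torch

# Verify the dependency CEBRA needs, and whether the GPU build is usable.
print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```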

- Visualization - Visualizing Neuropixel Probes (now Visualizing **Neuropixels** Probes)
5. The correct name for the probes is "Neuropixels", not "Neuropixel". I corrected the names in this notebook, but the same corrections should be made in other files.
6. **Rendering**: the displayed CCF widget is cool indeed, but I have to point out that the rendered probe locations are completely wrong! The probes lie in the same plane and even intersect at certain points. I tested not only the default data but also the `dandiset_id = "000021"`, `dandi_filepath = "sub-699733573/sub-699733573_ses-715093703.nwb"` one; both have the same issue.
![](https://lh7-us.googleusercontent.com/3wuiy1WRLU5hhjr4hcvzGwDIL_DTPJeaQ6wPHAoRj5Vzd9zTe-MK5DhOFQr4forY-SPqwTbwTwBuTdm6EumldINOMLEpOi_j3LWbCg7U7pdNuspSn9Leipl0O9R4K6m7YyEUwS2WxBj4xrekkTmdUcU)
![](https://lh7-us.googleusercontent.com/8zDHTCk6JEpen_f1KNgAZnoTQqYgLHb5mccRCCK4oXHMGhkMf6MO9A1q2iCAMAUwwtNEG4nqTB3ciCcoSTCqwz1SOdMSgORVp3U0r-PTKwaXMiV52YfMmvHPbnOBkOj2NRYVjUrJ3pyoB0fxrH_JsP8)
(You can see from this figure that they lie in the same plane.)

I then drew the probe locations without the CCF widget, just as a 3D graph with plotly, and the problem still exists. This may indicate that the problem is caused by the data and that the CCF widget itself works well. (A sketch of such a standalone plot follows below.)
![](https://lh7-us.googleusercontent.com/OjJWFv32-np1au1MEC547dvgzLvvOkt7QlxSoW-1s7M4w6bvXZKOlWgy2T_nuRvU5KzX2puHBYR0iruNsWAEAtaKMq5vI3Yq_9RnzZu9AjmpROQQF_SnDw3pr6EePt9k98FY6AbS-Piy5wYXBYhCCJQ)![](https://lh7-us.googleusercontent.com/7YMUvwzwtdbZYfSgfrXwEWqPYS5cSoXNcUkDvEXK-_tFJyuwk2amFRZFXrUJ8C-4ZonmWsM0yzRhq61CtT9pC7fbMJDJ5VeZwnHmMzVF7tGTuxAl8qlO-IHDKy1Gcx61p2D3kPYKJAzdy_L3KzIN6cc)
Either the raw data itself has problems, or the probe points need some transformation before being drawn. If it is the former, the default example should be changed and users should be warned about the possible problems with other data. Our lab can provide some correct examples if needed.
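(For reference, a minimal sketch of the standalone 3D check mentioned above, assuming the electrode coordinates live in the NWB electrodes table under columns `x`, `y`, `z` and `group_name`; those column names are an assumption, since some files store CCF coordinates under different names:)

```python
import plotly.graph_objects as go

# nwb: an NWBFile already read with NWBHDF5IO
electrodes = nwb.electrodes.to_dataframe()

fig = go.Figure()
for probe_name, probe_df in electrodes.groupby("group_name"):
    # One trace per probe so the probes can be told apart in the legend.
    fig.add_trace(go.Scatter3d(
        x=probe_df["x"], y=probe_df["y"], z=probe_df["z"],
        mode="markers", marker=dict(size=2), name=probe_name,
    ))
fig.show()
```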

- Visualization - visualize unit responses
7. **Showing Spike Times**: the "unit spikes over time" plot looks a bit strange in that the x tick labels are not regular and do not even include the zero point. I made small changes to the x values to avoid this; a generic sketch follows below.
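(Not the exact change I committed, but one generic way to force regular tick marks that include zero, assuming a matplotlib axis and a time window in seconds:)

```python
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# ... plot the spike counts over time here ...

t_start, t_stop, step = 0.0, 120.0, 20.0   # assumed time window and tick spacing
ax.set_xticks(np.arange(t_start, t_stop + step, step))
ax.set_xlim(t_start, t_stop)
plt.show()
```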

- Visualization - visualize unit metrics
8. **Quality**: the explanations of the noise-template results may be confusing, because newcomers may mistake them for the final quality classification given the simple property name "quality". I added more explanation of where it comes from and how it differs from other properties.

In a recent commit, I noticed that "quality" is attributed to the output of Kilosort, which is wrong. Kilosort provides "KSLabel", which is different from the noise-template results.
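(For readers who want to see the distinction themselves, a quick inspection of the units table, assuming both columns are present in the file, which is not guaranteed:)

```python
units = nwb.units.to_dataframe()           # nwb: an NWBFile already read in

print(units["quality"].value_counts())     # noise-template classification
if "KSLabel" in units.columns:             # Kilosort's own label, when kept
    print(units["KSLabel"].value_counts())
```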
Laohusong commented 1 month ago

Another thing worth mentioning: I reviewed the code based on a version from 11 days ago, which is now about 80 commits behind. Please take care before merging.

rcpeene commented 4 weeks ago

> What's worse, I don't know how long it will take and how much I have already downloaded.

We've had this issue open for a while. Perhaps worth revisiting https://github.com/dandi/dandi-cli/issues/1159

> I haven't found the reason; it may be related to the NWB file itself, because when I tested our own NWB file in the same virtual environment, it worked fine.

Yes, we have tried this in the past and not had success. I should open an issue on the PyNWB GitHub.

> Maybe torch should be added to the requirements list, with a suggestion to install the GPU version

Will try this

> Rendering: the displayed CCF widget is cool indeed, but I have to point out that the rendered probe locations are completely wrong!

This is a known problem with our older data. As we didn't have any publicly available Neuropixels data, we had to use dandiset 000021. We should change this now.

rcpeene commented 4 weeks ago

> when importing cebra, an error is raised that torch is not installed. I completely followed the environment setup instructions on the PC

I'm not able to replicate this on my machine. Was CEBRA properly installed?

Laohusong commented 4 weeks ago

> We've had this issue open for a while. Perhaps worth revisiting https://github.com/dandi/dandi-cli/issues/1159

I am very happy to see that you already requested a loading bar. High five! The best solution would be for dandi-cli itself to support a loading bar; if it doesn't, it may be worth changing the download functions in dandi_utils, I think. (A rough sketch of what that could look like is below.)
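(For example, a rough sketch under the assumption that dandi-cli does not expose a progress bar directly: look the asset up with the DANDI API client, stream its content URL, and report progress with tqdm. This is an illustration, not the databook's actual dandi_utils code.)

```python
import requests
from tqdm import tqdm
from dandi.dandiapi import DandiAPIClient

def download_with_progress(dandiset_id, dandi_filepath, outpath):
    # Resolve the asset on DANDI, then stream its bytes with a progress bar.
    with DandiAPIClient() as client:
        asset = client.get_dandiset(dandiset_id).get_asset_by_path(dandi_filepath)
        url, size = asset.download_url, asset.size
    with requests.get(url, stream=True) as r, open(outpath, "wb") as f, \
         tqdm(total=size, unit="B", unit_scale=True, desc=dandi_filepath) as bar:
        r.raise_for_status()
        for chunk in r.iter_content(chunk_size=1 << 20):
            f.write(chunk)
            bar.update(len(chunk))

# download_with_progress("000021", "sub-699733573/sub-699733573_ses-715093703.nwb", "session.nwb")
```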

> This is a known problem with our older data. As we didn't have any publicly available Neuropixels data, we had to use dandiset 000021. We should change this now.

So it is a problem with the old data. It is also an option to simply tell users about the problem if newer, correct data is not available.

> I'm not able to replicate this on my machine. Was CEBRA properly installed?

CEBRA is OK, because after I simply installed torch in the env, the notebook worked. Does your PC have torch installed in the outer environment? Anyway, users should be guided to install the GPU version of torch in advance; after that, the problem doesn't matter.

rcpeene commented 2 weeks ago

I have swapped out the dandiset used for the probe visualization notebook and added torch to the requirements.txt on the latest dev branch.

rcpeene commented 21 hours ago

Any additional feedback, @Laohusong? Looking for permission to close this.

Laohusong commented 16 hours ago

Ok, I have closed it.