magland / sortingview

Web app for viewing results of ephys spike sorting
Apache License 2.0

Error creating spikesortingview #191

Closed. khl02007 closed this issue 2 years ago

khl02007 commented 2 years ago

@magland

I'm trying the new kachery-cloud / sortingview v2 system in multiuser mode.

The recording is a BinaryRecordingExtractor and the sorting is an NpzSortingExtractor.
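
For context, a minimal sketch of the kind of script being run here (paths are placeholders and the exact extractor construction is assumed; the workspace calls mirror those visible in the tracebacks below):

```python
import spikeinterface as si
import spikeinterface.extractors as se
import sortingview as sv

# Hypothetical inputs: a binary recording saved to disk and an NPZ sorting (paths are placeholders)
recording = si.load_extractor('/path/to/recording_folder')
sorting = se.NpzSortingExtractor('/path/to/sorting_cached.npz')

W: sv.Workspace = sv.create_workspace(label='franklab_test')
recording_id = W.add_recording(label='recording1', recording=recording)
sorting_id = W.add_sorting(recording_id=recording_id, label='true', sorting=sorting)

# Print the workspace URI for loading at a later time
print(W.uri)  # attribute name assumed; the URI is also printed when the workspace is created

W.set_sorting_curation_authorized_users(sorting_id=sorting_id, user_ids=['kyuhyun9056@gmail.com'])

# Prepare a visualization and print the figURL
url2 = W.spikesortingview(recording_id=recording_id, sorting_id=sorting_id, label='Franklab test workspace')
print(url2)
```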

Error:

Computing sha1 of /stelmo/nwb/recording/fig20211108_.nwb_raw data valid times no premaze no home_4_franklab_tetrode_hippocampus/traces_cached_seg0.raw
Computing sha1 of /stelmo/nwb/recording/fig20211108_.nwb_raw data valid times no premaze no home_4_franklab_tetrode_hippocampus/traces_cached_seg1.raw
Computing sha1 of /stelmo/nwb/recording/fig20211108_.nwb_raw data valid times no premaze no home_4_franklab_tetrode_hippocampus/traces_cached_seg2.raw
Computing sha1 of /stelmo/nwb/recording/fig20211108_.nwb_raw data valid times no premaze no home_4_franklab_tetrode_hippocampus/traces_cached_seg3.raw
Computing sha1 of /stelmo/nwb/recording/fig20211108_.nwb_raw data valid times no premaze no home_4_franklab_tetrode_hippocampus/traces_cached_seg4.raw
Computing sha1 of /stelmo/nwb/recording/fig20211108_.nwb_raw data valid times no premaze no home_4_franklab_tetrode_hippocampus/traces_cached_seg5.raw
Computing sha1 of /stelmo/nwb/recording/fig20211108_.nwb_raw data valid times no premaze no home_4_franklab_tetrode_hippocampus/traces_cached_seg6.raw
Computing sha1 of /stelmo/nwb/recording/fig20211108_.nwb_raw data valid times no premaze no home_4_franklab_tetrode_hippocampus/traces_cached_seg7.raw
Computing sha1 of /stelmo/nwb/recording/fig20211108_.nwb_raw data valid times no premaze no home_4_franklab_tetrode_hippocampus/traces_cached_seg8.raw
Adding recording: R-00feee7bfadb
Computing sha1 of /stelmo/nwb/sorting/fig20211108_.nwb_raw data valid times no premaze no home_4_franklab_tetrode_hippocampus_1bf9d93e_spikesorting/sorting_cached.npz
Adding sorting: S-54c0496371ba
Workspace URI: sortingview-workspace:jlemwfwjojnh?label=franklab_test
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Input In [24], in <cell line: 14>()
     11 W.set_sorting_curation_authorized_users(sorting_id=sorting_id, user_ids=['kyuhyun9056@gmail.com'])
     13 # Prepare a visualization and print the figURL
---> 14 url2 = W.spikesortingview(recording_id=recording_id, sorting_id=sorting_id, label='Franklab test workspace')
     15 print(url2)

File ~/repos/sortingview/sortingview/workspace/_spikesortingview.py:7, in spikesortingview(self, recording_id, sorting_id, label)
      5 def spikesortingview(self, *, recording_id: str, sorting_id: str, label: str):
      6     R = self.get_recording_extractor(recording_id)
----> 7     S = self.get_sorting_extractor(sorting_id)
      9     curation_feed: Union[kcl.Feed, None] = self.get_curation_feed_for_sorting(sorting_id)
     10     curation_feed_uri: Union[str, None] = curation_feed.uri if curation_feed is not None else None

File ~/repos/sortingview/sortingview/workspace/Workspace.py:202, in Workspace.get_sorting_extractor(self, sorting_id)
    200 s = self.get_sorting_record(sorting_id)
    201 sorting_object = s['sortingObject']
--> 202 return load_sorting_extractor(sorting_object)

File ~/repos/sortingview/sortingview/load_extractors/load_sorting_extractor.py:35, in load_sorting_extractor(sorting_object)
     33 elif sorting_format == 'npz':
     34     npz_file_uri: str = data['npz_file_uri']
---> 35     npz_file_path = kcl.load_bytes(npz_file_uri)
     36     if npz_file_path is None:
     37         raise Exception(f'Unable to load npz file: {npz_file_uri}')

TypeError: load_bytes() missing 2 required positional arguments: 'start' and 'end'
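
The TypeError comes from kcl.load_bytes being called with only a URI while that function also expects start/end byte offsets. Purely as an illustration (an assumption about the intent, not necessarily the fix that was applied), loading the whole file to a local path would look more like this:

```python
import kachery_cloud as kcl

npz_file_uri = 'sha1://...'  # placeholder URI
# load_file downloads the full file and returns a local path (or None if it cannot be found)
npz_file_path = kcl.load_file(npz_file_uri)
if npz_file_path is None:
    raise Exception(f'Unable to load npz file: {npz_file_uri}')
```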
magland commented 2 years ago

@khl02007 please try again now after pulling the latest on the v2 branch.

khl02007 commented 2 years ago

Seems like it cannot yet handle multi-segment recordings:

Adding recording: R-ec472a7b159c
Adding sorting: S-0828afc3406d
Workspace URI: sortingview-workspace:efojmutvzofn?label=franklab_test
Preparing spikesortingview data
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [18], in <cell line: 14>()
     11 W.set_sorting_curation_authorized_users(sorting_id=sorting_id, user_ids=['kyuhyun9056@gmail.com'])
     13 # Prepare a visualization and print the figURL
---> 14 url2 = W.spikesortingview(recording_id=recording_id, sorting_id=sorting_id, label='Franklab test workspace')
     15 print(url2)

File ~/repos/sortingview/sortingview/workspace/_spikesortingview.py:13, in spikesortingview(self, recording_id, sorting_id, label)
     10 curation_feed_uri: Union[str, None] = curation_feed.uri if curation_feed is not None else None
     12 print('Preparing spikesortingview data')
---> 13 X = SpikeSortingView.create(
     14     recording=R,
     15     sorting=S,
     16     segment_duration_sec=60 * 20,
     17     snippet_len=(20, 20),
     18     max_num_snippets_per_segment=100,
     19     channel_neighborhood_size=7
     20 )
     22 unit_metrics = self.get_unit_metrics_for_sorting(sorting_id)
     24 f1 = X.create_summary()

File ~/repos/sortingview/sortingview/SpikeSortingView/SpikeSortingView.py:38, in SpikeSortingView.create(recording, sorting, segment_duration_sec, snippet_len, max_num_snippets_per_segment, channel_neighborhood_size)
     29 @staticmethod
     30 def create(*,
     31     recording: si.BaseRecording,
   (...)
     36     channel_neighborhood_size: int
     37 ):
---> 38     data_uri = prepare_spikesortingview_data(
     39         recording=recording,
     40         sorting=sorting,
     41         segment_duration_sec=segment_duration_sec,
     42         snippet_len=snippet_len,
     43         max_num_snippets_per_segment=max_num_snippets_per_segment,
     44         channel_neighborhood_size=channel_neighborhood_size
     45     )
     46     return SpikeSortingView(data_uri)

File ~/repos/sortingview/sortingview/SpikeSortingView/prepare_spikesortingview_data.py:39, in prepare_spikesortingview_data(recording, sorting, segment_duration_sec, snippet_len, max_num_snippets_per_segment, channel_neighborhood_size)
     37 channel_ids = np.array(recording.get_channel_ids()).astype(np.int32)
     38 sampling_frequency = recording.get_sampling_frequency()
---> 39 num_frames = recording.get_num_frames()
     40 num_frames_per_segment = math.ceil(segment_duration_sec * sampling_frequency)
     41 num_segments = math.ceil(num_frames / num_frames_per_segment)

File ~/repos/spikeinterface/spikeinterface/core/baserecording.py:76, in BaseRecording.get_num_samples(self, segment_index)
     75 def get_num_samples(self, segment_index=None):
---> 76     segment_index = self._check_segment_index(segment_index)
     77     return self._recording_segments[segment_index].get_num_samples()

File ~/repos/spikeinterface/spikeinterface/core/base.py:65, in BaseExtractor._check_segment_index(self, segment_index)
     63         return 0
     64     else:
---> 65         raise ValueError("Multi-segment object. Provide 'segment_index'")
     66 else:
     67     return segment_index

ValueError: Multi-segment object. Provide 'segment_index'
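
For reference, spikeinterface requires an explicit segment index when querying frame counts on a multi-segment recording, so downstream code has to loop over segments rather than call get_num_frames() bare. A minimal sketch (not sortingview code):

```python
import spikeinterface as si

def total_num_frames(recording: si.BaseRecording) -> int:
    # Multi-segment recordings require an explicit segment_index for frame counts
    return sum(
        recording.get_num_frames(segment_index=i)
        for i in range(recording.get_num_segments())
    )
```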
khl02007 commented 2 years ago

I tried concatenating with si.concatenate_recordings([recording]), but ConcatenateSegmentRecording is not supported:

---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
Input In [20], in <cell line: 2>()
      1 W: sv.Workspace = sv.create_workspace(label='franklab_test')
----> 2 recording_id = W.add_recording(label='recording1', recording=recording)
      3 sorting_id = W.add_sorting(recording_id=recording_id, label='true', sorting=sorting)
      5 # Print the workspace URI for loading at a later time
      6 # You may want to store this in a database

File ~/repos/sortingview/sortingview/workspace/Workspace.py:52, in Workspace.add_recording(self, label, recording)
     51 def add_recording(self, *, label: str, recording: si.BaseRecording):
---> 52     recording_object = get_recording_object(recording)
     53     recording_id = 'R-' + _random_id()
     54     if recording_id in self._recording_records:

File ~/repos/sortingview/sortingview/load_extractors/get_recording_object.py:37, in get_recording_object(recording)
     32     recording_object = {
     33         'recording_format': 'BinaryRecordingExtractor',
     34         'data': data
     35     }
     36 else:
---> 37     raise Exception('Unable to create sortingview object from recording')
     38 setattr(recording, 'sortingview_object', recording_object)
     39 return recording_object

Exception: Unable to create sortingview object from recording
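
For context, the concatenation attempt would look roughly like this in spikeinterface (a sketch; the recording path is a placeholder). The result is a ConcatenateSegmentRecording, which is what get_recording_object rejected here:

```python
import spikeinterface as si

# The user's multi-segment recording (placeholder path)
recording = si.load_extractor('/path/to/multisegment_recording')

# Concatenate all segments into a single-segment recording
recording_concat = si.concatenate_recordings([recording])
print(type(recording_concat).__name__, recording_concat.get_num_segments())
```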
khl02007 commented 2 years ago

I then saved the concatenated recording and reloaded it, which turned it into a binary recording. Then I tried again and got this:
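
Presumably the save/reload step looked roughly like this (an assumption on my part; the folder path matches the log below):

```python
import spikeinterface as si

# recording_concat is the ConcatenateSegmentRecording from the previous comment.
# Saving writes the traces to .raw binary files in the folder.
recording_concat.save(folder='/stelmo/kyu/test_rec')

# Reloading from the folder gives a recording backed by those binary files
recording = si.load_extractor('/stelmo/kyu/test_rec')
```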

Computing sha1 of /stelmo/kyu/test_rec/traces_cached_seg0.raw
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Input In [16], in <cell line: 2>()
      1 W: sv.Workspace = sv.create_workspace(label='franklab_test')
----> 2 recording_id = W.add_recording(label='recording1', recording=recording)
      3 sorting_id = W.add_sorting(recording_id=recording_id, label='true', sorting=sorting)
      5 # Print the workspace URI for loading at a later time
      6 # You may want to store this in a database

File ~/repos/sortingview/sortingview/workspace/Workspace.py:59, in Workspace.add_recording(self, label, recording)
     54 if recording_id in self._recording_records:
     55     raise Exception(f'Duplicate recording ID: {recording_id}')
     56 x = {
     57     'recordingId': recording_id,
     58     'recordingLabel': label,
---> 59     'recordingPath': kcl.store_json(recording_object, label=f'{label}.json'),
     60     'recordingObject': recording_object,
     61     'description': f'Imported from Python: {label}'
     62 }
     63 print(f'Adding recording: {recording_id}')
     64 self._feed.append_message({
     65     'action': {
     66         'type': 'ADD_RECORDING',
     67         'recording': recording_object
     68     }
     69 })

File ~/miniconda3/envs/spyglass/lib/python3.8/site-packages/kachery_cloud/core.py:18, in store_json(x, separators, indent, label, cache_locally)
     16 def store_json(x: Any, *, separators=(',', ':'), indent=None, label: Union[str, None]=None, cache_locally: bool=False) -> str:
     17     import simplejson
---> 18     text = simplejson.dumps(x, separators=separators, indent=indent, allow_nan=False)
     19     return store_text(text, label=label, cache_locally=cache_locally)

File ~/miniconda3/envs/spyglass/lib/python3.8/site-packages/simplejson/__init__.py:398, in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, encoding, default, use_decimal, namedtuple_as_object, tuple_as_array, bigint_as_string, sort_keys, item_sort_key, for_json, ignore_nan, int_as_string_bitcount, iterable_as_array, **kw)
    396 if cls is None:
    397     cls = JSONEncoder
--> 398 return cls(
    399     skipkeys=skipkeys, ensure_ascii=ensure_ascii,
    400     check_circular=check_circular, allow_nan=allow_nan, indent=indent,
    401     separators=separators, encoding=encoding, default=default,
    402     use_decimal=use_decimal,
    403     namedtuple_as_object=namedtuple_as_object,
    404     tuple_as_array=tuple_as_array,
    405     iterable_as_array=iterable_as_array,
    406     bigint_as_string=bigint_as_string,
    407     sort_keys=sort_keys,
    408     item_sort_key=item_sort_key,
    409     for_json=for_json,
    410     ignore_nan=ignore_nan,
    411     int_as_string_bitcount=int_as_string_bitcount,
    412     **kw).encode(obj)

File ~/miniconda3/envs/spyglass/lib/python3.8/site-packages/simplejson/encoder.py:296, in JSONEncoder.encode(self, o)
    292         return encode_basestring(o)
    293 # This doesn't pass the iterator directly to ''.join() because the
    294 # exceptions aren't as detailed.  The list call should be roughly
    295 # equivalent to the PySequence_Fast that ''.join() would do.
--> 296 chunks = self.iterencode(o, _one_shot=True)
    297 if not isinstance(chunks, (list, tuple)):
    298     chunks = list(chunks)

File ~/miniconda3/envs/spyglass/lib/python3.8/site-packages/simplejson/encoder.py:378, in JSONEncoder.iterencode(self, o, _one_shot)
    369     _iterencode = _make_iterencode(
    370         markers, self.default, _encoder, self.indent, floatstr,
    371         self.key_separator, self.item_separator, self.sort_keys,
   (...)
    375         self.item_sort_key, self.encoding, self.for_json,
    376         self.iterable_as_array, Decimal=decimal.Decimal)
    377 try:
--> 378     return _iterencode(o, 0)
    379 finally:
    380     key_memo.clear()

File ~/miniconda3/envs/spyglass/lib/python3.8/site-packages/simplejson/encoder.py:272, in JSONEncoder.default(self, o)
    254 def default(self, o):
    255     """Implement this method in a subclass such that it returns
    256     a serializable object for ``o``, or calls the base implementation
    257     (to raise a ``TypeError``).
   (...)
    270 
    271     """
--> 272     raise TypeError('Object of type %s is not JSON serializable' %
    273                     o.__class__.__name__)

TypeError: Object of type ndarray is not JSON serializable
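
The TypeError at the bottom is a generic numpy-to-JSON issue: the recording_object built for the reloaded binary recording evidently contains an ndarray somewhere that simplejson cannot encode. A minimal illustration of the failure and the usual workaround (the 'channel_ids' field is hypothetical; this is not sortingview code):

```python
import numpy as np
import simplejson

obj = {'channel_ids': np.arange(4, dtype=np.int32)}

try:
    simplejson.dumps(obj, allow_nan=False)
except TypeError as e:
    print(e)  # Object of type ndarray is not JSON serializable

# The usual workaround: convert arrays to plain Python lists before encoding
obj_serializable = {'channel_ids': obj['channel_ids'].tolist()}
print(simplejson.dumps(obj_serializable, allow_nan=False))
```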
magland commented 2 years ago

Seems like it cannot yet handle multi-segment recordings

Thanks, I'll add support for multi-segment recordings.

@khl02007

magland commented 2 years ago

@khl02007 please try now. We still don't support multi-segment recordings. However, we now support ConcatenateSegmentRecording, and I think the BinaryRecordingExtractor code should now be fixed.

khl02007 commented 2 years ago

@magland

When I pass in a ConcatenateSegmentRecording, I get the following. The sorting has 14 units, all of which have a nonzero number of spikes.

Adding recording: R-381b6bda938f
Adding sorting: S-42b0f09d4efd
Workspace URI: sortingview-workspace:nukrfijutemr?label=franklab_test
Preparing spikesortingview data

---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
Input In [25], in <cell line: 14>()
     11 W.set_sorting_curation_authorized_users(sorting_id=sorting_id, user_ids=['kyuhyun9056@gmail.com'])
     13 # Prepare a visualization and print the figURL
---> 14 url2 = W.spikesortingview(recording_id=recording_id, sorting_id=sorting_id, label='Franklab test workspace')
     15 print(url2)

File ~/repos/sortingview/sortingview/workspace/_spikesortingview.py:13, in spikesortingview(self, recording_id, sorting_id, label)
     10 curation_feed_uri: Union[str, None] = curation_feed.uri if curation_feed is not None else None
     12 print('Preparing spikesortingview data')
---> 13 X = SpikeSortingView.create(
     14     recording=R,
     15     sorting=S,
     16     segment_duration_sec=60 * 20,
     17     snippet_len=(20, 20),
     18     max_num_snippets_per_segment=100,
     19     channel_neighborhood_size=7
     20 )
     22 unit_metrics = self.get_unit_metrics_for_sorting(sorting_id)
     24 f1 = X.create_summary()

File ~/repos/sortingview/sortingview/SpikeSortingView/SpikeSortingView.py:38, in SpikeSortingView.create(recording, sorting, segment_duration_sec, snippet_len, max_num_snippets_per_segment, channel_neighborhood_size)
     29 @staticmethod
     30 def create(*,
     31     recording: si.BaseRecording,
   (...)
     36     channel_neighborhood_size: int
     37 ):
---> 38     data_uri = prepare_spikesortingview_data(
     39         recording=recording,
     40         sorting=sorting,
     41         segment_duration_sec=segment_duration_sec,
     42         snippet_len=snippet_len,
     43         max_num_snippets_per_segment=max_num_snippets_per_segment,
     44         channel_neighborhood_size=channel_neighborhood_size
     45     )
     46     return SpikeSortingView(data_uri)

File ~/repos/sortingview/sortingview/SpikeSortingView/prepare_spikesortingview_data.py:100, in prepare_spikesortingview_data(recording, sorting, segment_duration_sec, snippet_len, max_num_snippets_per_segment, channel_neighborhood_size)
     98     peak_channel_id = fallback_unit_peak_channel_ids.get(str(unit_id), None)
     99 if peak_channel_id is None:
--> 100     raise Exception(f'Peak channel not found for unit {unit_id}. This is probably because no spikes were found in any segment for this unit.')
    101 channel_neighborhood = unit_channel_neighborhoods[str(unit_id)]
    102 f.create_dataset(f'unit/{unit_id}/peak_channel_id', data=np.array([peak_channel_id]).astype(np.int32))

Exception: Peak channel not found for unit 1. This is probably because no spikes were found in any segment for this unit.
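
For reference, a quick way to check the claim about spike counts is to count spikes per unit and per segment with spikeinterface (a sketch, not sortingview code; `sorting` is the NpzSortingExtractor from this thread):

```python
# Count spikes per unit and per segment
for unit_id in sorting.get_unit_ids():
    counts = [
        len(sorting.get_unit_spike_train(unit_id=unit_id, segment_index=seg))
        for seg in range(sorting.get_num_segments())
    ]
    print(f'unit {unit_id}: {counts}')
```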
khl02007 commented 2 years ago

@magland

When I pass a binary recording with a single segment, I get the following:

Adding recording: R-4105a5feb4fb
Adding sorting: S-8a4ae1075caf
Workspace URI: sortingview-workspace:mwfqoahozwkd?label=franklab_test
Preparing spikesortingview data

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [43], in <cell line: 14>()
     11 W.set_sorting_curation_authorized_users(sorting_id=sorting_id, user_ids=['kyuhyun9056@gmail.com'])
     13 # Prepare a visualization and print the figURL
---> 14 url2 = W.spikesortingview(recording_id=recording_id, sorting_id=sorting_id, label='Franklab test workspace')
     15 print(url2)

File ~/repos/sortingview/sortingview/workspace/_spikesortingview.py:13, in spikesortingview(self, recording_id, sorting_id, label)
     10 curation_feed_uri: Union[str, None] = curation_feed.uri if curation_feed is not None else None
     12 print('Preparing spikesortingview data')
---> 13 X = SpikeSortingView.create(
     14     recording=R,
     15     sorting=S,
     16     segment_duration_sec=60 * 20,
     17     snippet_len=(20, 20),
     18     max_num_snippets_per_segment=100,
     19     channel_neighborhood_size=7
     20 )
     22 unit_metrics = self.get_unit_metrics_for_sorting(sorting_id)
     24 f1 = X.create_summary()

File ~/repos/sortingview/sortingview/SpikeSortingView/SpikeSortingView.py:38, in SpikeSortingView.create(recording, sorting, segment_duration_sec, snippet_len, max_num_snippets_per_segment, channel_neighborhood_size)
     29 @staticmethod
     30 def create(*,
     31     recording: si.BaseRecording,
   (...)
     36     channel_neighborhood_size: int
     37 ):
---> 38     data_uri = prepare_spikesortingview_data(
     39         recording=recording,
     40         sorting=sorting,
     41         segment_duration_sec=segment_duration_sec,
     42         snippet_len=snippet_len,
     43         max_num_snippets_per_segment=max_num_snippets_per_segment,
     44         channel_neighborhood_size=channel_neighborhood_size
     45     )
     46     return SpikeSortingView(data_uri)

File ~/repos/sortingview/sortingview/SpikeSortingView/prepare_spikesortingview_data.py:39, in prepare_spikesortingview_data(recording, sorting, segment_duration_sec, snippet_len, max_num_snippets_per_segment, channel_neighborhood_size)
     37 channel_ids = np.array(recording.get_channel_ids()).astype(np.int32)
     38 sampling_frequency = recording.get_sampling_frequency()
---> 39 num_frames = recording.get_num_frames()
     40 num_frames_per_segment = math.ceil(segment_duration_sec * sampling_frequency)
     41 num_segments = math.ceil(num_frames / num_frames_per_segment)

File ~/repos/spikeinterface/spikeinterface/core/baserecording.py:76, in BaseRecording.get_num_samples(self, segment_index)
     75 def get_num_samples(self, segment_index=None):
---> 76     segment_index = self._check_segment_index(segment_index)
     77     return self._recording_segments[segment_index].get_num_samples()

File ~/repos/spikeinterface/spikeinterface/core/base.py:65, in BaseExtractor._check_segment_index(self, segment_index)
     63         return 0
     64     else:
---> 65         raise ValueError("Multi-segment object. Provide 'segment_index'")
     66 else:
     67     return segment_index

ValueError: Multi-segment object. Provide 'segment_index'