Closed wendtalexander closed 10 months ago
Spyking circus 2 should be able to work with a single-channel electrode. Have you attached a proper probe to the recording? Even a dummy 1D probe:
```python
import probeinterface as pb

probe = pb.generate_linear_probe(num_elec=2)
probe.set_device_channel_indices([0, 1])
recording = recording.set_probe(probe)  # set_probe returns a new recording
```
You can edit/play with the channel positions and/or create two separate shanks. After that, you should be able to launch the sorters.
I had forgotten to attach a probe, but even after fixing that I ran into trouble! I tried the code you suggested @yger, but hit this RuntimeError:
```
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
```
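(For reference: on Windows, multiprocessing uses the "spawn" start method, so any script that launches worker processes — e.g. a sorter run with n_jobs > 1 — must be reachable only through an entry-point guard. A generic stdlib sketch, not SpikeInterface-specific:)

```python
import multiprocessing as mp

def square(x):
    return x * x

def main():
    # Code that spawns workers must run only under the __main__ guard
    # on Windows, otherwise each spawned child re-imports the module
    # and tries to spawn again -> the RuntimeError above.
    with mp.Pool(processes=2) as pool:
        return pool.map(square, [1, 2, 3])

if __name__ == "__main__":
    print(main())  # [1, 4, 9]
```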
The second thing I tried was to get the spike sorting working with only one channel, like this:
```python
probe1 = pi.generate_linear_probe(num_elec=1)
probe1.set_device_channel_indices([0])
recording_one_channel = recording.set_probe(probe1)
sorter = ss.run_sorter("spykingcircus2", recording_one_channel, remove_existing_folder=True)
```
The output was the same RuntimeError, but I also got some RuntimeWarnings from spikeinterface. Some examples from the console output:
```
preprocessing\normalize_scale.py:289: RuntimeWarning: divide by zero encountered in divide
  gain = 1 / mads
preprocessing\normalize_scale.py:290: RuntimeWarning: invalid value encountered in divide
  offset = -medians / mads
preprocessing\normalize_scale.py:22: RuntimeWarning: invalid value encountered in multiply
  scaled_traces = traces * self.gain[:, channel_indices] + self.offset[:, channel_indices]
```
I also checked the peak-detection code in sorters/internal/spyking_circus2.py (line 89): it uses the "locally_exclusive" method, with no obvious way to change it (e.g. by providing params in _default_params) so that peak detection is done on only one channel.
Indeed, if you have only one channel, subtracting the median leaves you with a vector of zeros, I guess... I'll make a patch in SC to detect this automatically. Meanwhile, you should preprocess your recording yourself and set apply_preprocessing to False in the parameters:
```python
rec = si.bandpass_filter(recording, freq_min=150, dtype='float32')
sorting = si.run_sorter("spykingcircus2", rec, apply_preprocessing=False)
```
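A minimal numpy sketch of the mechanism behind those divide-by-zero warnings (mirroring the explanation above, not SpikeInterface's actual code): a common-median reference across a single channel subtracts each sample from itself, so the referenced traces are all zero, their MAD is zero, and gain = 1 / mad blows up.

```python
import numpy as np

# One channel: shape (n_samples, 1). Subtracting the across-channel
# median of each sample removes the sample itself -> all zeros.
traces = np.random.randn(1000, 1).astype("float32")
referenced = traces - np.median(traces, axis=1, keepdims=True)

# A z-scoring step computing gain = 1 / MAD then divides by zero.
mad = np.median(np.abs(referenced - np.median(referenced, axis=0)), axis=0)
print(referenced.any(), mad)  # False [0.]
```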
I tried with apply_preprocessing set to False but got a ValueError in the clustering. The console output is:
```
  File "C:\Users\awendt\.pyenv-win-venv\envs\playback\lib\site-packages\spikeinterface\sorters\basesorter.py", line 254, in run_from_folder
    SorterClass._run_from_folder(sorter_output_folder, sorter_params, verbose)
  File "C:\Users\awendt\.pyenv-win-venv\envs\playback\lib\site-packages\spikeinterface\sorters\internal\spyking_circus2.py", line 112, in _run_from_folder
    labels, peak_labels = find_cluster_from_peaks(
  File "C:\Users\awendt\.pyenv-win-venv\envs\playback\lib\site-packages\spikeinterface\sortingcomponents\clustering\main.py", line 42, in find_cluster_from_peaks
    labels, peak_labels = method_class.main_function(recording, peaks, params)
  File "C:\Users\awendt\.pyenv-win-venv\envs\playback\lib\site-packages\spikeinterface\sortingcomponents\clustering\random_projections.py", line 153, in main_function
    clustering = hdbscan.hdbscan(hdbscan_data, **d["hdbscan_kwargs"])
  File "C:\Users\awendt\.pyenv-win-venv\envs\playback\lib\site-packages\hdbscan\hdbscan_.py", line 837, in hdbscan
    (single_linkage_tree, result_min_span_tree) = memory.cache(
  File "C:\Users\awendt\.pyenv-win-venv\envs\playback\lib\site-packages\joblib\memory.py", line 353, in __call__
    return self.func(*args, **kwargs)
  File "C:\Users\awendt\.pyenv-win-venv\envs\playback\lib\site-packages\hdbscan\hdbscan_.py", line 339, in _hdbscan_boruvka_kdtree
    tree = KDTree(X, metric=metric, leaf_size=leaf_size, **kwargs)
  File "sklearn\neighbors\_binary_tree.pxi", line 826, in sklearn.neighbors._kd_tree.BinaryTree.__init__
  File "C:\Users\awendt\.pyenv-win-venv\envs\playback\lib\site-packages\sklearn\utils\validation.py", line 957, in check_array
    _assert_all_finite(
  File "C:\Users\awendt\.pyenv-win-venv\envs\playback\lib\site-packages\sklearn\utils\validation.py", line 122, in _assert_all_finite
    _assert_all_finite_element_wise(
  File "C:\Users\awendt\.pyenv-win-venv\envs\playback\lib\site-packages\sklearn\utils\validation.py", line 171, in _assert_all_finite_element_wise
    raise ValueError(msg_err)
ValueError: Input contains NaN.
```
I also set the n_jobs param to 1 to fix the RuntimeError:
```python
params = ss.get_default_sorter_params("spykingcircus2")
params["apply_preprocessing"] = False
params["job_kwargs"] = dict(n_jobs=1)
```
Can you share the recording with me such that I could test the SC2 pipeline on it?
How can I share the data with you? I can't do it over GitHub!
WeTransfer ?
Apparently I can't share the data for security reasons... but I'm trying out the sorting components now! Thank you for the always-fast responses!
Regarding the sorting algorithms implemented in spikeinterface, I'm currently running into problems when executing 'spykingcircus2' or 'tridesclous2'. The error message is: "Exception: There are no channel locations", and I don't know how to fix it, because the recording was made with only two separate electrodes. All the documentation uses 32 channels or more... Is there a sorting algorithm that does not compute peaks from multiple channels? Or is there a keyword argument I missed that would solve the issue?