Closed: paul-aparicio closed this issue 1 week ago.
On main, regularization of the whitening is now turned off by default in SC2, so the bug should be gone.
Thanks! It now runs to completion. However, a warning message that I am puzzled about kept showing up:
```
/media/paul/storage/pdev/spikeinterface/src/spikeinterface/core/job_tools.py:103: UserWarning: `n_jobs` is not set so parallel processing is disabled! To speed up computations, it is recommended to set n_jobs either globally (with the `spikeinterface.set_global_job_kwargs()` function) or locally (with the `n_jobs` argument). Use `spikeinterface.set_global_job_kwargs?` for more information about job_kwargs.
  warnings.warn(
```
It looks like the parameter has a default setting for n_jobs, so I don't understand the message. I tried setting n_jobs to different values in the sorter parameters and also tried setting the global variable as suggested in the message, but I still got the same warning. Thanks again for your help.
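For reference, setting it globally, as the warning suggests, looks roughly like this (n_jobs=8 is just a placeholder value):

```python
import spikeinterface.full as si

# Set n_jobs globally, as suggested in the warning (8 is a placeholder value)
si.set_global_job_kwargs(n_jobs=8)

# Check what the global job kwargs currently are
print(si.get_global_job_kwargs())
```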
Yes, sorry, this will be fixed soon. Internally, there is one step in SC2 (after clustering) where the n_jobs argument is not properly propagated and is set to 1. I'll fix that, but it does not drastically harm the speed, since this step is not the main bottleneck.
Ok, Thanks!
Hello, I am running spikeinterface v0.101.2 and attempting to sort data from a 96-channel Blackrock Utah probe. Since the contacts on each probe are ~400 um apart, I am attempting to sort by grouping property (where each shank of the probe is a group):
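Roughly like the following sketch (this is a paraphrase, not my exact snippet; the recording path, probe file, and output folder are placeholders):

```python
import spikeinterface.full as si
from probeinterface import read_probeinterface  # probe layout file is a placeholder

# Load the Blackrock recording (placeholder path)
recording = si.read_blackrock("/path/to/recording.ns6")

# Attach the probe group; group_mode="by_probe" (or "by_shank") writes a
# "group" property with one group per probe/shank
probegroup = read_probeinterface("/path/to/utah_probes.json")
recording = recording.set_probegroup(probegroup, group_mode="by_probe")

# Run SpyKING CIRCUS 2 separately on each group
sorting = si.run_sorter_by_property(
    "spykingcircus2",
    recording,
    grouping_property="group",
    folder="results_sc",
)
```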
It would appear that it is trying to whiten the data. I get the following error:
Is there a way to shut off whitening? I do not see it as a sorter parameter.
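For reference, this is roughly how I listed the exposed parameters and their descriptions:

```python
import spikeinterface.sorters as ss

# Print the default parameters and their descriptions for SpyKING CIRCUS 2
print(ss.get_default_sorter_params("spykingcircus2"))
print(ss.get_sorter_params_description("spykingcircus2"))
```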
I also tried to just run it as:
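i.e. without the grouping, along these lines (again a paraphrase, with placeholder paths):

```python
import spikeinterface.full as si

# Same Blackrock recording as above (placeholder path)
recording = si.read_blackrock("/path/to/recording.ns6")

# Plain call on the whole recording, without splitting by group
sorting = si.run_sorter("spykingcircus2", recording, folder="results_sc_whole")
```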
I get a different type of error:

```
SpikeSortingError: Spike sorting error trace:
Traceback (most recent call last):
  File "/home/paul/anaconda3/envs/sin/lib/python3.11/site-packages/spikeinterface/sorters/basesorter.py", line 261, in run_from_folder
    SorterClass._run_from_folder(sorter_output_folder, sorter_params, verbose)
  File "/home/paul/anaconda3/envs/sin/lib/python3.11/site-packages/spikeinterface/sorters/internal/spyking_circus2.py", line 229, in _run_from_folder
    labels, peak_labels = find_cluster_from_peaks(
                          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/paul/anaconda3/envs/sin/lib/python3.11/site-packages/spikeinterface/sortingcomponents/clustering/main.py", line 44, in find_cluster_from_peaks
    outputs = method_class.main_function(recording, peaks, params)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/paul/anaconda3/envs/sin/lib/python3.11/site-packages/spikeinterface/sortingcomponents/clustering/circus.py", line 245, in main_function
    probe=recording.get_probe(),
          ^^^^^^^^^^^^^^^^^^^^^
  File "/home/paul/anaconda3/envs/sin/lib/python3.11/site-packages/spikeinterface/core/baserecordingsnippets.py", line 255, in get_probe
    assert len(probes) == 1, "there are several probe use .get_probes() or get_probegroup()"
           ^^^^^^^^^^^^^^^^
AssertionError: there are several probe use .get_probes() or get_probegroup()

Spike sorting failed. You can inspect the runtime trace in /media/paul/Test/TY20210401_640_signalCheck_morning/processed/results_sc/spikeinterface_log.json.
```
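A quick way to confirm how many probes the recording actually carries, using the methods named in the assertion, would be something like:

```python
import spikeinterface.full as si

# Same Blackrock recording as above (placeholder path)
recording = si.read_blackrock("/path/to/recording.ns6")

# The assertion says there is more than one probe attached; this checks it
print(len(recording.get_probes()))   # number of probes attached
print(recording.get_probegroup())    # the full probe group
```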
Thank you for any help!