sarahthon opened 2 months ago
I adjusted the autothreshold level to 10 to include all channels, and now it works. But I don't think including more noise in a few recordings is a good solution.
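For reference, a rough sketch of how a bad-channel threshold like this can be tuned with SpikeInterface's detect_bad_channels; the plugin's "autothreshold" setting may map to a different parameter, so treat the names below as illustrative assumptions:

```python
# Sketch only: the expipe-plugin-cinpla "autothreshold" setting may correspond to a
# different parameter; paths and variable names here are placeholders.
import spikeinterface.full as si

recording = si.read_openephys("path/to/recording")  # hypothetical path
recording_f = si.bandpass_filter(recording, freq_min=300, freq_max=6000)

# With the "mad" method, raising std_mad_threshold makes detection more permissive,
# so fewer channels are flagged as bad (and more noisy channels are kept).
bad_channel_ids, channel_labels = si.detect_bad_channels(
    recording_f, method="mad", std_mad_threshold=10
)
print("Detected bad channels:", bad_channel_ids)

recording_clean = recording_f.remove_channels(bad_channel_ids)
```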
@alejoe91 Are channel IDs counted starting from 0 or 1? If they start from 1, this recording has three bad channels on one tetrode (CH25, CH26, CH27). Would having just one functional channel on that tetrode pose issues for the PCA? Additionally, does the sparse=True keyword argument matter in this context?
Hi @nicolossus
Yeah, they should start from 0, but you can double-check in the settings.xml. If only one channel is left after bad channel removal, the first issue will be spike sorting, since it will be tricky to isolate units.
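If it helps to double-check the indexing, here is a quick sketch (assuming the recording is loaded with SpikeInterface; channel names are just illustrative):

```python
# Sketch: inspect channel IDs to see whether they are 0- or 1-based, and check how
# many channels per tetrode/group remain after removing the bad ones.
import spikeinterface.full as si

recording = si.read_openephys("path/to/recording")  # hypothetical path
print(recording.channel_ids)          # e.g. ['CH1', ..., 'CH32'] or numeric 0-based IDs

bad_channel_ids = ["CH25", "CH26", "CH27"]           # from the bad-channel detection
recording_clean = recording.remove_channels(bad_channel_ids)
print(recording_clean.get_num_channels())            # remaining channels overall
print(recording_clean.get_channel_groups())          # tetrode/group of each remaining channel
```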
In another comment (that I'm not finding) you asked about switching to 0.101.1. This would be highly recommended from my side, and it shouldn't take too long. I believe this specific issue is also solved there with additional protections :) If you have time to spearhead this, I can help out in case of need, or with a couple of meetings along the way!
The main change is in the postprocessing module, and you can find a guide to upgrade here.
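For a flavour of what the upgrade involves (a sketch only, not the plugin's actual code): from 0.101 the WaveformExtractor-based postprocessing is replaced by a SortingAnalyzer, so the waveform/PCA/QC/export steps look roughly like this:

```python
# Rough sketch of the SortingAnalyzer workflow in spikeinterface >= 0.101;
# `recording_clean` and `sorting` are placeholders for the plugin's own objects.
import spikeinterface.full as si

analyzer = si.create_sorting_analyzer(sorting=sorting, recording=recording_clean, sparse=True)

# Postprocessing steps are now explicit extensions computed on the analyzer
analyzer.compute("random_spikes")
analyzer.compute("waveforms")
analyzer.compute("templates")
analyzer.compute("noise_levels")
analyzer.compute("principal_components")
analyzer.compute("quality_metrics")

# The Phy exporter takes the analyzer directly
si.export_to_phy(analyzer, output_folder="phy_output")
```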
@alejoe91 Thanks! I have some time the coming weeks, so I can spearhead the transition. I'll let you know if there are problems :)
Hi again, I have also run into another problem with spike sorting, which does not occur with all actions, only a few. Is this related to the resolved issue #82?
Processing 022-200322-6
Cleaning up existing NWB file
Preprocessing recording:
    Num channels: 32
    Duration: 927.56 s
    Detected bad channels: ['CH1' 'CH4' 'CH25' 'CH26' 'CH27' 'CH29' 'CH32']
    Active channels: 25
Saving preprocessed recording
Spike sorting with mountainsort4 using installed sorter
Found 46 units!
Removed 0 units with less than 3 spikes
Postprocessing
Extracting waveforms
Computing QC metrics
Exporting to phy
ERROR: unable to process 022-200322-6
concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/projects/ec109/conda-envs/cinpla/lib/python3.11/concurrent/futures/process.py", line 261, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/projects/ec109/conda-envs/cinpla/lib/python3.11/concurrent/futures/process.py", line 210, in _process_chunk
    return [fn(*args) for args in chunk]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/projects/ec109/conda-envs/cinpla/lib/python3.11/concurrent/futures/process.py", line 210, in <listcomp>
    return [fn(*args) for args in chunk]
            ^^^^^^^^^
File "/projects/ec109/conda-envs/cinpla/lib/python3.11/site-packages/spikeinterface/core/job_tools.py", line 439, in function_wrapper
return _func(segment_index, start_frame, end_frame, _worker_ctx)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/projects/ec109/conda-envs/cinpla/lib/python3.11/site-packages/spikeinterface/postprocessing/principal_component.py", line 661, in _all_pc_extractor_chunk
all_pcs[i, :, c] = pca_model[chan_ind].transform(w)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/projects/ec109/conda-envs/cinpla/lib/python3.11/site-packages/sklearn/utils/_set_output.py", line 295, in wrapped
data_to_wrap = f(self, X, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/projects/ec109/conda-envs/cinpla/lib/python3.11/site-packages/sklearn/decomposition/_incremental_pca.py", line 409, in transform
return super().transform(X)
^^^^^^^^^^^^^^^^^^^^
File "/projects/ec109/conda-envs/cinpla/lib/python3.11/site-packages/sklearn/utils/_set_output.py", line 295, in wrapped
data_to_wrap = f(self, X, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/projects/ec109/conda-envs/cinpla/lib/python3.11/site-packages/sklearn/decomposition/_base.py", line 143, in transform
check_is_fitted(self)
File "/projects/ec109/conda-envs/cinpla/lib/python3.11/site-packages/sklearn/utils/validation.py", line 1622, in check_is_fitted
raise NotFittedError(msg % {"name": type(estimator).__name__})
sklearn.exceptions.NotFittedError: This IncrementalPCA instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator.
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/fp/projects01/ec109/software/expipe-plugin-cinpla/src/expipe_plugin_cinpla/widgets/process.py", line 300, in on_run
    process.process_ecephys(
  File "/fp/projects01/ec109/software/expipe-plugin-cinpla/src/expipe_plugin_cinpla/scripts/process.py", line 281, in process_ecephys
    sexp.export_to_phy(
  File "/projects/ec109/conda-envs/cinpla/lib/python3.11/site-packages/spikeinterface/exporters/to_phy.py", line 245, in export_to_phy
    pc.run_for_all_spikes(output_folder / "pc_features.npy", **job_kwargs)
  File "/projects/ec109/conda-envs/cinpla/lib/python3.11/site-packages/spikeinterface/postprocessing/principal_component.py", line 373, in run_for_all_spikes
    processor.run()
  File "/projects/ec109/conda-envs/cinpla/lib/python3.11/site-packages/spikeinterface/core/job_tools.py", line 401, in run
    for res in results:
  File "/projects/ec109/conda-envs/cinpla/lib/python3.11/concurrent/futures/process.py", line 620, in _chain_from_iterable_of_lists
    for element in iterable:
  File "/projects/ec109/conda-envs/cinpla/lib/python3.11/concurrent/futures/_base.py", line 619, in result_iterator
    yield _result_or_cancel(fs.pop())
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/projects/ec109/conda-envs/cinpla/lib/python3.11/concurrent/futures/_base.py", line 317, in _result_or_cancel
    return fut.result(timeout)
           ^^^^^^^^^^^^^^^^^^^
  File "/projects/ec109/conda-envs/cinpla/lib/python3.11/concurrent/futures/_base.py", line 456, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/projects/ec109/conda-envs/cinpla/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
sklearn.exceptions.NotFittedError: This IncrementalPCA instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator.
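In case it is useful as a stopgap while the underlying issue is tracked down (this is only a guess on my part, not a confirmed fix): the step that fails is the per-spike PC-feature computation, which the 0.10x exporter lets you skip:

```python
# Workaround sketch only: skip per-spike PC features when exporting to Phy so that
# run_for_all_spikes (where the NotFittedError is raised) is not called.
# `waveform_extractor` and the output folder are placeholders.
from spikeinterface.exporters import export_to_phy

export_to_phy(
    waveform_extractor,
    output_folder="phy_output",
    compute_pc_features=False,   # avoids the IncrementalPCA transform step
    compute_amplitudes=True,
    copy_binary=True,
)
```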