SpikeInterface / spikeinterface

A Python-based module for creating flexible and robust spike sorting pipelines.
https://spikeinterface.readthedocs.io
MIT License

sorting failed #3411

Open ohkoibln opened 2 months ago

ohkoibln commented 2 months ago

Traceback (most recent call last):
  File "D:\sjwlab\bxy\spikeinterface-main\examples\get_started\quickstart.py", line 64, in <module>
    sorting_KS2 = ss.run_sorter(sorter_name="kilosort2", recording=recording_preprocessed, extra_requirements=["numpy==1.26"],
  File "D:\sjwlab\bxy\spikeinterface-main\examples\get_started\myenv\Lib\site-packages\spikeinterface\sorters\runsorter.py", line 210, in run_sorter
    return run_sorter_container(
  File "D:\sjwlab\bxy\spikeinterface-main\examples\get_started\myenv\Lib\site-packages\spikeinterface\sorters\runsorter.py", line 663, in run_sorter_container
    raise SpikeSortingError(f"Spike sorting in {mode} failed with the following error:\n{run_sorter_output}")
spikeinterface.sorters.utils.misc.SpikeSortingError: Spike sorting in docker failed with the following error:
/sjwlab/bxy/spikeinterface-main/examples/get_started/in_container_sorter_script.py:23: DeprecationWarning: output_folder is deprecated and will be removed in version 0.103.0 Please use folder instead
  sorting = run_sorter_local(
RUNNING SHELL SCRIPT: /sjwlab/bxy/spikeinterface-main/examples/get_started/kilosort2_output/sorter_output/run_kilosort2.sh
Warning: X does not support locale C.UTF-8

Time 0s. Determining good channels..

found 840 threshold crossings in 307.29 seconds of data

found 511 bad channels

Time 35s. Computing whitening matrix..

Getting channel whitening matrix...

Channel-whitening matrix computed.

Time 36s. Loading raw data and applying filters...

Time 52s. Finished preprocessing 94 batches.

random seed for clusterSingleBatches: 1

----------------------------------------Error using indexing

Subscript indices must either be real positive integers or logicals.
Error running kilosort2
Traceback (most recent call last):
  File "/sjwlab/bxy/spikeinterface-main/examples/get_started/in_container_sorter_script.py", line 23, in <module>
    sorting = run_sorter_local(
  File "/root/.local/lib/python3.9/site-packages/spikeinterface/sorters/runsorter.py", line 276, in run_sorter_local
    SorterClass.run_from_folder(folder, raise_error, verbose)
  File "/root/.local/lib/python3.9/site-packages/spikeinterface/sorters/basesorter.py", line 301, in run_from_folder
    raise SpikeSortingError(
spikeinterface.sorters.utils.misc.SpikeSortingError: Spike sorting error trace:
Traceback (most recent call last):
  File "/root/.local/lib/python3.9/site-packages/spikeinterface/sorters/basesorter.py", line 261, in run_from_folder
    SorterClass._run_from_folder(sorter_output_folder, sorter_params, verbose)
  File "/root/.local/lib/python3.9/site-packages/spikeinterface/sorters/external/kilosortbase.py", line 217, in _run_from_folder
    raise Exception(f"{cls.sorter_name} returned a non-zero exit code")
Exception: kilosort2 returned a non-zero exit code

Spike sorting failed. You can inspect the runtime trace in /sjwlab/bxy/spikeinterface-main/examples/get_started/kilosort2_output/spikeinterface_log.json.

My code is:

global_job_kwargs = dict(n_jobs=1, chunk_duration="1s")
si.set_global_job_kwargs(**global_job_kwargs)

# note: backslashes in a Windows path literal must be escaped; a trailing "\" before the closing quote is a syntax error
data_location = "D:\\sjwlab\\0914\\"
data_name = "data.raw.h5"
recording = se.read_maxwell(data_location + data_name)
print("Recordings")
print(recording)

channel_ids = recording.get_channel_ids()
fs = recording.get_sampling_frequency()
num_chan = recording.get_num_channels()
num_seg = recording.get_num_segments()

probe = recording.get_probe()
print(probe)
recording_cmr = recording

recording_f = si.bandpass_filter(recording, freq_min=300, freq_max=6000)
print(recording_f)
recording_cmr = si.common_reference(recording_f, reference='global', operator='median')
print(recording_cmr)

# this computes and saves the recording after applying the preprocessing chain
recording_preprocessed = recording_cmr.save(format='binary')
print(recording_preprocessed)
print('Available sorters', ss.available_sorters())
print('Installed sorters', ss.installed_sorters())

sorting_KS2 = ss.run_sorter(sorter_name="kilosort2", recording=recording_preprocessed,
                            extra_requirements=["numpy==1.26"],
                            docker_image=True, verbose=True)
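One pitfall in the script above: a Windows path literal that ends in a backslash before the closing quote is a Python syntax error, and concatenating paths by hand is easy to get wrong. A minimal stdlib sketch (reusing the same path and file name as above) that sidesteps both:

```python
from pathlib import PureWindowsPath

# Join the data directory and file name without manual backslash handling.
# PureWindowsPath keeps Windows separators regardless of the host OS.
data_location = PureWindowsPath(r"D:\sjwlab\0914")
data_path = data_location / "data.raw.h5"
print(data_path)  # D:\sjwlab\0914\data.raw.h5
```

The raw-string form `r"D:\sjwlab\0914"` is legal because it does not end in a backslash; `PureWindowsPath` then takes care of the separator when joining.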

zm711 commented 2 months ago

This seems like a Kilosort error. Are you able to at least try this dataset on a local copy of Kilosort2? That would help us determine whether it is a spikeinterface issue. Alternatively, you could try sorting with a non-Matlab sorter (for example tridesclous2, SC2, or MS5) so we can see whether the issue is with the dataset itself.
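A hedged sketch of that suggestion (the names below are the identifiers spikeinterface registers for these pure-Python sorters; the actual run is commented out because it requires spikeinterface plus each sorter's own dependencies, and `recording_preprocessed` from the original post):

```python
# Sketch: try a pure-Python sorter to rule out a problem with the dataset itself.
candidate_sorters = ["tridesclous2", "spykingcircus2", "mountainsort5"]

# import spikeinterface.sorters as ss
# for name in candidate_sorters:
#     if name in ss.installed_sorters():
#         sorting = ss.run_sorter(sorter_name=name,
#                                 recording=recording_preprocessed,
#                                 verbose=True)
#         break
print(candidate_sorters)
```

If a pure-Python sorter succeeds on the same recording, the failure is more likely on the Kilosort/Matlab side than in the data.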

DaohanZhang commented 1 month ago

I believe this error was reported in https://github.com/SpikeInterface/spikeinterface/issues/2400, but no clear resolution was posted. I hope this issue can be solved!

DaohanZhang commented 1 month ago

I have tried using a local copy of Kilosort but got the same result:

import numpy as np
import matplotlib.pyplot as plt
import spikeinterface.full as si  # imports the full package
import spikeinterface.extractors as se
import spikeinterface.preprocessing as spre
import spikeinterface.sorters as ss
import spikeinterface.postprocessing as spost
import spikeinterface.qualitymetrics as sqm
import spikeinterface.comparison as sc
import spikeinterface.exporters as sexp
import spikeinterface.curation as scur
import spikeinterface.widgets as sw
from spikeinterface.sortingcomponents.peak_detection import detect_peaks
from spikeinterface.sortingcomponents.peak_localization import localize_peaks
import sys
sys.path.append('/share/home/zhangdaohan20h/dredge-main/python')
from dredge.dredge_ap import register
from dredge.dredge_lfp import register_online_lfp
# this has some helpers for plotting
import dredge.motion_util as mu
print('#')

cutoff_um = None
lfprec = se.read_spikeglx("/home/zhangdaohan20h/public_data/NPX_examples/Pt01/", load_sync_channel=False, 
                          stream_id="imec0.lf")
# convert to floating point
lfprec = si.astype(lfprec, np.float32)
# ensure depth order
lfprec = si.depth_order(lfprec)
# optional: remove channels outside the brain
# you could use similar logic to extract a single column
# or trim both ends of the probe, whatever you like
cutoff_um = 8000
if cutoff_um is not None:
    geom = lfprec.get_channel_locations()
    lfprec = lfprec.remove_channels(lfprec.channel_ids[geom[:, 1] > cutoff_um])

# bandpass filter
# we do an aggressive one since we plan to downsample
lfprec = si.bandpass_filter(
    lfprec,
    freq_min=0.5,
    freq_max=250,
    margin_ms=1000,
    filter_order=3,
    dtype="float32",
    add_reflect_padding=True,
)
# fancy bad channels detection and removal from the International Brain Lab
bad_chans, labels = si.detect_bad_channels(lfprec, psd_hf_threshold=1.4, num_random_chunks=100, seed=0)
print("Found bad channels", bad_chans)
lfprec = lfprec.remove_channels(bad_chans)
# correct for ADC sample shifts
lfprec = si.phase_shift(lfprec)
# common median reference
lfprec = si.common_reference(lfprec)
# downsample to 250Hz
lfprec = si.resample(lfprec, 250, margin_ms=1000)
# spatial filters: second derivative and averaging same-depth channels
lfprec = si.directional_derivative(lfprec, order=2, edge_order=1)
lfprec = si.average_across_direction(lfprec)

t_start = 235
t_end = 295

lfprec = lfprec.frame_slice(
    start_frame=int(t_start * lfprec.sampling_frequency),
    end_frame=int(t_end * lfprec.sampling_frequency),
)

from dredge.dredge_ap import register
from dredge.dredge_lfp import register_online_lfp
# this has some helpers for plotting
import dredge.motion_util as mu

me_rigid, extra_info_rigid = register_online_lfp(lfprec, max_disp_um=1000)

# Load the recording  
#from spikeinterface.core import BinaryFolderRecording  
#rec = si.read_zarr('recording.zarr')  
rec = se.read_spikeglx("/home/zhangdaohan20h/public_data/NPX_examples/Pt01", 
                        load_sync_channel=False, stream_id="imec0.ap")
rec = si.astype(rec, np.float32)
rec = si.bandpass_filter(rec)
rec = si.common_reference(rec)

rec = rec.frame_slice(
    start_frame=int(t_start * rec.sampling_frequency),
    end_frame=int(t_end * rec.sampling_frequency),
)

#in dredge:
me_rigid_upsampled = mu.resample_to_new_time_bins(me_rigid, rec.get_times(0))
print(len(me_rigid_upsampled.displacement))

from spikeinterface.sortingcomponents.motion import estimate_motion, interpolate_motion
motion_si = mu.motion_estimate_to_spikeinterface_motion(me_rigid_upsampled)
rec = interpolate_motion(rec, motion_si)

# Save the recording  
# rec.save(folder='recording', format='zarr',overwrite=True, engine='joblib', engine_kwargs={"n_jobs": 20},)  #binary_folder

sorting_KS3 = ss.run_sorter(sorter_name="kilosort3", recording=rec, docker_image=False, remove_existing_folder=True, delete_output_folder = False, n_jobs = 20, verbose=True)

My cmd window↓:

[zhangdaohan20h@gpu01 MGH01KS3]$ newgrp docker
[zhangdaohan20h@gpu01 MGH01KS3]$ conda activate kilosort4
(kilosort4) [zhangdaohan20h@gpu01 MGH01KS3]$ python MGH01KS3.py
#
/home/zhangdaohan20h/public_data/NPX_examples/Pt01/Pt01_g0_t0.imec0.ap.meta
/home/zhangdaohan20h/public_data/NPX_examples/Pt01/Pt01_g0_t0.imec0.lf.meta
Found bad channels ['imec0.lf#LF191']
Online chunks [10.0s each]: 100%|███████████████████████████████████████████████████████████████████| 5/5 [00:18<00:00,  3.66s/it]
/home/zhangdaohan20h/public_data/NPX_examples/Pt01/Pt01_g0_t0.imec0.ap.meta
/home/zhangdaohan20h/public_data/NPX_examples/Pt01/Pt01_g0_t0.imec0.lf.meta
1800000
motion.dim
1
write_binary_recording: 100%|███████████████████████████████████████████████████████████████████| 60/60 [1:12:13<00:00, 72.23s/it]
fatal: not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
Usage: which [options] [--] COMMAND [...]
Write the full path of COMMAND(s) to standard output.

  --version, -[vV] Print version and exit successfully.
  --help,          Print this help and exit successfully.
  --skip-dot       Skip directories in PATH that start with a dot.
  --skip-tilde     Skip directories in PATH that start with a tilde.
  --show-dot       Don't expand a dot to current directory in output.
  --show-tilde     Output a tilde for HOME directory for non-root.
  --tty-only       Stop processing options on the right if not on tty.
  --all, -a        Print all matches in PATH, not just the first
  --read-alias, -i Read list of aliases from stdin.
  --skip-alias     Ignore option --read-alias; don't read stdin.
  --read-functions Read shell functions from stdin.
  --skip-functions Ignore option --read-functions; don't read stdin.

Recommended use is to write the output of (alias; declare -f) to standard
input, so that which can show aliases and shell functions. See which(1) for
examples.

If the options --read-alias and/or --read-functions are specified then the
output can be a full alias or function definition, optionally followed by
the full path of each command used inside of those.

Report bugs to <which-bugs@gnu.org>.
RUNNING SHELL SCRIPT: /share/home/zhangdaohan20h/CODES/NPX/preanalysis/MGH01KS3/kilosort3_output/sorter_output/run_kilosort3.sh

                                                     < M A T L A B (R) >

                                           Copyright 1984-2020 The MathWorks, Inc.

                                           R2020a (9.8.0.1323502) 64-bit (glnxa64)

                                                      February 25, 2020

Warning: Name is nonexistent or not a directory: /tmp/Editor_vxdxq 

To get started, type doc.

For product information, visit www.mathworks.com.

Time   0s. Computing whitening matrix.. 

Getting channel whitening matrix... 

Warning: The CUDA driver must recompile the GPU libraries because your device is more recent than the libraries. Recompiling can

take several minutes. 

> In get_whitening_matrix (line 25)

  In preprocessDataSub (line 62)

  In kilosort3_master (line 71) 

Channel-whitening matrix computed. 

Time 738s. Loading raw data and applying filters... 

Time 759s. Finished preprocessing 28 batches. 

Drift correction ENABLED

vertical pitch size is 20 

horizontal pitch size is 32 

     0    16    32    48

   674

----------------------------------------Undefined function 'spikedetector3' for input arguments of type 'gpuArray'.
Error running kilosort3
Traceback (most recent call last):
  File "/share/home/zhangdaohan20h/CODES/NPX/preanalysis/MGH01KS3/MGH01KS3.py", line 112, in <module>
    sorting_KS3 = ss.run_sorter(sorter_name="kilosort3", recording=rec, docker_image=False, remove_existing_folder=True, delete_output_folder = False, n_jobs = 20, verbose=True)
  File "/home/zhangdaohan20h/.conda/envs/kilosort4/lib/python3.9/site-packages/spikeinterface/sorters/runsorter.py", line 216, in run_sorter
    return run_sorter_local(**common_kwargs)
  File "/home/zhangdaohan20h/.conda/envs/kilosort4/lib/python3.9/site-packages/spikeinterface/sorters/runsorter.py", line 276, in run_sorter_local
    SorterClass.run_from_folder(folder, raise_error, verbose)
  File "/home/zhangdaohan20h/.conda/envs/kilosort4/lib/python3.9/site-packages/spikeinterface/sorters/basesorter.py", line 301, in run_from_folder
    raise SpikeSortingError(
spikeinterface.sorters.utils.misc.SpikeSortingError: Spike sorting error trace:
Traceback (most recent call last):
  File "/home/zhangdaohan20h/.conda/envs/kilosort4/lib/python3.9/site-packages/spikeinterface/sorters/basesorter.py", line 261, in run_from_folder
    SorterClass._run_from_folder(sorter_output_folder, sorter_params, verbose)
  File "/home/zhangdaohan20h/.conda/envs/kilosort4/lib/python3.9/site-packages/spikeinterface/sorters/external/kilosortbase.py", line 217, in _run_from_folder
    raise Exception(f"{cls.sorter_name} returned a non-zero exit code")
Exception: kilosort3 returned a non-zero exit code

Spike sorting failed. You can inspect the runtime trace in /share/home/zhangdaohan20h/CODES/NPX/preanalysis/MGH01KS3/kilosort3_output/spikeinterface_log.json.

alejoe91 commented 1 month ago

Did you compile Kilosort CUDA mex files?

DaohanZhang commented 1 month ago

Warning: The CUDA driver must recompile the GPU libraries because your device is more recent than the libraries. Recompiling can take several minutes. 

Does this mean the recompilation finished automatically?

alejoe91 commented 1 month ago

Not sure what you mean... did you follow the installation instructions here? https://github.com/MouseLand/Kilosort/tree/v2.5.2

DaohanZhang commented 1 month ago

Sorry for the lack of clarity! I mean in my command prompt window, I noticed a message that suggests recompilation may have already taken place. However, I may be misunderstanding the situation. So I rechecked the installation instructions, and I found a specific statement that says, "You must run and complete successfully mexGPUall.m in the CUDA folder." I ran the mexGPUall.m script, but it appeared that nothing significant happened (as shown below). Is this normal? If not, could you please advise me on the next steps I should take? Thanks!

[zhangdaohan20h@gpu01 ~]$ cd  /home/zhangdaohan20h/Kilosort-kilosort3/CUDA    
[zhangdaohan20h@gpu01 CUDA]$ matlab mexGPUall.m
MATLAB is selecting SOFTWARE OPENGL rendering.

                                               < M A T L A B (R) >
                                     Copyright 1984-2020 The MathWorks, Inc.
                                     R2020a (9.8.0.1323502) 64-bit (glnxa64)
                                                February 25, 2020

Warning: Name is nonexistent or not a directory: /tmp/Editor_vxdxq 

To get started, type doc.
For product information, visit www.mathworks.com.

>> doc
Error using doc (line 49)
DOC is not currently available.

>> 
>> 
Warning: The CUDA driver must recompile the GPU libraries because your device is more recent than the libraries. Recompiling can take several minutes. 

Does this mean the recompilation finished automatically?

alejoe91 commented 1 month ago

You need to first open Matlab:

>>> matlab

Then when Matlab opens you have to run the other command:

mexGPUall
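As a hedged sketch of those two steps done non-interactively from a cluster shell (the CUDA folder path is the one from the session above; MATLAB R2019a and later support the `-batch` flag, which runs a command and exits):

```python
import subprocess

# Hypothetical non-interactive equivalent of "open Matlab, then run mexGPUall".
# -nodisplay avoids the GUI; -batch runs the given command and exits with
# MATLAB's status code. Requires MATLAB on PATH to actually execute.
cmd = [
    "matlab", "-nodisplay", "-batch",
    "cd('/home/zhangdaohan20h/Kilosort-kilosort3/CUDA'); mexGPUall",
]
print(" ".join(cmd))
# Uncomment to actually run:
# subprocess.run(cmd, check=True)
```

Passing the script name as a plain argument (`matlab mexGPUall.m`, as in the session above) does not run it; the command must be given via `-batch` or typed at the MATLAB prompt.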

DaohanZhang commented 1 month ago

Fine! And here is the error raised by running mexGPUall:

>> mexGPUall
Building with 'nvcc'.
Error using mex
In file included from /share/apps/imaging/MATLAB-2020a/sys/cuda/glnxa64/cuda/include/cuda_runtime.h:83,
                 from <command-line>:
/share/apps/imaging/MATLAB-2020a/sys/cuda/glnxa64/cuda/include/crt/host_config.h:129:2: error: #error -- unsupported GNU
version! gcc versions later than 8 are not supported!
  129 | #error -- unsupported GNU version! gcc versions later than 8 are not supported!
      |  ^~~~~

Error in mexcuda (line 166)
    [varargout{1:nargout}] = mex(mexArguments{:});

Error in mexGPUall (line 7)
    mexcuda -largeArrayDims spikedetector3.cu

>> 

So I checked my gcc version. Unluckily, it is well beyond 8.

(kilosort4) [zhangdaohan20h@gpu01 MGH01KS3]$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/share/apps/cluster/gcc-10.3/libexec/gcc/x86_64-pc-linux-gnu/10.3.0/lto-wrapper
Target: x86_64-pc-linux-gnu
Configured with: ../gcc-10.3.0/configure --prefix=/share/apps/cluster/gcc-10.3 --disable-multilib --enable-languages=c,c++,fortran
Thread model: posix
Supported LTO compression algorithms: zlib
gcc version 10.3.0 (GCC) 

What should I do next? Maybe KS3 or KS2.5 is not usable unless I downgrade my gcc version?
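For reference, a small stdlib sketch (the function names are mine, not from any library) that parses `gcc -dumpversion`-style output and flags compilers newer than what this MATLAB release's bundled CUDA headers accept (gcc > 8, per the error above):

```python
import re
import shutil
import subprocess

MAX_SUPPORTED_GCC = 8  # limit reported by MATLAB R2020a's bundled CUDA headers

def parse_gcc_major(version_string):
    """Extract the major version from output like '10.3.0'; None if unparseable."""
    m = re.match(r"(\d+)", version_string.strip())
    return int(m.group(1)) if m else None

def gcc_is_supported(gcc="gcc"):
    """True/False for the given compiler, or None when it is not on PATH."""
    if shutil.which(gcc) is None:
        return None
    out = subprocess.run([gcc, "-dumpversion"],
                         capture_output=True, text=True).stdout
    major = parse_gcc_major(out)
    return major is not None and major <= MAX_SUPPORTED_GCC

print(parse_gcc_major("10.3.0"))  # 10
```

On a cluster, a common workaround is pointing `mexcuda` at an older gcc if one is available as a module, rather than downgrading the system compiler.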

alejoe91 commented 1 month ago

Indeed. Have you tried running it in Docker?
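For completeness, a hedged sketch of the Docker route (assuming Docker and spikeinterface's container support are installed, and `rec` is the preprocessed recording from the script above). With `docker_image=True`, spikeinterface pulls a registry image containing the compiled sorter, so the local gcc/MATLAB toolchain is never used:

```python
# Sketch only: the actual call is commented out because it requires Docker,
# spikeinterface, and a recording `rec` to be available.
sorter_kwargs = dict(
    sorter_name="kilosort3",
    docker_image=True,   # pull the default kilosort3-compiled image
    verbose=True,
)
# import spikeinterface.sorters as ss
# sorting_KS3 = ss.run_sorter(recording=rec, **sorter_kwargs)
print(sorter_kwargs["sorter_name"])
```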

DaohanZhang commented 1 month ago

Oh, I have tried that. But I ran into a problem with dev_mode that stopped me from installing spikeinterface in the container from my own folder. Please see https://github.com/SpikeInterface/spikeinterface/issues/3325