flatironinstitute / mountainsort5

MountainSort spike sorting algorithm, version 5

How to limit number of cores to use? #5

Closed · danielpollak closed 1 year ago

danielpollak commented 1 year ago

I am using this repo on a strong computer with 64 cores, which ends up going really slowly because they all have to wait for each other, leading to >12 hr sorting times for a 45-minute NPIX recording. Is there a way to limit the number of cores used?

danielpollak commented 1 year ago

Here's the code I'm using:

# Only the imports actually used in the snippet below
import spikeinterface as si
import spikeinterface.extractors as se
import spikeinterface.preprocessing as spre

import mountainsort5 as ms5
from scipy.io import loadmat
import numpy as np
import pandas as pd

# Raw binary recording: 30 kHz sampling, 384 channels, int16 samples
path = 'path/to/continuous.dat'
recording = se.BinaryRecordingExtractor(path, 30_000, 384, 'int16')

# Bandpass filter (300-6000 Hz) and whiten before sorting
recording_filtered = spre.bandpass_filter(recording, freq_min=300, freq_max=6000)
recording_preprocessed: si.BaseRecording = spre.whiten(recording_filtered, dtype='float32')

# Load the Kilosort channel map and keep only the probe-geometry fields
ChanMap = loadmat('/home/user/git/Kilosort-2.5/configFiles/neuropixPhase3B2_kilosortChanMap.mat')
del ChanMap['__header__'], ChanMap['__version__']
del ChanMap['__globals__'], ChanMap['name'], ChanMap['chanMap'], ChanMap['connected']
probe_df = pd.DataFrame({key: val.flatten() for key, val in ChanMap.items()})
probe_df.columns = ['ch', 'shankInd', 'xcoords', 'ycoords']  # rename the remaining fields

# %%
# Attach the probe geometry, then keep every other channel (bad channels removed)
recording_preprocessed.set_channel_locations(probe_df[['xcoords', 'ycoords']].values)
recording_bad_removed = recording_preprocessed.channel_slice(np.arange(1, 384, 2))

# Use scheme 3: sorts the recording in blocks, running scheme 2 on each block
sorting = ms5.sorting_scheme3(
    recording=recording_bad_removed,
    sorting_parameters=ms5.Scheme3SortingParameters(
        block_sorting_parameters=ms5.Scheme2SortingParameters(
            phase1_detect_channel_radius=200,  # channel radius for phase-1 detection
            detect_channel_radius=50,          # channel radius for final detection
        ),
        block_duration_sec=60 * 1  # 1-minute blocks
    )
)
magland commented 1 year ago

Could you please explain what you mean by "which ends up going really slowly because they all have to wait for each other"?

danielpollak commented 1 year ago

Sorry, I shouldn't speak to something I don't fully understand. What I should say instead is that I ran similar sorting tasks on two computers, one with 64 cores and one with 8 cores. The 8-core machine took two hours, while the 64-core machine took >12 hr, so I was wondering if it is possible to specify the number of cores used.

magland commented 1 year ago

I don't think the larger core count could result in slower processing. I would suspect that something else is different between the two setups, or that the recordings differ (maybe one has more spikes).
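If it is a threading issue, one quick way to compare the two setups (a sketch using the third-party threadpoolctl package, nothing built into mountainsort5) would be to inspect the native thread pools numpy's BLAS backend creates on each machine:

import threadpoolctl

# List the BLAS/OpenMP thread pools visible to this process; by default their
# size scales with the core count, so the 64-core box may show much larger pools.
print(threadpoolctl.threadpool_info())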

(Right now there is no way to specify the number of cores used.)
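A generic workaround that may be worth trying (a sketch, assuming the slowdown comes from BLAS/OpenMP threads oversubscribing the 64 cores, which is not confirmed here) is to cap the thread pools with environment variables at the very top of the script, before numpy is imported:

import os

# Cap native thread pools before numpy/scipy load their BLAS backend.
# The value 8 is illustrative, not a recommendation.
os.environ['OMP_NUM_THREADS'] = '8'       # OpenMP-based BLAS builds
os.environ['OPENBLAS_NUM_THREADS'] = '8'  # OpenBLAS
os.environ['MKL_NUM_THREADS'] = '8'       # Intel MKL

import numpy as np  # import only after the caps are set

The same cap can also be applied at runtime by wrapping the sorting call in threadpoolctl's threadpool_limits(limits=8) context manager.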

danielpollak commented 1 year ago

I see. Thank you for your help!