SpikeInterface / spikeinterface

A Python-based module for creating flexible and robust spike sorting pipelines.
https://spikeinterface.readthedocs.io
MIT License

Memory allocation in run_sorter #1630

Closed timsainb closed 4 months ago

timsainb commented 1 year ago

Hey All,

I recently started using spikeinterface again after about a year away.

In the previous version, you would pass parameters such as "n_jobs_bin" and "total_memory" directly to run_sorters.

In the new version, we do something like:

global_job_kwargs = dict(n_jobs=5, total_memory="1000M", progress_bar=True)
set_global_job_kwargs(**global_job_kwargs)

Then I run

sorting = ss.run_kilosort2_5(
    recording,
    output_folder=f'/tmp/{now_string}',
    remove_existing_folder=True,
    verbose=True,
    n_jobs=5,
    **{
        "detect_threshold": 5,  # "Threshold for spike detection",
        "projection_threshold": [9, 4],  # "Threshold on projections",
        "preclust_threshold": 7,  # "Threshold crossings for pre-clustering (in PCA projection space)",
        "car": False,  # "Enable or disable common reference",
        "minFR": 0.1,  # "Minimum spike rate (Hz), if a cluster falls below this for too long it gets removed",
        "minfr_goodchannels": 0.1,  # "Minimum firing rate on a 'good' channel",
        "nblocks": 5,  # "blocks for registration. 0 turns it off, 1 does rigid registration. Replaces 'datashift' option.",
        "sig": 20,  # "spatial smoothness constant for registration",
        "freq_min": 150,  #  "High-pass filter cutoff frequency",
        "sigmaMask": 30,  # "Spatial constant in um for computing residual variance of spike",
        "nPCs": 3,  # "Number of PCA dimensions",
        "ntbuff": 64,  # "Samples of symmetrical buffer for whitening and spike detection",
        "nfilt_factor": 6,  # "Max number of clusters per good channel (even temporary ones) 4",
        "do_correction": True,  # "If True drift registration is applied",
        "NT": 64*1024*64, #None,  # "Batch size (if None it is automatically computed)",
        "wave_length": 61,  # "size of the waveform extracted around each detected peak, (Default 61, maximum 81)",
        "keep_good_only": False,  #  "If True only 'good' units are returned",
    },
)

This doesn't seem to work for me, though. For example, with the above parameters, I get the following:

[figure: RAM usage over time; x-axis is seconds, y-axis is RAM usage in MB]

Figure: I ran two sorts, one on a 1.5 hr dataset and one on a 3.5 hr dataset. I had an allocation of 20 GB of RAM, and the second sort crashed. The initial jumps in RAM usage are spikeinterface writing the data file; then you see Kilosort 2.5 running. Both the writer and the Kilosort job use well above the allotted RAM.

I tried bumping the NT batch size down to 8*1024*64 (1/8 the size) with the same larger dataset and got only a slight reduction in RAM usage:

[figure: same plot as above; this time RAM usage stayed just below the allotted 20 GB and the sort finished running]

So my question is, how can I control the RAM utilization better?
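For reference, my rough mental model of how the total_memory budget turns into a per-worker chunk size is something like the following (a sketch of my understanding, not SpikeInterface's actual code; `chunk_frames` is a hypothetical helper):

```python
# Rough sketch (not SpikeInterface's actual implementation) of how a
# total_memory budget could map to a per-worker chunk size in frames.
def chunk_frames(total_memory_bytes, n_jobs, num_channels, dtype_bytes):
    per_worker = total_memory_bytes // n_jobs     # each worker gets a share
    bytes_per_frame = num_channels * dtype_bytes  # one sample across channels
    return per_worker // bytes_per_frame

# e.g. total_memory="1000M", n_jobs=5, 384 channels of int16 traces
frames = chunk_frames(1000 * 1024**2, 5, 384, 2)
print(frames)  # → 273066
```

If that picture is right, the budget only bounds what the chunked writer holds at once, not what the sorter itself allocates.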

alejoe91 commented 1 year ago

Hi @timsainb

Welcome back! Can you try to profile the call to the save() function alone? Not sure this is related to Kilosort at all!

recording = recording.save(folder="some-test-folder", progress_bar=True, verbose=True)
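Something like this background sampler can show RSS over time around the call (a generic sketch, not a SpikeInterface utility; the lambda stands in for the `recording.save(...)` call, and note `ru_maxrss` is in KB on Linux but bytes on macOS):

```python
import threading
import time
import resource

def run_with_rss_samples(workload, interval=0.05):
    """Run workload() while a thread periodically samples peak RSS."""
    samples = []
    done = threading.Event()

    def sampler():
        while not done.is_set():
            samples.append(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
            time.sleep(interval)

    t = threading.Thread(target=sampler)
    t.start()
    try:
        workload()  # e.g. recording.save(folder="some-test-folder")
    finally:
        done.set()
        t.join()
    # take a final sample so very short workloads still record something
    samples.append(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
    return samples

# Stand-in workload: allocate ~20 MB, not a SpikeInterface call.
samples = run_with_rss_samples(lambda: bytearray(20 * 1024 * 1024))
print(f"{len(samples)} samples, peak {max(samples)}")
```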
h-mayorquin commented 1 year ago

I think this is related to:

https://github.com/SpikeInterface/spikeinterface/pull/1602

Or at least it should alleviate the writing data issue.

In the current implementation there is a cast that makes a copy of each chunk, doubling the memory usage per chunk. Could you try the branch above?
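To illustrate with a toy numpy example (not the actual writer code): `astype` always allocates a fresh array, never a view, so while the original chunk and the cast copy are both alive, that chunk's memory footprint doubles:

```python
import numpy as np

# Toy chunk of traces: 30000 frames x 64 channels of float32.
chunk = np.zeros((30000, 64), dtype=np.float32)

# astype returns a fresh allocation, so the original and the cast copy
# coexist in memory until one of them is released.
cast = chunk.astype(np.int32)

assert cast.base is None          # independent allocation, not a view
print(chunk.nbytes, cast.nbytes)  # → 7680000 7680000
```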

@timsainb How are you measuring RAM usage? And what is your recording format (i.e. SpikeGLX? Open Ephys?)?

zm711 commented 4 months ago

This one has been inactive for over a year. Did you still want to work on it, @h-mayorquin?

h-mayorquin commented 4 months ago

OP never answered what format they were using or whether they were applying preprocessing, so the information is insufficient to diagnose the issue. Let's close it; if OP is still interested and provides the information, the issue can be reopened.