Open VValentina86 opened 2 days ago
@VValentina86 Please start by uploading `kilosort4.log` from the results directory.

As for changing the data dtype, you should only do that if your data has a different dtype than the default (`int16`). Since you're recording with Neuropixels, that should only be the case if you're doing some of your own preprocessing prior to running Kilosort4 that would change the dtype.
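For reference, a quick sanity check on whether a raw binary file is consistent with `int16`: the file size should equal `n_channels * n_samples * itemsize`. A minimal sketch (the channel count and sampling rate below are typical Neuropixels AP-band values, not taken from this issue's file):

```python
import numpy as np

# Assumed recording parameters -- adjust to your own probe/config.
n_channels = 385            # 384 AP channels + 1 sync channel is common
fs = 30_000                 # Neuropixels AP-band sampling rate (Hz)
dtype = np.dtype("int16")   # Kilosort4's default data dtype (2 bytes/sample)

duration_s = 2 * 3600       # a 2 hr recording, as in this issue
expected_bytes = n_channels * duration_s * fs * dtype.itemsize
print(f"expected file size for int16: {expected_bytes / 1e9:.0f} GB")
```

If the file on disk is roughly double this, the data was likely saved as `float32` and the dtype should be set accordingly; if it matches, the `int16` default is fine.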
Describe the issue:
Hi, I have a 2 hr recording of 384 channels (1 probe, ~200 GB file size). I'm trying to run `run_kilosort` on this file (using `DEFAULT_SETTINGS`). During the "Computing drift correction" step ("Detecting spikes" sub-step), CPU RAM usage grows steadily, reaching ~120 GB at 40% progress through the routine. Can you please help me understand whether there's a memory leak, or why so much is being held in memory and not released?

I quickly tried calling `run_kilosort` with `clear_cache=True`, but the high RAM usage persisted. Even a smaller recording (~20 GB) used ~80 GB of RAM, so file size doesn't seem to be the issue.

I've seen this comment in #766, but unless `run_kilosort` calls `spikeinterface` internally, I'm not using `spikeinterface`. Based on this other comment in that thread, I'm happy to change the data `dtype` if that prevents high memory usage, though I'm not sure where to change it when using `run_kilosort`.

(Tagging my collaborator: @jeffjennings)
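For anyone with the same question about where the dtype goes: assuming the installed Kilosort 4 exposes a `data_dtype` keyword on `run_kilosort` (check your version's signature), the call would look roughly like this sketch — the paths and channel count are placeholders:

```python
def sort_recording(binary_path, results_dir):
    """Hypothetical wrapper sketch, not a confirmed Kilosort4 recipe."""
    # Import deferred so this sketch stays self-contained.
    from kilosort import run_kilosort, DEFAULT_SETTINGS

    settings = {**DEFAULT_SETTINGS, "n_chan_bin": 384}  # placeholder channel count
    return run_kilosort(
        settings=settings,
        filename=binary_path,
        results_dir=results_dir,
        data_dtype="int16",   # only change this if your file is NOT int16
        clear_cache=True,     # the option already tried above
    )
```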
Reproduce the bug:
My full script is:
Error message:
No response
Version information:
Python 3.11.10
Kilosort 4.0.20 (using CUDA)
Windows 10 (GPU: Nvidia RTX 3090)
CUDA 11.8.89