flatironinstitute / mountainsort5

MountainSort spike sorting algorithm, version 5
Apache License 2.0

Memory errors constantly kill my job #35

Open rtraghavan opened 6 months ago

rtraghavan commented 6 months ago

I'm trying to sort a Neuropixels array recording with MountainSort 5, and the job keeps getting killed. The recording is ~3 hours long, and I am using the parameters below. The machine in question has 128 GB of memory. Any suggestions? I have already brought the radii down and lowered classification_chunk_sec; is there anything I can do to reduce memory use further? Perhaps bring the block duration down even more?

    sorting_params = {}
    sorting_params["max_num_snippets_per_training_batch"] = 1000
    sorting_params["snippet_mask_radius"] = 60
    sorting_params["phase1_npca_per_channel"] = 3
    sorting_params["phase1_npca_per_subdivision"] = 10
    sorting_params["classifier_npca"] = 10
    sorting_params["detect_channel_radius"] = 60
    sorting_params["phase1_detect_channel_radius"] = 60
    sorting_params["training_recording_sampling_mode"] = "uniform"
    sorting_params["training_duration_sec"] = 60
    sorting_params["phase1_detect_threshold"] = 5.5
    sorting_params["detect_threshold"] = 5.25
    sorting_params["snippet_T1"] = 15
    sorting_params["snippet_T2"] = 40
    sorting_params["detect_sign"] = 0
    sorting_params["phase1_detect_time_radius_msec"] = 0.5
    sorting_params["detect_time_radius_msec"] = 0.5
    sorting_params["classification_chunk_sec"] = 100

    sorting = ms5.sorting_scheme3(
        recording=ret,
        sorting_parameters=ms5.Scheme3SortingParameters(
            # the dict must be unpacked with ** (not *)
            block_sorting_parameters=ms5.Scheme2SortingParameters(**sorting_params),
            block_duration_sec=60 * 5,  # 5-minute blocks
        ),
    )

The error message is below.

    Detected 2594233 spikes
    MS5 Elapsed time for detect_spikes: 7659.681 seconds
    Removing duplicate times
    MS5 Elapsed time for remove_duplicate_times: 0.032 seconds
    Extracting 1262530 snippets
    MS5 Elapsed time for extract_snippets: 69.843 seconds
    Computing PCA features with npca=1152
    Killed
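For scale, the snippet array alone at the step that gets killed is already enormous. Rough arithmetic below, assuming float32 samples and all 384 channels of a Neuropixels 1.0 probe (the npca=1152 in the log is consistent with phase1_npca_per_channel=3 times 384 channels):

    # Back-of-the-envelope size of the extracted snippet array at the step
    # that was killed. Assumes float32 samples and all 384 channels of a
    # Neuropixels 1.0 probe; the counts are taken from the log above.
    n_snippets = 1_262_530      # "Extracting 1262530 snippets"
    n_samples = 15 + 40         # snippet_T1 + snippet_T2
    n_channels = 384
    gb = n_snippets * n_samples * n_channels * 4 / 1e9
    print(f"~{gb:.0f} GB")      # ~107 GB, before PCA even starts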

jbmelander commented 6 months ago

Try raising the detect thresholds to 7, and dropping the number of snippets to 500.
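In terms of the dict from your post, that's just the following (values are a starting point to tune, not a hard recommendation):

    # The two changes suggested above, applied to the sorting_params dict
    # from the original post.
    sorting_params["phase1_detect_threshold"] = 7.0
    sorting_params["detect_threshold"] = 7.0
    sorting_params["max_num_snippets_per_training_batch"] = 500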

Three hours is quite long, but at this stage the memory problems come from the huge number of spikes you are detecting at the classifier training stage.

I have been experimenting with scheme 3 for very long recordings as a way around memory limits, but as of now my best advice is to tweak parameters until everything fits.

Are you using Neuropixels 1.0 or 2.0?

jbmelander commented 6 months ago

@rtraghavan

A few other thoughts:

    sorting_params = {}
    sorting_params["max_num_snippets_per_training_batch"] = 500
    sorting_params["snippet_mask_radius"] = 30
    sorting_params["phase1_npca_per_channel"] = 3
    sorting_params["phase1_npca_per_subdivision"] = 3
    sorting_params["classifier_npca"] = 3
    sorting_params["detect_channel_radius"] = 30
    sorting_params["phase1_detect_channel_radius"] = 30
    sorting_params["training_recording_sampling_mode"] = "uniform"
    sorting_params["training_duration_sec"] = 150
    sorting_params["phase1_detect_threshold"] = 6.5
    sorting_params["detect_threshold"] = 6.0
    sorting_params["snippet_T1"] = 15
    sorting_params["snippet_T2"] = 35
    sorting_params["detect_sign"] = -1
    sorting_params["phase1_detect_time_radius_msec"] = 0.5
    sorting_params["detect_time_radius_msec"] = 0.5
    sorting_params["classification_chunk_sec"] = 100

rtraghavan commented 6 months ago

Reducing the training batch size and raising the detect thresholds did not work. I'm repeating the process with the parameters you suggested and even smaller batch sizes (1 minute). At that point, though, I think it may make more sense to sort 30-minute chunks with scheme 2, with 15-minute overlap between chunks, and then match units across the overlap (sketched below), which is what I did before this latest release.
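For reference, the chunked scheme-2 approach looks roughly like this. It is only a sketch: it assumes `rec` is a spikeinterface recording and `sorting_params` is the dict from my first post, and the cross-chunk unit matching has to be done separately afterwards; mountainsort5 does not do it.

    import mountainsort5 as ms5

    # Sort 30-minute chunks with a 15-minute hop (i.e., 15-minute overlap)
    # using scheme 2. Assumes `rec` is a spikeinterface recording and
    # `sorting_params` is the parameter dict from earlier in the thread.
    fs = rec.get_sampling_frequency()
    chunk_frames = int(30 * 60 * fs)  # 30-minute chunks
    hop_frames = int(15 * 60 * fs)    # 15-minute hop
    n_frames = rec.get_num_frames()

    chunk_sortings = []
    start = 0
    while start < n_frames:
        end = min(start + chunk_frames, n_frames)
        sub = rec.frame_slice(start_frame=start, end_frame=end)
        chunk_sortings.append(ms5.sorting_scheme2(
            recording=sub,
            sorting_parameters=ms5.Scheme2SortingParameters(**sorting_params),
        ))
        start += hop_frames
    # Units then need to be matched across the overlapping halves of
    # adjacent chunks (e.g., by comparing templates); not part of ms5.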

These are Neuropixels 1.0 probes. I'm curious to know what @magland thinks.

jbmelander commented 6 months ago

Let me know if you want to jump on a Zoom call and talk through some points we could improve for Neuropixels-specific use cases.

Email is melander at stanford dot edu

rtraghavan commented 6 months ago

Luckily, given the changes you suggested, things have improved substantially. I'm waiting for the sort to complete fully to inspect the actual identified units. I'll post the results later.

jbmelander commented 6 months ago

Good to hear! One question: when you mention batch size, do you mean classification_chunk_sec? Or are you referring to something in scheme 3?

magland commented 6 months ago

Sorry for being late to this thread. Thanks @jbmelander for your guidance. It all looks good to me.

Regarding scheme 3, I have only tested it on one dataset so far. My hope is that it can be improved over time to be useful for 24+ hour recordings.

@jbmelander I agree with you that it can be helpful to restrict to a subset of good channels when that makes sense.
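On the spikeinterface side, that restriction is a one-liner before calling ms5. A sketch, where `bad_ids` is a hypothetical list of channel ids you have identified yourself:

    # Drop known-bad channels before sorting. `rec` is a spikeinterface
    # recording; `bad_ids` is a hypothetical list of noisy/dead channel ids.
    good_ids = [ch for ch in rec.get_channel_ids() if ch not in bad_ids]
    rec_good = rec.channel_slice(channel_ids=good_ids)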

jbmelander commented 6 months ago

@magland @rtraghavan I think there are a few fixes that would make scheme 3 pretty powerful. As it is now, it seems like a great idea and is well implemented, but I do get results that somewhat overestimate the number of clusters. If anyone is interested in making a plan for developing scheme 3, I'm in.

magland commented 6 months ago

> If anyone is interested in making a plan for developing scheme 3, I'm in.

Sure, do you want to start a new gh issue thread?

jbmelander commented 6 months ago

Yeah, I will do so later today



magland commented 6 months ago

> Luckily, given the changes you suggested, things have improved substantially. I'm waiting for the sort to complete fully to inspect the actual identified units. I'll post the results later.

@jbmelander @rtraghavan

Looking back at this thread, I'm curious whether some of the details about which parameters were changed ended up in a Zoom call. If so, it would be helpful if you could include some of that info here. Thanks.

jbmelander commented 6 months ago

We haven't met yet, but I will make a new issue once we have.