SpikeInterface / spikeinterface

A Python-based module for creating flexible and robust spike sorting pipelines.
https://spikeinterface.readthedocs.io
MIT License

Optimizing `job_kwargs` / providing default `job_kwargs` #2232

Open zm711 opened 9 months ago

zm711 commented 9 months ago

Github Issues

Here is a non-exhaustive list of issues that either relate directly to job_kwargs (n_jobs being the most common culprit) or point to the potential benefit of additional guardrails in spikeinterface. I haven't directly linked any PRs in this section.

- #1063
- #2026
- #2029
- #2217
- #2202
- #1922
- #1845

Discussion Issues

To keep this issue manageable I'm only including two topics: how to optimize the kwargs, and n_jobs specifically.

Optimizing kwargs

It has come up on other occasions (the Cambridge Neurotech talk, for example) that people are unsure how to optimize the kwargs themselves. For example, they know they can change n_jobs to a different number, but they don't know how to pick an appropriate one. Or: how does chunk_size really affect things? Should the default help with small or big datasets, or do I need to set it based on my RAM? Part of this can be addressed by documentation, but the fact that people are still asking means either 1) the docs are unclear or 2) that part of the docs is hard to find.

1) Should this be explained better/made more visible in the docs (again, moved out of core and given its own section)?
2) Would it be beneficial to create a job_kwargs optimizer, as suggested in one of the issues (#2026), so that these values dynamically change based on the OS/computer?
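To make the chunk_size question concrete, here is a rough back-of-the-envelope memory estimate. This is only a sketch: `estimate_peak_memory` is a hypothetical helper (not SpikeInterface API), and the 2x `overhead` factor for preprocessing intermediates is an assumption, not a measured value.

```python
import numpy as np

def estimate_peak_memory(chunk_size, num_channels, dtype, n_jobs, overhead=2.0):
    """Rough upper bound on RAM used by chunked processing.

    Each worker holds one chunk of shape (chunk_size, num_channels);
    `overhead` is an assumed factor for temporary copies made during
    preprocessing (e.g. casting int16 traces to a float intermediate).
    """
    bytes_per_chunk = chunk_size * num_channels * np.dtype(dtype).itemsize
    return bytes_per_chunk * n_jobs * overhead

# e.g. 1 s chunks at 30 kHz, 384 channels, int16 traces, 8 workers
peak = estimate_peak_memory(30_000, 384, "int16", n_jobs=8)
print(f"~{peak / 1e9:.2f} GB peak")  # ~0.37 GB
```

A calculation like this is roughly what a job_kwargs optimizer would need to invert: given a RAM budget, solve for chunk_size and n_jobs.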

n_jobs

The default for this is n_jobs=-1, which means all available (logical) cores. As we began to discuss in #2218, it might be nice to change this default to something that gives the OS a little breathing room when doing multiprocessing. Heberto pointed out to me that both Intel and AMD do in fact have the logical-processor concept (I still need to test my Mac, but I think Macs do not). I'm not sure whether that actually influences this or not. If we set n_jobs=0.9 as @alejoe91 suggested, it should still leave at least one logical processor free for OS tasks, so I think it would be safer; but maybe it is better to leave a whole physical core free. That I'm not sure of. Unfortunately, os does not currently provide a way to distinguish logical from physical cores, so if the cutoff should be decided based on logical vs physical cores, it would require adding psutil to core.
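As a rough sketch of how a fractional n_jobs could be resolved into a worker count (`resolve_n_jobs` is a hypothetical helper, not SpikeInterface API; note that `os.cpu_count()` only reports logical cores, which is exactly the limitation mentioned above — physical-core counting would need psutil):

```python
import math
import os

def resolve_n_jobs(n_jobs):
    """Resolve an n_jobs spec to a concrete worker count.

    -1        -> all logical cores (the current default behaviour)
    0 < x < 1 -> that fraction of logical cores, e.g. 0.9 leaves
                 at least one core free for the OS
    int >= 1  -> used as-is, capped at the available cores
    """
    total = os.cpu_count() or 1
    if n_jobs == -1:
        return total
    if 0 < n_jobs < 1:
        return max(1, math.floor(n_jobs * total))
    return min(int(n_jobs), total)
```

On a 10-core machine, `resolve_n_jobs(0.9)` would give 9 workers, leaving one logical core for the OS.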

progress_bar

This is very small, but tqdm is not working on Windows, similar to what was seen in #2122.

samuelgarcia commented 9 months ago

Also, something I have had in mind for a long time:

We almost always do n_jobs=-1 and max_threads_per_process=1. This is not optimal in many cases.

Maybe n_jobs=0.25 and max_threads_per_process=4 would be more optimal. For a super powerful machine with 100 cores, the current default leads to 100 processes, which is very bad for performance. Maybe we could have a function that returns an optimal job_kwargs dict depending on the machine.

Something like: job_kwargs = get_optimal_job_kwargs(process_thread_ratio=0.25, average_memory_usage=0.25), or even better job_kwargs = get_optimal_job_kwargs(process_thread_ratio="auto"), where "auto" would take into account the number of physical cores, the total memory, ... Or maybe job_kwargs = get_optimal_job_kwargs(mode="many_core_few_io" / "few_core" / "burn_my_machine").
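A minimal sketch of what the mode-based variant could look like. The function name comes from the comment above, but everything else is assumed: the per-mode ratios are made-up placeholders, and the physical-core/total-memory detection that "auto" would need (e.g. via psutil) is omitted, using only the stdlib logical core count.

```python
import math
import os

# hypothetical mode -> fraction of logical cores used as processes;
# remaining parallelism goes to threads within each process
_MODES = {
    "few_core": 0.25,
    "many_core_few_io": 0.5,
    "burn_my_machine": 1.0,
}

def get_optimal_job_kwargs(mode="many_core_few_io"):
    """Sketch: pick a process/thread split for this machine."""
    total = os.cpu_count() or 1
    n_jobs = max(1, math.floor(total * _MODES[mode]))
    # spread leftover cores across threads, as in the n_jobs=0.25 /
    # max_threads_per_process=4 example above
    max_threads = max(1, total // n_jobs)
    return {"n_jobs": n_jobs, "max_threads_per_process": max_threads}

print(get_optimal_job_kwargs("burn_my_machine"))
```

On a 100-core machine this would give 25 processes x 4 threads for "few_core" instead of 100 single-threaded processes.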

What do you think?

zm711 commented 9 months ago

I think that would be great. I currently also need to change my n_jobs because my computer freaks out with -1, so I always just leave one core behind; but if there is a smart/auto way to do it, that would be perfect. And it would fix both issues (with the added bonus that if the winter is chilly I can just set the optimizer to "burn_my_machine").

As far as the process-to-thread ratio, I wouldn't have a clue what kind of ratio makes sense (your example has 1:4, so I would trust whatever you all think for that). But we are sometimes having memory errors, as mentioned in #1063, so checking total memory would be super beneficial.

I guess #1063 also mentions running out of storage (which is a big problem, since I can't currently switch drives easily), but I don't know whether checking storage would go here or would need to be a separate utility function.
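For the storage side at least, the stdlib already covers a basic guardrail without adding psutil. A sketch — the helper name and the 1.5x safety factor are illustrative, not anything that exists in spikeinterface:

```python
import shutil

def check_free_space(path, required_bytes, safety_factor=1.5):
    """Return True if the drive containing `path` has room for the
    planned output, with headroom so the job does not fill the disk."""
    free = shutil.disk_usage(path).free
    return free >= required_bytes * safety_factor

# e.g. before writing a ~50 GB preprocessed recording to the current drive
print(check_free_space(".", 50 * 1024**3))
```

Total-RAM checks, by contrast, have no stdlib equivalent, which is another argument for pulling in psutil.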

JoeZiminski commented 8 months ago

Just moving some of the conversation from #2029 here, some points that came up:

1) Expose and standardise the attribute names on preprocessors that determine the datatype used during preprocessing. E.g. phase_shift converts to float64 for the preprocessing step, held in a tmp_dtype variable, but this is not accessible from outside, so memory usage cannot be predicted.

2) Some optimisation of memory usage during preprocessing will be needed to predict memory usage accurately, since it is not currently a linear function of array dtype and size.

h-mayorquin commented 3 months ago

I am adding this to the discussion, as get_chunk_with_margin appears in a bunch of pipelines and causes memory spikes:

https://github.com/SpikeInterface/spikeinterface/issues/2859

h-mayorquin commented 3 months ago

apply_fshift_sam is also a big culprit for preprocessing blowing up memory:

https://github.com/SpikeInterface/spikeinterface/blob/cec72f40152b4e7d2e36b979b6769bbdb6d91c11/src/spikeinterface/preprocessing/phase_shift.py#L119-L141

samuelgarcia commented 3 months ago

clearly!