Open sushmitaS16 opened 2 months ago
Do you really have 115 cores? This seems like too many processes to spin off, which to my mind could explain part of the problem. Are you on a cluster?
As for your immediate error, you should do the following instead:
si.set_global_job_kwargs(**job_kwargs)
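A minimal sketch of that suggestion. The `set_global_job_kwargs` call and the `n_jobs`/`chunk_duration`/`progress_bar` keys are from SpikeInterface's job-handling API; the cap of 8 workers is an illustrative assumption, not a recommendation from the thread (the actual SpikeInterface call is commented out so the snippet runs without the library installed):

```python
import os

# Assumption: use far fewer workers than the ~144 cores on the cluster node;
# spawning one process per core is a plausible cause of the stall.
n_jobs = min(8, os.cpu_count() or 1)

# Typical job_kwargs accepted by SpikeInterface's parallel machinery.
job_kwargs = dict(n_jobs=n_jobs, chunk_duration="1s", progress_bar=True)

# import spikeinterface as si
# si.set_global_job_kwargs(**job_kwargs)  # note: unpack with **, per the fix above
print(job_kwargs)
```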
We've made some updates in 0.101.0 but I forget which ones made it into 0.100.8.
Hi, regarding the second error: run_sorter for spykingcircus2 takes the job_kwargs like this:
run_sorter("spykingcircus2", recording=recording, job_kwargs={"n_jobs":3})
That is, as a single parameter, not unpacked as **kwargs.
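To make the distinction concrete, here is a sketch contrasting the two call shapes. The `run_sorter` signature with a `job_kwargs` dict parameter is taken from the reply above; the `recording` object is assumed to exist, so the actual call is left commented:

```python
# Correct: job_kwargs is one dict-valued parameter of run_sorter.
job_kwargs = {"n_jobs": 3}

# from spikeinterface.sorters import run_sorter
# sorting = run_sorter("spykingcircus2", recording=recording, job_kwargs=job_kwargs)

# Incorrect (what raised the error): unpacking the dict into run_sorter,
# i.e. run_sorter("spykingcircus2", recording=recording, **job_kwargs),
# passes n_jobs as a top-level argument the sorter does not accept.
print(job_kwargs["n_jobs"])
```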
But the real question is why your processing is stalling.
Could you give us more information: how many cores and how much RAM your machine has, what your recording format is, and what pre-processing you are using?
Hello, @zm711. Yes, I am running this code on a cluster with 144 cores. I tried the way you mentioned but am still getting the same error.
@h-mayorquin Thanks for the syntax correction, I'll try that out. Following are the details that you requested:
Thanks
Let us know how it goes. Is the thing still hanging?
Hello, I am using SpikeInterface to analyze Open Ephys data, and it seems to get stuck at the run_sorter() command.
Exact situation: this command ran for a whole week with no further progress!
I tried the parallelization suggested in the UserWarning, but that too gives an AttributeError:
My system specs are as follows: Debian GNU/Linux 11 (bullseye) 5.10.0-29-amd64
Also, I am using Python v3.9 and SpikeInterface v0.100.8.
I would highly appreciate it if anyone could help me figure this out! Thanks.