kshtjkumar opened 6 months ago
@kshtjkumar, which version are you using? This was an error on Windows computers that was fixed previously. Could you post the exact paths you're using? (Feel free to put ... in parts you want to edit out for privacy.) E.g.
recording = se.read_xx(r'E:\Users\...\experiment1\...\file.xx')
The current version in this notebook is 0.99.1.
I would update that to at least 0.100.x; I believe the Windows fix landed around that time. Otherwise you have to sort on the same drive where the data is, so you could test E: -> E:.
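As a concrete sketch of that same-drive workaround, with a hypothetical output path on E::

from pathlib import Path

# Hypothetical sketch: keep the sorter's working folder on E:, the same drive
# that holds the recording, so nothing has to be written across drives.
output_folder = Path(r"E:\sorting_output\mountainsort5_run")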
file2 = "E:\CEAF_PC_BACKUP\monica_EEG_GSstudy\L37_1_19_1_24_240120_102532\L37_1_19_1_24_240120_102532_merged.rhs"
#reader = se.read_intan(file, stream_name='RHD2000 amplifier channel', use_names_as_ids=True)
reader2 = se.read_intan(file2, stream_name='RHD2000 amplifier channel', use_names_as_ids=True)
recording_rhs = reader2  # recording file
print(reader2.channel_ids)
recording_rhs.annotate(is_filtered=False)
channel_ids = recording_rhs.get_channel_ids()
fs = recording_rhs.get_sampling_frequency()
num_chan = recording_rhs.get_num_channels()
num_segments = recording_rhs.get_num_segments()
print("Channel_ids = ", channel_ids)
print("Sampling_frequency = ", fs)
print("Number of Channels = ", num_chan)
print("Number of segments = ", num_segments)
print('Total_rec_duration = ', recording_rhs.get_total_duration())
#ecog = ['D-000', 'D-002', 'D-004', 'D-006']
ecog = ['B-000', 'B-002', 'B-004', 'B-006']  #,'D-000','D-002', 'D-004', 'D-006']
recording_rhs = recording_rhs.channel_slice(ecog)  # using only specific channels for sorting
I don't mind updating, but then I will have to change a lot of syntax and also the waveform extraction for the newer version.
For version 0.100.x you won't have to change anything; the syntax changes with version 0.101.0.
Hi, here is my reader path (in the code above); can you help with the changes?
My only recommendation for the path is that you should use a raw string instead. Backslashes in Windows paths can trigger escape sequences, so a raw string will protect you.
file2 = r"E:\CEAF_PC_BACKUP\monica_EEG_GSstudy\L37_1_19_1_24_240120_102532\L37_1_19_1_24_240120_102532_merged.rhs"
Notice the r for raw. This prevents any accidental escaping.
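To make the escaping issue concrete, here is a minimal illustration with a hypothetical path (sequences like \t and \n are valid Python escapes):

bad = "C:\temp\new_folder"    # "\t" becomes a tab and "\n" a newline
good = r"C:\temp\new_folder"  # raw string: backslashes are kept literally
print(bad)   # prints "C:", a tab, "emp", then "ew_folder" on the next line
print(good)  # prints C:\temp\new_folder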
Also, as an FYI, we've updated Intan for neo 13.1, so there is now a difference between the RHD and RHS amplifier streams. That code may break with a future version of neo, just in case you update.
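If you do update, a hedged sketch for keeping the reader code robust: list the stream names neo actually reports instead of hard-coding one (this assumes the get_neo_streams helper available in recent spikeinterface.extractors):

import spikeinterface.extractors as se

# Ask neo which streams this file exposes; with newer neo the RHS amplifier
# stream may no longer be named 'RHD2000 amplifier channel'.
stream_names, stream_ids = se.get_neo_streams("intan", file2)
print(stream_names)
reader2 = se.read_intan(file2, stream_name=stream_names[0], use_names_as_ids=True)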
Tried this, it didn't work; same mounting error.
After updating to spikeinterface 0.100.x?
Updated to 0.100.1; here is the error:
Exception in thread Thread-5:
Traceback (most recent call last):
  File "C:\Users\garim\.conda\envs\spike\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\process.py", line 323, in run
    self.terminate_broken(cause)
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\process.py", line 463, in terminate_broken
    work_item.future.set_exception(bpe)
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 561, in set_exception
    raise InvalidStateError('{}: {!r}'.format(self._state, self))
concurrent.futures._base.InvalidStateError: CANCELLED: <Future at 0x292d9d21f90 state=cancelled>
---------------------------------------------------------------------------
SpikeSortingError                         Traceback (most recent call last)
Cell In[4], line 6
      4 rec_ecog_ref = spre.common_reference(recording_notch_ecog, operator="median", reference="global")  # rereferencing the data
      5 output_folder = Path(r"C:\Users\garim\mountainsort5_output78558599")
----> 6 sorting_rec = ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder)
      7 print("Sorter found", len(sorting_rec.get_unit_ids()), "units")
      8 sorting_rec = sorting_rec.remove_empty_units()

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\runsorter.py:175, in run_sorter(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, docker_image, singularity_image, delete_container_files, with_output, **sorter_params)
    168     container_image = singularity_image
    169     return run_sorter_container(
    170         container_image=container_image,
    171         mode=mode,
    172         **common_kwargs,
    173     )
--> 175 return run_sorter_local(**common_kwargs)

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\runsorter.py:225, in run_sorter_local(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, with_output, **sorter_params)
    223 SorterClass.set_params_to_folder(recording, output_folder, sorter_params, verbose)
    224 SorterClass.setup_recording(recording, output_folder, verbose=verbose)
--> 225 SorterClass.run_from_folder(output_folder, raise_error, verbose)
    226 if with_output:
    227     sorting = SorterClass.get_result_from_folder(output_folder, register_recording=True, sorting_info=True)

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\basesorter.py:293, in BaseSorter.run_from_folder(cls, output_folder, raise_error, verbose)
    290     print(f"{sorter_name} run time {run_time:0.2f}s")
    292 if has_error and raise_error:
--> 293     raise SpikeSortingError(
    294         f"Spike sorting error trace:\n{log['error_trace']}\n"
    295         f"Spike sorting failed. You can inspect the runtime trace in {output_folder}/spikeinterface_log.json."
    296     )
    298 return run_time

SpikeSortingError: Spike sorting error trace:
Traceback (most recent call last):
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\basesorter.py", line 258, in run_from_folder
    SorterClass._run_from_folder(sorter_output_folder, sorter_params, verbose)
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\external\mountainsort5.py", line 191, in _run_from_folder
    recording_cached = create_cached_recording(
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\mountainsort5\util\create_cached_recording.py", line 18, in create_cached_recording
    si.BinaryRecordingExtractor.write_recording(
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\core\binaryrecordingextractor.py", line 148, in write_recording
    write_binary_recording(recording, file_paths=file_paths, dtype=dtype, **job_kwargs)
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\core\recording_tools.py", line 137, in write_binary_recording
    executor.run()
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\core\job_tools.py", line 401, in run
    for res in results:
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\tqdm\notebook.py", line 250, in __iter__
    for obj in it:
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\tqdm\std.py", line 1181, in __iter__
    for obj in iterable:
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\process.py", line 575, in _chain_from_iterable_of_lists
    for element in iterable:
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 621, in result_iterator
    yield _result_or_cancel(fs.pop())
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 319, in _result_or_cancel
    return fut.result(timeout)
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 458, in result
    return self.__get_result()
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
Spike sorting failed. You can inspect the runtime trace in C:\Users\garim\mountainsort5_output78558599/spikeinterface_log.json.
We fixed the drive error! So that's one down. This one is a multiprocessing error. What did you set n_jobs to? And did you download the most recent version, 0.100.6 for example?
Not sure what n_jobs is; I can try 0.100.6 too!
If you haven't messed with it, then it should default to 1, except in run_sorter where it defaults to all cores, which could be a problem. Try 0.100.6, and if that doesn't work then we may need to strategize a little bit. Windows is always a bit tricky to get working for these things, but we will do our best to figure this out!
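For reference, a minimal sketch of how to inspect those defaults in 0.100.x (assuming the global job-kwargs helpers in spikeinterface.core):

import spikeinterface.core as si

# Print the job defaults used by parallel steps such as writing the binary
# cache; on a fresh install n_jobs should be 1.
print(si.get_global_job_kwargs())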
This is after 0.100.6:
Exception in thread Thread-7:
Traceback (most recent call last):
  File "C:\Users\garim\.conda\envs\spike\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\process.py", line 323, in run
    self.terminate_broken(cause)
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\process.py", line 463, in terminate_broken
    work_item.future.set_exception(bpe)
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 561, in set_exception
    raise InvalidStateError('{}: {!r}'.format(self._state, self))
concurrent.futures._base.InvalidStateError: CANCELLED: <Future at 0x292dc8e4580 state=cancelled>
---------------------------------------------------------------------------
SpikeSortingError                         Traceback (most recent call last)
Cell In[9], line 6
      4 rec_ecog_ref = spre.common_reference(recording_notch_ecog, operator="median", reference="global")  # rereferencing the data
      5 output_folder = Path(r"C:\Users\garim\mountainsort5_output785585959")
----> 6 sorting_rec = ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder)
      7 print("Sorter found", len(sorting_rec.get_unit_ids()), "units")
      8 sorting_rec = sorting_rec.remove_empty_units()

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\runsorter.py:175, in run_sorter(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, docker_image, singularity_image, delete_container_files, with_output, **sorter_params)
    168     container_image = singularity_image
    169     return run_sorter_container(
    170         container_image=container_image,
    171         mode=mode,
    172         **common_kwargs,
    173     )
--> 175 return run_sorter_local(**common_kwargs)

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\runsorter.py:225, in run_sorter_local(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, with_output, **sorter_params)
    223 SorterClass.set_params_to_folder(recording, output_folder, sorter_params, verbose)
    224 SorterClass.setup_recording(recording, output_folder, verbose=verbose)
--> 225 SorterClass.run_from_folder(output_folder, raise_error, verbose)
    226 if with_output:
    227     sorting = SorterClass.get_result_from_folder(output_folder, register_recording=True, sorting_info=True)

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\basesorter.py:293, in BaseSorter.run_from_folder(cls, output_folder, raise_error, verbose)
    290     print(f"{sorter_name} run time {run_time:0.2f}s")
    292 if has_error and raise_error:
--> 293     raise SpikeSortingError(
    294         f"Spike sorting error trace:\n{log['error_trace']}\n"
    295         f"Spike sorting failed. You can inspect the runtime trace in {output_folder}/spikeinterface_log.json."
    296     )
    298 return run_time

SpikeSortingError: Spike sorting error trace:
Traceback (most recent call last):
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\basesorter.py", line 258, in run_from_folder
    SorterClass._run_from_folder(sorter_output_folder, sorter_params, verbose)
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\external\mountainsort5.py", line 191, in _run_from_folder
    recording_cached = create_cached_recording(
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\mountainsort5\util\create_cached_recording.py", line 18, in create_cached_recording
    si.BinaryRecordingExtractor.write_recording(
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\core\binaryrecordingextractor.py", line 148, in write_recording
    write_binary_recording(recording, file_paths=file_paths, dtype=dtype, **job_kwargs)
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\core\recording_tools.py", line 137, in write_binary_recording
    executor.run()
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\core\job_tools.py", line 401, in run
    for res in results:
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\tqdm\notebook.py", line 250, in __iter__
    for obj in it:
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\tqdm\std.py", line 1181, in __iter__
    for obj in iterable:
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\process.py", line 575, in _chain_from_iterable_of_lists
    for element in iterable:
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 621, in result_iterator
    yield _result_or_cancel(fs.pop())
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 319, in _result_or_cancel
    return fut.result(timeout)
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 458, in result
    return self.__get_result()
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
Spike sorting failed. You can inspect the runtime trace in C:\Users\garim\mountainsort5_output785585959/spikeinterface_log.json.
Does anything appear in your terminal? When mountainsort5 runs it usually starts printing stuff. Do you get to that point, or does it break earlier?
Okay cool, it is failing at the write_binary step. Could you try:
ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder, n_jobs=1)
and see what it does? This might be pretty slow, but it will help us diagnose things!
It did work, but it is very, very slow!
Cool. Okay, so the problem is in the multiprocessing. Could you try setting n_jobs=2 or n_jobs=3? We need to check whether the problem is that you were defaulting to too many jobs, or whether all multiprocessing is broken in your setup.
So basically, last time I also got this error when I tried the n_jobs argument; the previous run was executed without mentioning n_jobs, so by default it took that as 1. Here is the error:
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[22], line 7
      4 rec_ecog_ref = spre.common_reference(recording_notch_ecog, operator="median", reference="global")  # rereferencing the data
      6 output_folder = Path(r"C:\Users\garim\mountainsort5_output78558588877559")
----> 7 sorting_rec = ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder, n_jobs = 2)
      8 print("Sorter found", len(sorting_rec.get_unit_ids()), "units")
      9 sorting_rec = sorting_rec.remove_empty_units()

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\runsorter.py:175, in run_sorter(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, docker_image, singularity_image, delete_container_files, with_output, **sorter_params)
    168     container_image = singularity_image
    169     return run_sorter_container(
    170         container_image=container_image,
    171         mode=mode,
    172         **common_kwargs,
    173     )
--> 175 return run_sorter_local(**common_kwargs)

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\runsorter.py:223, in run_sorter_local(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, with_output, **sorter_params)
    221 # only classmethod call not instance (stateless at instance level but state is in folder)
    222 output_folder = SorterClass.initialize_folder(recording, output_folder, verbose, remove_existing_folder)
--> 223 SorterClass.set_params_to_folder(recording, output_folder, sorter_params, verbose)
    224 SorterClass.setup_recording(recording, output_folder, verbose=verbose)
    225 SorterClass.run_from_folder(output_folder, raise_error, verbose)

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\basesorter.py:180, in BaseSorter.set_params_to_folder(cls, recording, output_folder, new_params, verbose)
    178         bad_params.append(p)
    179 if len(bad_params) > 0:
--> 180     raise AttributeError("Bad parameters: " + str(bad_params))
    182 params.update(new_params)
    184 # custom check params

AttributeError: Bad parameters: ['n_jobs']
Sorry, what did you change between when it worked and when it didn't?
I just removed the n_jobs argument.
I mean between when it failed due to the broken process pool and when it actually worked. Did it just randomly work, or did you make a specific change?
No, after updating to 0.100.6 I ran this command:
ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder, n_jobs=1)
but it gave the error:
AttributeError: Bad parameters: ['n_jobs']
So I ran this command:
ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder)
This is the one which is very slow.
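A hedged workaround sketch, consistent with the error above: this version of the mountainsort5 wrapper rejects n_jobs as a sorter parameter, so the worker count can instead be set globally before calling run_sorter (rec_ecog_ref and output_folder are the objects from this thread; the global job-kwargs API is assumed from 0.100.x):

import spikeinterface.core as si
import spikeinterface.sorters as ss

# Set the job defaults globally instead of passing n_jobs to run_sorter,
# which this sorter wrapper does not accept as a parameter.
si.set_global_job_kwargs(n_jobs=2)
sorting_rec = ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder)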
If you type
si.get_global_job_kwargs
what does it say? Because it was saying that it wasn't letting you change the n_jobs in the run_sorter function. Could you try
ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder, verbose=True)
si.get_global_job_kwargs()
{'n_jobs': 1, 'chunk_duration': '1s', 'progress_bar': True, 'mp_context': None, 'max_threads_per_process': 1}

si.get_global_job_kwargs
<function spikeinterface.core.globals.get_global_job_kwargs()>
Sorry, that was my typo; you were right to type si.get_global_job_kwargs().
I'm still trying to figure out what is causing the multiprocessing to fail sometimes.
Sure!
I am loading the file from the hard disk (drive E) and running the extractor. Is there something wrong with my code in assigning the path for the sorter?