SpikeInterface / spikeinterface

A Python-based module for creating flexible and robust spike sorting pipelines.
https://spikeinterface.readthedocs.io
MIT License

Can't run sorters in parallel in Docker #1911

Open ebitz-lab opened 1 year ago

ebitz-lab commented 1 year ago

Hi, when I try to run multiple sorters with engine='loop', it fails to generate any output. Here is the code:

```python
import spikeinterface.sorters as ss

job_kwargs = dict(n_jobs=-1, progress_bar=True, detect_threshold=5, detect_sign=-1)
sorter_list = ['tridesclous', 'mountainsort4', 'ironclust', 'herdingspikes']

docker_images = {'tridesclous': True, 'mountainsort4': True, 'ironclust': True, 'herdingspikes': True}

sorting_output = ss.run_sorters(sorter_list, {'rec_1': recording_scaled}, working_folder="sort",
                                sorter_params=job_kwargs, mode_if_folder_exists='overwrite',
                                engine='loop', verbose=True,
                                with_output=True, docker_images=docker_images)
```

Output:

```
SpikeSortingError                         Traceback (most recent call last)
Cell In[35], line 7
      3 sorter_list = ['tridesclous', 'mountainsort4', 'ironclust','herdingspikes']
      5 docker_images = {'tridesclous':True, 'mountainsort4':True, 'ironclust':True,'herdingspikes':True}
----> 7 sorting_output = ss.run_sorters(sorter_list, {'rec_1':recording_scaled}, working_folder = "sort",
      8     sorter_params=job_kwargs, mode_if_folder_exists='overwrite',
      9     engine='loop', verbose=True,
     10     with_output=True, docker_images=docker_images)

File c:\Users\ml_dev\anaconda3\envs\ssort2\Lib\site-packages\spikeinterface\sorters\launcher.py:376, in run_sorters(sorter_list, recording_dict_or_list, working_folder, sorter_params, mode_if_folder_exists, engine, engine_kwargs, verbose, with_output, docker_images, singularity_images)
    369 print(
    370     f'Warning!! With engine="{engine}" you cannot have directly output results\n'
    371     "Use : run_sorters(..., with_output=False)\n"
    372     "And then: results = collect_sorting_outputs(output_folders)"
    373 )
    374 return
--> 376 results = collect_sorting_outputs(working_folder)
    377 return results

File c:\Users\ml_dev\anaconda3\envs\ssort2\Lib\site-packages\spikeinterface\sorters\launcher.py:439, in collect_sorting_outputs(working_folder)
    434 """Collect results in a working_folder.
    435
    436 The output is a dict with double key access results[(rec_name, sorter_name)] of SortingExtractor.
    437 """
    438 results = {}
--> 439 for rec_name, sorter_name, sorting in iter_sorting_output(working_folder):
    440     results[(rec_name, sorter_name)] = sorting
    441 return results

File c:\Users\ml_dev\anaconda3\envs\ssort2\Lib\site-packages\spikeinterface\sorters\launcher.py:429, in iter_sorting_output(working_folder)
    427 for rec_name, sorter_name, output_folder in iter_working_folder(working_folder):
    428     SorterClass = sorter_dict[sorter_name]
--> 429     sorting = SorterClass.get_result_from_folder(output_folder)
    430     yield rec_name, sorter_name, sorting

File c:\Users\ml_dev\anaconda3\envs\ssort2\Lib\site-packages\spikeinterface\sorters\basesorter.py:283, in BaseSorter.get_result_from_folder(cls, output_folder)
    281 log_file = output_folder / "spikeinterface_log.json"
    282 if not log_file.is_file():
--> 283     raise SpikeSortingError("get result error: the folder does not contain the spikeinterface_log.json file")
    285 with log_file.open("r", encoding="utf8") as f:
    286     log = json.load(f)

SpikeSortingError: get result error: the folder does not contain the spikeinterface_log.json file
```

I also can't run in parallel. I am wondering if the problem is with the multi-threading / CUDA library or if it is a bug in spikeinterface.

"name": "FileNotFoundError", "message": "[Errno 2] The system cannot find the file specified: 'C:\\Users\\ml_dev\\Desktop\\Rishabh_spike\\sort\\rec_1\\in_container_recording.json'", "stack": "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m\n\u001b[1;31m_RemoteTraceback\u001b[0m Traceback (most recent call last)\n\u001b[1;31m_RemoteTraceback\u001b[0m: \n\"\"\"\nTraceback (most recent call last):\n File \"c:\Users\ml_dev\anaconda3\envs\ssort2\Lib\site-packages\joblib\externals\loky\process_execu....

alejoe91 commented 1 year ago

Hi @ebitz-lab

We have never tested running sorters in parallel across multiple Docker images. I guess the error could be coming from Docker directly; it's hard to interpret the error messages.

JoeZiminski commented 3 months ago

@alejoe91 do you think it's best to try and support this behaviour, or to add a note to the docs that it is not supported?

alejoe91 commented 3 months ago

Thinking again about this: I think that the loop option should work well, so Docker should not be a problem here.
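One way to narrow this down could be to run each sorter sequentially in its own container with `run_sorter` and see which image fails. A minimal sketch, assuming the `output_folder` argument name used by older spikeinterface releases (newer ones may call it `folder`):

```python
# Run each sorter one at a time in its own Docker container to isolate failures.
# (The output_folder argument may be named `folder` in newer spikeinterface versions.)
import spikeinterface.sorters as ss

for sorter_name in ['tridesclous', 'mountainsort4', 'ironclust', 'herdingspikes']:
    try:
        sorting = ss.run_sorter(
            sorter_name,
            recording_scaled,
            output_folder=f"sort_debug/{sorter_name}",
            docker_image=True,   # use the default image registered for this sorter
            verbose=True,
        )
        print(sorter_name, "->", sorting)
    except Exception as exc:
        print(sorter_name, "failed:", exc)
```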

JoeZiminski commented 3 months ago

🤔 @ebitz-lab are you still having this issue / were you ever able to resolve it?