ghost opened this issue 5 years ago
Hi Olena. It's not immediately clear what the issue is here (@rwblair, does this ring any bells for you?), but if you know that your dataset is valid BIDS, you can pass `--skip-bids-validator` to skip this step.
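For example, a Docker invocation with that flag might look like this (the image tag, participant label, and paths below are placeholders for your own setup):

```shell
# Run fMRIPrep via Docker, skipping the built-in BIDS validation step.
# /path/to/bids and /path/to/out are placeholders for your dataset and
# output directories; adjust the participant label to match your subject.
docker run -ti --rm \
    -v /path/to/bids:/data:ro \
    -v /path/to/out:/out \
    poldracklab/fmriprep:latest \
    /data /out participant \
    --participant-label MOE345 \
    --skip-bids-validator
```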
```
Warning: <8GB of RAM is available within your Docker environment.
Maybe the default memory settings of Docker are too low?
```
Thank you for your suggestions. It seems that the original problem was due to the fact that I used Docker Toolbox instead of Docker, which is not compatible with OS X Yosemite. When I switched to a machine running OS X High Sierra with 16 GB of RAM and installed Docker for Mac, fMRIPrep started running just fine. However, about 2 hours later, it froze with the following error message:
```
[Node] Finished "fmriprep_wf.single_subject_MOE345_wf.func_preproc_task_rest_wf.fmap_unwarp_report_wf.bold_rpt".
exception calling callback for <Future at 0x7fe2aa3d3cc0 state=finished raised BrokenProcessPool>
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 151, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 425, in result
    return self.get_result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
exception calling callback for <Future at 0x7fe2aa3245f8 state=finished raised BrokenProcessPool>
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 151, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 425, in result
    return self.get_result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in get_result
    raise self._exception
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 151, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 425, in result
    return self.get_result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
exception calling callback for <Future at 0x7fe2aa294dd8 state=finished raised BrokenProcessPool>
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 151, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 425, in result
    return self.get_result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in get_result
    raise self._exception
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 151, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 425, in result
    return self.get_result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in get_result
    raise self._exception
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 151, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 425, in result
    return self.get_result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
```
I attached the full error log: `fmriprep error log MOE.txt`
Here are the outputs of fMRIPrep that I end up with:

```
MOE_out
└── fmriprep
    ├── dataset_description.json
    ├── logs
    │   ├── CITATION.bib
    │   ├── CITATION.html
    │   ├── CITATION.md
    │   └── CITATION.tex
    ├── sub-MOE345
    │   ├── anat
    │   │   ├── sub-MOE345_desc-brain_mask.json
    │   │   ├── sub-MOE345_desc-brain_mask.nii.gz
    │   │   ├── sub-MOE345_desc-preproc_T1w.json
    │   │   ├── sub-MOE345_desc-preproc_T1w.nii.gz
    │   │   ├── sub-MOE345_dseg.nii.gz
    │   │   ├── sub-MOE345_from-MNI152NLin2009cAsym_to-T1w_mode-image_xfm.h5
    │   │   ├── sub-MOE345_from-T1w_to-MNI152NLin2009cAsym_mode-image_xfm.h5
    │   │   ├── sub-MOE345_from-orig_to-T1w_mode-image_xfm.txt
    │   │   ├── sub-MOE345_label-CSF_probseg.nii.gz
    │   │   ├── sub-MOE345_label-GM_probseg.nii.gz
    │   │   ├── sub-MOE345_label-WM_probseg.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_desc-brain_mask.json
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_desc-preproc_T1w.json
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_desc-preproc_T1w.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_dseg.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_label-CSF_probseg.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_label-GM_probseg.nii.gz
    │   │   └── sub-MOE345_space-MNI152NLin2009cAsym_label-WM_probseg.nii.gz
    │   ├── figures
    │   │   ├── sub-MOE345_seg_brainmask.svg
    │   │   ├── sub-MOE345_t1_2_mni.svg
    │   │   ├── sub-MOE345_task-rest_flirtbbr.svg
    │   │   └── sub-MOE345_task-rest_sdc_syn.svg
    │   └── func
    │       ├── sub-MOE345_task-rest_space-MNI152NLin2009cAsym_desc-brain_mask.json
    │       └── sub-MOE345_task-rest_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz
    └── sub-MOE345.html

6 directories, 31 files
```
I'd greatly appreciate any help with figuring out what the problem is!
Thank you so much! Olena
This is a memory error. Try rerunning with the same working directory, and you'll often be able to resume without hitting the same memory bottleneck.
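Concretely, one way to do that is to mount a persistent working directory and pass it with `-w` (a sketch; the image tag and paths are placeholders for your own setup). On a rerun with the same `-w` path, workflow nodes that already finished are reused rather than recomputed:

```shell
# Mount a persistent working directory and point fMRIPrep at it with -w.
# Rerunning with the same -w path lets fMRIPrep resume from completed
# workflow nodes instead of starting over. Paths below are placeholders.
docker run -ti --rm \
    -v /path/to/bids:/data:ro \
    -v /path/to/out:/out \
    -v /path/to/work:/work \
    poldracklab/fmriprep:latest \
    /data /out participant \
    -w /work
```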
Thanks for your response, Chris! I re-ran the code as you suggested, and based on the outputs and the html log file, it seems that fMRIPrep was able to finish successfully. However, I did get the following error:
```
[Node] Finished "fmriprep_wf.single_subject_MOE345_wf.func_preproc_task_rest_wf.bold_mni_trans_wf.bold_reference_wf.enhance_and_skullstrip_bold_wf.apply_mask".
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/process.py", line 101, in _python_exit
    thread_wakeup.wakeup()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/process.py", line 89, in wakeup
    self._writer.send_bytes(b"")
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/connection.py", line 183, in send_bytes
    self._check_closed()
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/connection.py", line 136, in _check_closed
    raise OSError("handle is closed")
OSError: handle is closed
Sentry is attempting to send 0 pending error messages
Waiting up to 2.0 seconds
Press Ctrl-C to quit
```
Is this something I should be concerned about?
Also, these are the outputs of fMRIPrep:

```
MOE_out
└── fmriprep
    ├── dataset_description.json
    ├── logs
    │   ├── CITATION.bib
    │   ├── CITATION.html
    │   ├── CITATION.md
    │   └── CITATION.tex
    ├── sub-MOE345
    │   ├── anat
    │   │   ├── sub-MOE345_desc-brain_mask.json
    │   │   ├── sub-MOE345_desc-brain_mask.nii.gz
    │   │   ├── sub-MOE345_desc-preproc_T1w.json
    │   │   ├── sub-MOE345_desc-preproc_T1w.nii.gz
    │   │   ├── sub-MOE345_dseg.nii.gz
    │   │   ├── sub-MOE345_from-MNI152NLin2009cAsym_to-T1w_mode-image_xfm.h5
    │   │   ├── sub-MOE345_from-T1w_to-MNI152NLin2009cAsym_mode-image_xfm.h5
    │   │   ├── sub-MOE345_from-orig_to-T1w_mode-image_xfm.txt
    │   │   ├── sub-MOE345_label-CSF_probseg.nii.gz
    │   │   ├── sub-MOE345_label-GM_probseg.nii.gz
    │   │   ├── sub-MOE345_label-WM_probseg.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_desc-brain_mask.json
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_desc-preproc_T1w.json
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_desc-preproc_T1w.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_dseg.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_label-CSF_probseg.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_label-GM_probseg.nii.gz
    │   │   └── sub-MOE345_space-MNI152NLin2009cAsym_label-WM_probseg.nii.gz
    │   ├── figures
    │   │   ├── sub-MOE345_seg_brainmask.svg
    │   │   ├── sub-MOE345_t1_2_mni.svg
    │   │   ├── sub-MOE345_task-rest_carpetplot.svg
    │   │   ├── sub-MOE345_task-rest_flirtbbr.svg
    │   │   ├── sub-MOE345_task-rest_rois.svg
    │   │   └── sub-MOE345_task-rest_sdc_syn.svg
    │   └── func
    │       ├── sub-MOE345_task-rest_desc-confounds_regressors.tsv
    │       ├── sub-MOE345_task-rest_space-MNI152NLin2009cAsym_boldref.nii.gz
    │       ├── sub-MOE345_task-rest_space-MNI152NLin2009cAsym_desc-brain_mask.json
    │       ├── sub-MOE345_task-rest_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz
    │       ├── sub-MOE345_task-rest_space-MNI152NLin2009cAsym_desc-preproc_bold.json
    │       └── sub-MOE345_task-rest_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz
    └── sub-MOE345.html

6 directories, 37 files
```
Also, I keep running into the same memory bottlenecks with 16 GB of RAM (all of which Docker has access to), so I have to rerun fMRIPrep every time it freezes. Is there any way around this problem?
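For reference, I understand fMRIPrep exposes flags to cap its own resource usage; this is a sketch of what I am considering trying (the flag values and paths are my own guesses, not tested):

```shell
# Hypothetical sketch: cap fMRIPrep's memory and thread usage so it stays
# under the 16 GB available to Docker. Values and paths are placeholders,
# not recommendations.
docker run -ti --rm \
    -v /path/to/bids:/data:ro \
    -v /path/to/out:/out \
    poldracklab/fmriprep:latest \
    /data /out participant \
    --mem-mb 14000 \
    --nthreads 4 \
    --low-mem
```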
Thanks so much for taking the time to respond! Olena
Dear fMRIPrep gurus,
I am trying to run fMRIPrep for the first time and have encountered a problem with dataset validation. I organized my dataset according to the BIDS guidelines, and the online BIDS validator says it is a valid BIDS dataset.
Here's the output of tree:
However, when I run fmriprep, I get this output:
I am using a Mac (OS X Yosemite) and have both Python 2.7 and Python 3.7 installed. At this point, I am really not sure what the problem is. Any help is greatly appreciated!
Thanks! Olena