nipreps / fmriprep

fMRIPrep is a robust and easy-to-use pipeline for preprocessing of diverse fMRI data. The transparent workflow dispenses with manual intervention, thereby ensuring the reproducibility of the results.
https://fmriprep.org
Apache License 2.0

[ERR] The directory /data failed an initial Quick Test. #1580

Open ghost opened 5 years ago

ghost commented 5 years ago

Dear fMRIPrep gurus,

I am trying to run fMRIPrep for the first time and have encountered a problem with dataset validation. I organized my dataset according to the BIDS guidelines, and the online BIDS validator says it is a valid BIDS dataset.

Here's the output of tree:

Nifti
├── CHANGES
├── README
├── dataset_description.json
├── participants.tsv
├── sub-A00028207
│   └── ses-DS2
│       ├── anat
│       │   └── sub-A00028207_ses-DS2_T1w.nii.gz
│       └── func
│           ├── sub-A00028207_ses-DS2_task-rest_bold.json
│           └── sub-A00028207_ses-DS2_task-rest_bold.nii.gz
└── task-rest_bold.json

4 directories, 8 files

However, when I run fmriprep, I get this output:

fmriprep-docker /Volumes/MyBook/NKI_RS/bids/data/Nifti /Volumes/MyBook/NKI_RS/bids/out participant
Warning: <8GB of RAM is available within your Docker environment.
Some parts of fMRIPrep may fail to complete.
Continue anyway? [y/N]y
RUNNING: docker run --rm -it -e DOCKER_VERSION_8395080871=18.09.3 -v /Volumes/MyBook/NKI_RS/bids/data/Nifti:/data:ro -v /Volumes/MyBook/NKI_RS/bids/out:/out poldracklab/fmriprep:1.3.2 /data /out participant
Making sure the input data is BIDS compliant (warnings can be ignored in most cases).
[ERR]  The directory /data failed an initial Quick Test. This means the basic names and structure of the files and directories do not comply with BIDS specification. For more info go to http://bids.neuroimaging.io/
Traceback (most recent call last):
  File "/usr/local/miniconda/bin/fmriprep", line 11, in <module>
    load_entry_point('fmriprep==1.3.2', 'console_scripts', 'fmriprep')()
  File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/cli/run.py", line 358, in main
    validate_input_dir(exec_env, opts.bids_dir, opts.participant_label)
  File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/cli/run.py", line 549, in validate_input_dir
    subprocess.check_call(['bids-validator', bids_dir, '-c', temp.name])
  File "/usr/local/miniconda/lib/python3.7/subprocess.py", line 341, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['bids-validator', '/data', '-c', '/tmp/tmp0jkj4h1n']' returned non-zero exit status 1.

I am using a Mac (OS X Yosemite) and have both Python 2.7 and Python 3.7 installed. At this point, I am really not sure what the problem is. Any help is greatly appreciated!

Thanks! Olena

effigies commented 5 years ago

Hi Olena. It's not immediately clear what the issue is here (@rwblair Does this ring any bells for you?), but if you know that your dataset is valid BIDS, you can pass --skip-bids-validation to skip this step.
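For reference, the full command with that flag appended would look something like this (the flag spelling here follows current fMRIPrep releases; double-check against fmriprep --help for your installed version):

fmriprep-docker /Volumes/MyBook/NKI_RS/bids/data/Nifti /Volumes/MyBook/NKI_RS/bids/out participant --skip-bids-validation

fmriprep-docker should forward arguments it does not recognize on to fMRIPrep inside the container, so the flag can be appended directly to the wrapper call.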

oesteban commented 5 years ago

Warning: <8GB of RAM is available within your Docker environment.

Maybe the default memory settings of Docker are too low?
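One quick way to see how much memory containers can actually use (a sketch, assuming Docker for Mac) is to run a throwaway container and print its view of available memory:

docker run --rm alpine free -m

If the reported total is far below your machine's physical RAM, raise the allocation in Docker's preferences (on Docker for Mac of that era, under Preferences > Advanced) and restart Docker before retrying fMRIPrep.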

ghost commented 5 years ago

Thank you for your suggestions. It seems that the original problem was due to the fact that I was using Docker Toolbox rather than Docker for Mac, which is not compatible with OS X Yosemite. When I switched to a machine running macOS High Sierra with 16 GB of RAM and installed Docker for Mac, fMRIPrep started running just fine. However, about 2 hours later, it froze with the following error message:

 [Node] Finished "fmriprep_wf.single_subject_MOE345_wf.func_preproc_task_rest_wf.fmap_unwarp_report_wf.bold_rpt".

exception calling callback for <Future at 0x7fe2aa3d3cc0 state=finished raised BrokenProcessPool>
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 151, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 425, in result
    return self.__get_result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.

exception calling callback for <Future at 0x7fe2aa3245f8 state=finished raised BrokenProcessPool>
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 151, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 425, in result
    return self.__get_result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 151, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 425, in result
    return self.__get_result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.

exception calling callback for <Future at 0x7fe2aa294dd8 state=finished raised BrokenProcessPool>
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 151, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 425, in result
    return self.__get_result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 151, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 425, in result
    return self.__get_result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 151, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 425, in result
    return self.__get_result()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.

I attached the full error log: fmriprep error log MOE.txt

Here are the outputs of fMRIPrep that I end up with:

MOE_out
└── fmriprep
    ├── dataset_description.json
    ├── logs
    │   ├── CITATION.bib
    │   ├── CITATION.html
    │   ├── CITATION.md
    │   └── CITATION.tex
    ├── sub-MOE345
    │   ├── anat
    │   │   ├── sub-MOE345_desc-brain_mask.json
    │   │   ├── sub-MOE345_desc-brain_mask.nii.gz
    │   │   ├── sub-MOE345_desc-preproc_T1w.json
    │   │   ├── sub-MOE345_desc-preproc_T1w.nii.gz
    │   │   ├── sub-MOE345_dseg.nii.gz
    │   │   ├── sub-MOE345_from-MNI152NLin2009cAsym_to-T1w_mode-image_xfm.h5
    │   │   ├── sub-MOE345_from-T1w_to-MNI152NLin2009cAsym_mode-image_xfm.h5
    │   │   ├── sub-MOE345_from-orig_to-T1w_mode-image_xfm.txt
    │   │   ├── sub-MOE345_label-CSF_probseg.nii.gz
    │   │   ├── sub-MOE345_label-GM_probseg.nii.gz
    │   │   ├── sub-MOE345_label-WM_probseg.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_desc-brain_mask.json
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_desc-preproc_T1w.json
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_desc-preproc_T1w.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_dseg.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_label-CSF_probseg.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_label-GM_probseg.nii.gz
    │   │   └── sub-MOE345_space-MNI152NLin2009cAsym_label-WM_probseg.nii.gz
    │   ├── figures
    │   │   ├── sub-MOE345_seg_brainmask.svg
    │   │   ├── sub-MOE345_t1_2_mni.svg
    │   │   ├── sub-MOE345_task-rest_flirtbbr.svg
    │   │   └── sub-MOE345_task-rest_sdc_syn.svg
    │   └── func
    │       ├── sub-MOE345_task-rest_space-MNI152NLin2009cAsym_desc-brain_mask.json
    │       └── sub-MOE345_task-rest_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz
    └── sub-MOE345.html

6 directories, 31 files

I'd greatly appreciate any help with figuring out what the problem is!

Thank you so much! Olena

effigies commented 5 years ago

This is a memory error. Try rerunning with the same working directory, and you'll often be able to resume without hitting the same memory bottleneck.
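In practice that means mounting a persistent working directory with -w, so that already-completed nodes are picked up on the next run; for example (the work path below is only illustrative):

fmriprep-docker /Volumes/MyBook/NKI_RS/bids/data/Nifti /Volumes/MyBook/NKI_RS/bids/out participant -w /Volumes/MyBook/NKI_RS/bids/work

Keeping the same -w path across runs lets Nipype reuse cached results instead of recomputing them from scratch.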

ghost commented 5 years ago

Thanks for your response, Chris! I re-ran fMRIPrep as you suggested, and based on the outputs and the HTML report, it seems that fMRIPrep was able to finish successfully. However, I did get the following error:

[Node] Finished "fmriprep_wf.single_subject_MOE345_wf.func_preproc_task_rest_wf.bold_mni_trans_wf.bold_reference_wf.enhance_and_skullstrip_bold_wf.apply_mask".
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/process.py", line 101, in _python_exit
    thread_wakeup.wakeup()
  File "/usr/local/miniconda/lib/python3.7/concurrent/futures/process.py", line 89, in wakeup
    self._writer.send_bytes(b"")
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/connection.py", line 183, in send_bytes
    self._check_closed()
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/connection.py", line 136, in _check_closed
    raise OSError("handle is closed")
OSError: handle is closed
Sentry is attempting to send 0 pending error messages
Waiting up to 2.0 seconds
Press Ctrl-C to quit

Is this something I should be concerned about?

Also, these are the outputs of fMRIPrep:

MOE_out
└── fmriprep
    ├── dataset_description.json
    ├── logs
    │   ├── CITATION.bib
    │   ├── CITATION.html
    │   ├── CITATION.md
    │   └── CITATION.tex
    ├── sub-MOE345
    │   ├── anat
    │   │   ├── sub-MOE345_desc-brain_mask.json
    │   │   ├── sub-MOE345_desc-brain_mask.nii.gz
    │   │   ├── sub-MOE345_desc-preproc_T1w.json
    │   │   ├── sub-MOE345_desc-preproc_T1w.nii.gz
    │   │   ├── sub-MOE345_dseg.nii.gz
    │   │   ├── sub-MOE345_from-MNI152NLin2009cAsym_to-T1w_mode-image_xfm.h5
    │   │   ├── sub-MOE345_from-T1w_to-MNI152NLin2009cAsym_mode-image_xfm.h5
    │   │   ├── sub-MOE345_from-orig_to-T1w_mode-image_xfm.txt
    │   │   ├── sub-MOE345_label-CSF_probseg.nii.gz
    │   │   ├── sub-MOE345_label-GM_probseg.nii.gz
    │   │   ├── sub-MOE345_label-WM_probseg.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_desc-brain_mask.json
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_desc-preproc_T1w.json
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_desc-preproc_T1w.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_dseg.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_label-CSF_probseg.nii.gz
    │   │   ├── sub-MOE345_space-MNI152NLin2009cAsym_label-GM_probseg.nii.gz
    │   │   └── sub-MOE345_space-MNI152NLin2009cAsym_label-WM_probseg.nii.gz
    │   ├── figures
    │   │   ├── sub-MOE345_seg_brainmask.svg
    │   │   ├── sub-MOE345_t1_2_mni.svg
    │   │   ├── sub-MOE345_task-rest_carpetplot.svg
    │   │   ├── sub-MOE345_task-rest_flirtbbr.svg
    │   │   ├── sub-MOE345_task-rest_rois.svg
    │   │   └── sub-MOE345_task-rest_sdc_syn.svg
    │   └── func
    │       ├── sub-MOE345_task-rest_desc-confounds_regressors.tsv
    │       ├── sub-MOE345_task-rest_space-MNI152NLin2009cAsym_boldref.nii.gz
    │       ├── sub-MOE345_task-rest_space-MNI152NLin2009cAsym_desc-brain_mask.json
    │       ├── sub-MOE345_task-rest_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz
    │       ├── sub-MOE345_task-rest_space-MNI152NLin2009cAsym_desc-preproc_bold.json
    │       └── sub-MOE345_task-rest_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz
    └── sub-MOE345.html

6 directories, 37 files

Also, I keep running into the same memory bottlenecks with 16 GB of RAM (all of which Docker has access to), so I have to rerun fMRIPrep every time it freezes. Is there any way around this problem?

Thanks so much for taking the time to respond! Olena