nipreps / fmriprep

fMRIPrep is a robust and easy-to-use pipeline for preprocessing of diverse fMRI data. The transparent workflow dispenses with manual intervention, thereby ensuring the reproducibility of the results.
https://fmriprep.org
Apache License 2.0

Execution hangs infinitely #534

Closed chrisgorgo closed 7 years ago

chrisgorgo commented 7 years ago

UH2 subject s513

/usr/local/miniconda/lib/python3.6/site-packages/nipype/workflows/dmri/mrtrix/group_connectivity.py:16: UserWarning: cmp not installed
  warnings.warn('cmp not installed')
/usr/local/miniconda/lib/python3.6/site-packages/nipype/interfaces/fsl/preprocess.py:1594: UserWarning: This has not been fully tested. Please report any failures.
  warn('This has not been fully tested. Please report any failures.')
/usr/local/miniconda/lib/python3.6/site-packages/nipype/interfaces/base.py:431: UserWarning: Input output_inverse_warped_image requires inputs: output_warped_image
  warn(msg)
/usr/local/miniconda/lib/python3.6/site-packages/nipype/interfaces/base.py:431: UserWarning: Input sampling_percentage requires inputs: sampling_strategy
  warn(msg)
slurmstepd: error: *** JOB 15178952 ON sh-7-21 CANCELLED AT 2017-05-22T21:01:49 DUE TO TIME LIMIT ***
#!/bin/bash
#SBATCH --job-name={sid}-prep
#SBATCH --output=.out/{sid}-prep.job.out
#SBATCH --error=.err/{sid}-prep.job.err
#SBATCH --time=32:00:00
#SBATCH --mem=120000
#SBATCH --qos=russpold
#SBATCH --mail-type=ALL
#SBATCH --mail-user=ieisenbe@stanford.edu
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=16
module load singularity
export PYTHONPATH=""
singularity run /share/PI/russpold/singularity_images/poldracklab_fmriprep_0.4.3-2017-05-10-a7d3419d33b6.img /scratch/PI/russpold/data/uh2 /scratch/PI/russpold/work/ieisenbe/uh2/fmriprep participant --participant_label {sid} -w $SCRATCH --output-space template T1w
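If the hang turns out to be thread oversubscription, one way to rule it out is to cap fMRIPrep's parallelism explicitly so it matches the 16 CPUs the SLURM allocation requests. A hedged sketch of the invocation: the `--nthreads` and `--omp-nthreads` flags are from current fMRIPrep releases and may be named differently (or absent) in the 0.4.3 image, so verify against the image's own `--help` first:

```shell
# Config fragment (assumption: these flags exist in this fMRIPrep version;
# check `singularity run <image> --help` for the 0.4.3 image first).
# --nthreads caps total worker threads; --omp-nthreads caps per-process threads.
singularity run /share/PI/russpold/singularity_images/poldracklab_fmriprep_0.4.3-2017-05-10-a7d3419d33b6.img \
    /scratch/PI/russpold/data/uh2 \
    /scratch/PI/russpold/work/ieisenbe/uh2/fmriprep \
    participant --participant_label {sid} \
    -w $SCRATCH --output-space template T1w \
    --nthreads 16 --omp-nthreads 8
```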

Output log too big to share, but looks good.

oesteban commented 7 years ago

Could it be a problem of n_procs vs. node-level threads?

chrisgorgo commented 7 years ago

Possibly, but I don't quite understand what you mean exactly.

effigies commented 7 years ago

@oesteban Shouldn't be an issue in 0.4.3.

oesteban commented 7 years ago

Sorry, that was unclear. Basically, is it possible that one process requested more threads than are available (n_procs)? Do we know at what point in the execution graph it gets stuck?

EDIT: could you share the paths to logs in slack?
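The oversubscription hypothesis can be illustrated with a toy model (this is not nipype's actual MultiProc code, just a self-contained sketch): a scheduler that only dispatches tasks whose requested processor count fits the pool will leave an oversized task queued forever, which looks like an infinite hang rather than an error.

```python
# Toy model of a MultiProc-style scheduler (illustrative only; not nipype's
# real implementation). A task requesting more processors than the pool owns
# is never eligible to run, so the queue never drains: an effective hang.
def schedule(tasks, total_procs, max_iters=100):
    """Run runnable tasks in order; return (ran, hung)."""
    pending = list(tasks)
    ran = []
    for _ in range(max_iters):
        if not pending:
            return ran, False  # all tasks completed normally
        # Only tasks that fit within the pool are eligible to run.
        runnable = [t for t in pending if t["n_procs"] <= total_procs]
        if not runnable:
            return ran, True   # remaining tasks can never run: hang
        task = runnable[0]
        pending.remove(task)
        ran.append(task)
    return ran, True

ran, hung = schedule(
    [{"name": "bet", "n_procs": 8}, {"name": "ants", "n_procs": 32}],
    total_procs=16,
)
# 'bet' fits and runs; 'ants' (32 > 16) is never scheduled, so hung is True.
```

Whether real MultiProc raises an exception or silently stalls in this situation is exactly the question here; the sketch only shows why a "skip what doesn't fit" policy produces a hang instead of a crash.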

chrisgorgo commented 7 years ago

I believe in such a case MultiProc should throw an exception. I can share the outputs over email.