nipreps / fmriprep

fMRIPrep is a robust and easy-to-use pipeline for preprocessing of diverse fMRI data. The transparent workflow dispenses with manual intervention, thereby ensuring the reproducibility of the results.
https://fmriprep.org
Apache License 2.0

Do you need to find a computer with better memory? #2491

Closed Kang1448 closed 3 years ago

Kang1448 commented 3 years ago
  return self.__get_result()
File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
  raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
210804-09:17:28,52 nipype.workflow ERROR:
  Issue encountered while running fMRIPrep. Usually, this is a result of insufficent memory. Ensure Docker has sufficent resources available (https://docs.docker.com/docker-for-mac/#resources)
210804-09:17:28,109 nipype.workflow CRITICAL:
  fMRIPrep failed: A child process terminated abruptly, the process pool is not usable anymore
Sentry is attempting to send 4 pending error messages
Waiting up to 2 seconds
Press Ctrl-C to quit

What version of fMRIPrep are you using? 20.2.1

What kind of installation are you using? Containers (Singularity, Docker), or "bare-metal"? Docker

What is the exact command-line you used? bash fmriprep.sh

Have you checked that your inputs are BIDS valid? Yes

Are you reusing previously computed results (e.g., FreeSurfer, Anatomical derivatives, work directory of previous run)? Nope

Summary
Subject ID: 01
Structural images: 1 T1-weighted
Functional series: 1
Task: SST (1 run)
Standard output spaces: MNI152NLin2009cAsym
Non-standard output spaces:
FreeSurfer reconstruction: Not run

Anatomical
Anatomical Conformation
Input T1w images: 1
Output orientation: RAS
Output dimensions: 160x240x256
Output voxel size: 1mm x 1mm x 1mm
Discarded images: 0

Functional
Reports for: task SST, run 1.
Estimated fieldmap and alignment to the corresponding EPI reference
The estimated fieldmap was aligned to the corresponding EPI reference with a rigid-registration process of the magnitude part of the fieldmap, using antsRegistration. Overlaid on top of the co-registration results, the displacements along the phase-encoding direction are represented in arbitrary units. Please note that the color scale is centered around zero (i.e. full transparency), but the extremes might be different (i.e., the maximum of red colors could be orders of magnitude above or below the minimum of blue colors).

Get figure file: sub-01/figures/sub-01_task-SST_run-1_desc-fieldmap_bold.svg

About
fMRIPrep version: 20.2.3
fMRIPrep command: /usr/local/miniconda/bin/fmriprep /data /out participant --participant-label 01 --skip-bids-validation --md-only-boilerplate --fs-no-reconall --nthreads 16 --stop-on-first-crash --mem_mb 59000 -w /scratch --output-spaces MNI152NLin2009cAsym:res-2
Date preprocessed: 2021-08-04 05:10:16 +0000

Methods
We kindly ask to report results preprocessed with this tool using the following boilerplate.

Markdown Results included in this manuscript come from preprocessing performed using fMRIPrep 20.2.3 (@fmriprep1; @fmriprep2; RRID:SCR_016216), which is based on Nipype 1.6.1 (@nipype1; @nipype2; RRID:SCR_002502).

Anatomical data preprocessing

: A total of 1 T1-weighted (T1w) image was found within the input BIDS dataset. The T1-weighted (T1w) image was corrected for intensity non-uniformity (INU) with N4BiasFieldCorrection [@n4], distributed with ANTs 2.3.3 [@ants, RRID:SCR_004757], and used as T1w-reference throughout the workflow. The T1w-reference was then skull-stripped with a Nipype implementation of the antsBrainExtraction.sh workflow (from ANTs), using OASIS30ANTs as target template. Brain tissue segmentation of cerebrospinal fluid (CSF), white-matter (WM) and gray-matter (GM) was performed on the brain-extracted T1w using fast [FSL 5.0.9, RRID:SCR_002823, @fsl_fast]. Volume-based spatial normalization to one standard space (MNI152NLin2009cAsym) was performed through nonlinear registration with antsRegistration (ANTs 2.3.3), using brain-extracted versions of both T1w reference and the T1w template. The following template was selected for spatial normalization: ICBM 152 Nonlinear Asymmetrical template version 2009c [@mni152nlin2009casym, RRID:SCR_008796; TemplateFlow ID: MNI152NLin2009cAsym].
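As a rough illustration of the INU-correction step described above, an equivalent N4 call can be made through nipype's ANTs interface (a sketch only: the input file name and the iteration schedule are illustrative placeholders, not fMRIPrep's exact settings, and ANTs must be on the PATH):

```python
# Illustrative N4 bias-field correction via nipype's ANTs interface.
# "sub-01_T1w.nii.gz" is a placeholder input; the parameters are not fMRIPrep's exact ones.
from nipype.interfaces.ants import N4BiasFieldCorrection

n4 = N4BiasFieldCorrection()
n4.inputs.input_image = "sub-01_T1w.nii.gz"
n4.inputs.dimension = 3
n4.inputs.n_iterations = [50, 50, 30, 20]    # illustrative multi-resolution schedule
n4.inputs.convergence_threshold = 1e-7
n4.inputs.shrink_factor = 4
result = n4.run()                            # calls the ANTs binary under the hood
print(result.outputs.output_image)           # INU-corrected T1w used as the reference
```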

Functional data preprocessing

: For each of the 1 BOLD runs found per subject (across all tasks and sessions), the following preprocessing was performed. First, a reference volume and its skull-stripped version were generated using a custom methodology of fMRIPrep. A B0-nonuniformity map (or fieldmap) was estimated based on a phase-difference map calculated with a dual-echo GRE (gradient-recalled echo) sequence, processed with a custom workflow of SDCFlows inspired by the epidewarp.fsl script and further improvements in HCP Pipelines [@hcppipelines]. The fieldmap was then co-registered to the target EPI (echo-planar imaging) reference run and converted to a displacements field map (amenable to registration tools such as ANTs) with FSL's fugue and other SDCFlows tools. Based on the estimated susceptibility distortion, a corrected EPI (echo-planar imaging) reference was calculated for a more accurate co-registration with the anatomical reference. The BOLD reference was then co-registered to the T1w reference using flirt [FSL 5.0.9, @flirt] with the boundary-based registration [@bbr] cost-function. Co-registration was configured with nine degrees of freedom to account for distortions remaining in the BOLD reference. Head-motion parameters with respect to the BOLD reference (transformation matrices, and six corresponding rotation and translation parameters) are estimated before any spatiotemporal filtering using mcflirt [FSL 5.0.9, @mcflirt]. BOLD runs were slice-time corrected using 3dTshift from AFNI 20160207 [@afni, RRID:SCR_005927]. The BOLD time-series (including slice-timing correction when applied) were resampled onto their original, native space by applying a single, composite transform to correct for head-motion and susceptibility distortions. These resampled BOLD time-series will be referred to as preprocessed BOLD in original space, or just preprocessed BOLD. The BOLD time-series were resampled into standard space, generating a preprocessed BOLD run in MNI152NLin2009cAsym space. Several confounding time-series were calculated based on the preprocessed BOLD: framewise displacement (FD), DVARS and three region-wise global signals. FD was computed using two formulations following Power (absolute sum of relative motions, @power_fd_dvars) and Jenkinson (relative root mean square displacement between affines, @mcflirt). FD and DVARS are calculated for each functional run, both using their implementations in Nipype [following the definitions by @power_fd_dvars]. The three global signals are extracted within the CSF, the WM, and the whole-brain masks. Additionally, a set of physiological regressors were extracted to allow for component-based noise correction [CompCor, @compcor]. Principal components are estimated after high-pass filtering the preprocessed BOLD time-series (using a discrete cosine filter with 128s cut-off) for the two CompCor variants: temporal (tCompCor) and anatomical (aCompCor). tCompCor components are then calculated from the top 2% variable voxels within the brain mask. For aCompCor, three probabilistic masks (CSF, WM and combined CSF+WM) are generated in anatomical space. The implementation differs from that of Behzadi et al. in that instead of eroding the masks by 2 pixels on BOLD space, a mask of pixels that likely contain a volume fraction of GM is subtracted from the aCompCor masks. This mask is obtained by thresholding the corresponding partial volume map at 0.05, and it ensures components are not extracted from voxels containing a minimal fraction of GM. Finally, these masks are resampled into BOLD space and binarized by thresholding at 0.99 (as in the original implementation). Components are also calculated separately within the WM and CSF masks. For each CompCor decomposition, the k components with the largest singular values are retained, such that the retained components' time series are sufficient to explain 50 percent of variance across the nuisance mask (CSF, WM, combined, or temporal). The remaining components are dropped from consideration. The head-motion estimates calculated in the correction step were also placed within the corresponding confounds file. The confound time series derived from head motion estimates and global signals were expanded with the inclusion of temporal derivatives and quadratic terms for each [@confounds_satterthwaite_2013]. Frames that exceeded a threshold of 0.5 mm FD or 1.5 standardised DVARS were annotated as motion outliers. All resamplings can be performed with a single interpolation step by composing all the pertinent transformations (i.e. head-motion transform matrices, susceptibility distortion correction when available, and co-registrations to anatomical and output spaces). Gridded (volumetric) resamplings were performed using antsApplyTransforms (ANTs), configured with Lanczos interpolation to minimize the smoothing effects of other kernels [@lanczos]. Non-gridded (surface) resamplings were performed using mri_vol2surf (FreeSurfer).
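To make the Power framewise-displacement formulation mentioned above concrete: FD at frame t is the sum of the absolute frame-to-frame differences of the six rigid-body parameters, with rotations (in radians) converted to millimetres of arc on a 50 mm sphere. Below is a small numpy sketch under the assumption that the motion parameters are ordered as three rotations followed by three translations (mcflirt's .par convention); it is not fMRIPrep's internal implementation:

```python
# Sketch of Power et al. framewise displacement for an (n_vols, 6) array with columns
# [rot_x, rot_y, rot_z, trans_x, trans_y, trans_z]; rotations in radians, translations in mm.
import numpy as np

def framewise_displacement(motion_params, radius=50.0):
    params = np.asarray(motion_params, dtype=float).copy()
    params[:, :3] *= radius                            # rotations -> arc length on a 50 mm sphere
    diffs = np.abs(np.diff(params, axis=0))            # frame-to-frame absolute differences
    return np.concatenate([[0.0], diffs.sum(axis=1)])  # FD of the first frame is defined as 0

# Example: flag frames exceeding the 0.5 mm threshold used for motion-outlier annotation.
rng = np.random.default_rng(0)
fake_motion = rng.normal(scale=0.05, size=(200, 6))    # synthetic parameters, illustration only
fd = framewise_displacement(fake_motion)
print("motion outliers:", np.flatnonzero(fd > 0.5))
```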

Many internal operations of fMRIPrep use Nilearn 0.6.2 [@nilearn, RRID:SCR_001362], mostly within the functional processing workflow. For more details of the pipeline, see the section corresponding to workflows in fMRIPrep's documentation.

Copyright Waiver

The above boilerplate text was automatically generated by fMRIPrep with the express intention that users should copy and paste this text into their manuscripts unchanged. It is released under the CC0 license.

References

Errors

No errors to report!
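Given the --mem_mb 59000 shown in the report above and the Docker-memory hint in the error log, one quick diagnostic is to compare that budget against the RAM the container actually sees, for example with a short check like the sketch below (psutil is an assumed extra here, not something fMRIPrep asks you to run):

```python
# Minimal check: does the container see as much RAM as fMRIPrep was told it may use?
# Assumes psutil is importable in the environment where this is run.
import psutil

requested_mb = 59000  # the value passed via --mem_mb
visible_mb = psutil.virtual_memory().total / (1024 ** 2)

print(f"requested: {requested_mb} MB, visible to this process: {visible_mb:.0f} MB")
if visible_mb < requested_mb:
    print("Docker exposes less RAM than requested; raise the Docker Desktop memory "
          "limit or lower --mem_mb / --nthreads to avoid killed workers.")
```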

oesteban commented 3 years ago

https://fmriprep.org/en/stable/faq.html#my-fmriprep-run-is-hanging

oesteban commented 3 years ago

Please reopen if your question has not been addressed.

If you reopen, please make sure to add sufficient details to your question (execution log, crashfiles, etc.) so that we can understand the problem and point to a solution.

Kang1448 commented 3 years ago

File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 425, in result return self.get_result() File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in get_result raise self._exception File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks callback(self) File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback result = args.result() File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 425, in result return self.get_result() File "/usr/local/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in get_result raise self._exception concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.

CITATION.md

Kang1448 commented 3 years ago

Please see the log file.

Kang1448 commented 3 years ago

Hello, I have attached the log file in the original thread.

Best wishes, Weixi
