Hey @madisoth. I'm with Damien right now. Quick question, are you applying the same resolution field map to the exact same resolution data? Or is there a discrepancy between fmap sizing and resolution of functional data?
Same resolution fieldmap and BOLD timeseries
Possibly a duplicate: https://github.com/nipreps/fmriprep/issues/2628#issuecomment-975719689
@madisoth could you confirm whether the voxel sizes of the coefficients generated by topup are 1mm?
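For anyone wanting to check this on their own data, a minimal nibabel-based sketch (the paths below are placeholders for the topup outputs in the fMRIPrep working directory) will print the voxel sizes that topup wrote into the coefficients header:

```python
import nibabel as nb

# Placeholder paths: point these at the topup outputs in your working directory
coeff = nb.load("wf_auto_00002/topup/sub-XX_dir-AP_epi_merged_base_fieldcoef.nii.gz")
ref = nb.load("wf_auto_00002/flatten/sub-XX_dir-AP_epi_idx-000.nii.gz")

# topup encodes the B-spline knot spacing (in voxels) as the "voxel size" of the coefficients file
print("coefficient zooms:", coeff.header.get_zooms()[:3])
print("coefficient shape:", coeff.shape[:3])
print("reference shape:  ", ref.shape[:3])
```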
Hey @madisoth, we've just pushed nipreps/fmriprep:pepolar-2628 to Docker Hub. Could you try that image out with these data?
@oesteban I can confirm the coefficients file had "1 mm" voxels where I had the errors with 3.0 and 4.0 mm fmaps, and "2 mm" voxels with 1.6 and 2.0 mm fieldmaps.
Will test the new build over the weekend, thank you!
Tested the new build and still getting an error in the same spot:
[Node] Finished "fmriprep_wf.single_subject_<SUB>_wf.fmap_preproc_wf.wf_auto_00002.topup".
211124-13:16:53,28 nipype.workflow WARNING:
Storing result file without outputs
211124-13:16:53,35 nipype.workflow WARNING:
[Node] Error on "fmriprep_wf.single_subject_<SUB>_wf.fmap_preproc_wf.wf_auto_00002.fix_coeff" (/wd/fmriprep_wf/single_subject_<SUB>_wf/fmap_preproc_wf/wf_auto_00002/fix_coeff)
211124-13:16:53,41 nipype.workflow ERROR:
Node fix_coeff failed to run on host <HOST>.
211124-13:16:53,50 nipype.workflow ERROR:
Saving crash info to /out/sub-<SUB>/log/20211124-121850_6b196eeb-d722-45f5-ab0e-432a97e5e4b7/crash-20211124-131653-tmadison-fix_coeff-d946738a-1268-4e8f-a3d0-a6bff079da26.txt
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 344, in _send_procs_to_workers
self.procs[jobid].run(updatehash=updatehash)
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
result = self._run_interface(execute=True)
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
return self._run_command(execute)
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
result = self._interface.run(cwd=outdir)
File "/opt/conda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 428, in run
runtime = self._run_interface(runtime)
File "/opt/conda/lib/python3.8/site-packages/sdcflows/interfaces/bspline.py", line 420, in _run_interface
self._results["out_coeff"] = [
File "/opt/conda/lib/python3.8/site-packages/sdcflows/interfaces/bspline.py", line 422, in <listcomp>
_fix_topup_fieldcoeff(
File "/opt/conda/lib/python3.8/site-packages/sdcflows/interfaces/bspline.py", line 481, in _fix_topup_fieldcoeff
raise ValueError(
ValueError: Shape of coefficients file [64 64 36] does not meet the expected shape [67. 67. 39.] (toupup factors are [1. 1. 1.]).
Should this be "factors > 1.0" (instead of ">= 1.0")?
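To illustrate the question, here is a toy sketch of the idea (not the sdcflows implementation): if the 3-voxel padding were only expected along axes whose knot spacing exceeds one voxel, the unpadded grid topup writes for 1-voxel spacing would pass the check.

```python
# Toy sketch of the suggestion above (not the sdcflows source): only expect
# the 3-voxel padding along axes whose knot spacing is greater than 1 voxel.
import numpy as np

def expected_coeff_shape(ref_shape, factors):
    ref_shape = np.asarray(ref_shape, dtype=float)
    factors = np.asarray(factors, dtype=float)
    padding = np.where(factors > 1.0, 3, 0)  # ">" rather than ">="
    return np.ceil(ref_shape / factors) + padding

print(expected_coeff_shape((64, 64, 36), (1.0, 1.0, 1.0)))  # -> [64. 64. 36.]
print(expected_coeff_shape((64, 64, 36), (2.0, 2.0, 2.0)))  # -> [35. 35. 21.]
```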
Hi @oesteban
I've tried nipreps/fmriprep:pepolar-2628 and got a similar error.
File: /lscratch/27354946/sXXXX_ses-1.out/sub-sXXXX/log/20211124-104913_91a0b695-7670-4b42-9ceb-bdf50fda6cf5/crash-20211124-121716-zugmana2-fix_coeff-0b8f0538-28a7-4684-afa6-a3af36dbc57a.txt
Working Directory: /lscratch/27354946/sXXXX_ses-1.wrk/fmriprep_wf/single_subject_sXXXX_wf/fmap_preproc_wf/wf_ME_REST/fix_coeff
Inputs:
fmap_ref: /lscratch/27354946/sXXXX_ses-1.wrk/fmriprep_wf/single_subject_sXXXX_wf/fmap_preproc_wf/wf_ME_REST/flatten/sub-sXXXX_ses-1_dir-opposite_epi_idx-000.nii.gz
in_coeff: ['/lscratch/27354946/sXXXX_ses-1.wrk/fmriprep_wf/single_subject_sXXXX_wf/fmap_preproc_wf/wf_ME_REST/topup/sub-sXXXX_ses-1_dir-opposite_epi_idx-000_merged_base_fieldcoef.nii.gz']
pe_dir: j-
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 344, in _send_procs_to_workers
self.procs[jobid].run(updatehash=updatehash)
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
result = self._run_interface(execute=True)
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
return self._run_command(execute)
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
result = self._interface.run(cwd=outdir)
File "/opt/conda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 428, in run
runtime = self._run_interface(runtime)
File "/opt/conda/lib/python3.8/site-packages/sdcflows/interfaces/bspline.py", line 420, in _run_interface
self._results["out_coeff"] = [
File "/opt/conda/lib/python3.8/site-packages/sdcflows/interfaces/bspline.py", line 422, in <listcomp>
_fix_topup_fieldcoeff(
File "/opt/conda/lib/python3.8/site-packages/sdcflows/interfaces/bspline.py", line 481, in _fix_topup_fieldcoeff
raise ValueError(
ValueError: Shape of coefficients file [64 64 34] does not meet the expected shape [67. 67. 37.] (toupup factors are [1. 1. 1.]).
Forwarded here from nipreps/fmriprep#2641.
A test of the branch nipreps/fmriprep:pepolar-2628 yielded the following unrelated error:
211130-12:02:57,600 nipype.workflow WARNING:
[Node] Error on "fmriprep_wf.single_subject_01_wf.func_preproc_task_rest_echo_1_wf.bold_t2smap_wf.t2smap_node" (/scratch/hb93/rob_testing_fmri/work/fmriprep_wf/single_subject_01_wf/func_preproc_task_rest_echo_1_wf/bold_t2smap_wf/t2smap_node)
211130-12:02:58,393 nipype.workflow ERROR:
Node t2smap_node failed to run on host m3i037.
211130-12:02:58,514 nipype.workflow ERROR:
Saving crash info to /scratch/hb93/rob_testing_fmri/pepolar-2628/sub-01/log/20211130-103646_36dc7ba1-1274-4f30-8e50-16a68d12240b/crash-20211130-120258-robertes-t2smap_node-54558758-10d9-4ef2-a737-346d69e4c16e.txt
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
result["result"] = node.run(updatehash=updatehash)
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
result = self._run_interface(execute=True)
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
return self._run_command(execute)
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
result = self._interface.run(cwd=outdir)
File "/opt/conda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 428, in run
runtime = self._run_interface(runtime)
File "/opt/conda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 822, in _run_interface
self.raise_exception(runtime)
File "/opt/conda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 749, in raise_exception
raise RuntimeError(
RuntimeError: Command:
t2smap -d /scratch/hb93/rob_testing_fmri/work/fmriprep_wf/single_subject_01_wf/func_preproc_task_rest_echo_1_wf/unwarp_wf/_bold_file_..scratch..hb93..rob_testing_fmri..BIDS..sub-01..func..sub-01_task-rest_echo-1_bold.nii.gz_name_source_..scratch..hb93..rob_testing_fmri..BIDS..sub-01..func..sub-01_task-rest_echo-1_bold.nii.gz/merge/vol0000_unwarped_merged.nii.gz /scratch/hb93/rob_testing_fmri/work/fmriprep_wf/single_subject_01_wf/func_preproc_task_rest_echo_1_wf/unwarp_wf/_bold_file_..scratch..hb93..rob_testing_fmri..BIDS..sub-01..func..sub-01_task-rest_echo-2_bold.nii.gz_name_source_..scratch..hb93..rob_testing_fmri..BIDS..sub-01..func..sub-01_task-rest_echo-2_bold.nii.gz/merge/vol0000_unwarped_merged.nii.gz /scratch/hb93/rob_testing_fmri/work/fmriprep_wf/single_subject_01_wf/func_preproc_task_rest_echo_1_wf/unwarp_wf/_bold_file_..scratch..hb93..rob_testing_fmri..BIDS..sub-01..func..sub-01_task-rest_echo-3_bold.nii.gz_name_source_..scratch..hb93..rob_testing_fmri..BIDS..sub-01..func..sub-01_task-rest_echo-3_bold.nii.gz/merge/vol0000_unwarped_merged.nii.gz -e 15.0 33.25 51.5 --mask /scratch/hb93/rob_testing_fmri/work/fmriprep_wf/single_subject_01_wf/func_preproc_task_rest_echo_1_wf/unwarp_wf/brainextraction_wf/_bold_file_..scratch..hb93..rob_testing_fmri..BIDS..sub-01..func..sub-01_task-rest_echo-1_bold.nii.gz_name_source_..scratch..hb93..rob_testing_fmri..BIDS..sub-01..func..sub-01_task-rest_echo-1_bold.nii.gz/masker/clipped_mask.nii.gz --fittype curvefit
Standard output:
Standard error:
/opt/conda/lib/python3.8/site-packages/nilearn/datasets/__init__.py:93: FutureWarning: Fetchers from the nilearn.datasets module will be updated in version 0.9 to return python strings instead of bytes and Pandas dataframes instead of Numpy arrays.
warn("Fetchers from the nilearn.datasets module will be "
INFO t2smap:t2smap_workflow:229 Using output directory: /scratch/hb93/rob_testing_fmri/work/fmriprep_wf/single_subject_01_wf/func_preproc_task_rest_echo_1_wf/bold_t2smap_wf/t2smap_node
INFO t2smap:t2smap_workflow:239 Loading input data: ['/scratch/hb93/rob_testing_fmri/work/fmriprep_wf/single_subject_01_wf/func_preproc_task_rest_echo_1_wf/unwarp_wf/_bold_file_..scratch..hb93..rob_testing_fmri..BIDS..sub-01..func..sub-01_task-rest_echo-1_bold.nii.gz_name_source_..scratch..hb93..rob_testing_fmri..BIDS..sub-01..func..sub-01_task-rest_echo-1_bold.nii.gz/merge/vol0000_unwarped_merged.nii.gz', '/scratch/hb93/rob_testing_fmri/work/fmriprep_wf/single_subject_01_wf/func_preproc_task_rest_echo_1_wf/unwarp_wf/_bold_file_..scratch..hb93..rob_testing_fmri..BIDS..sub-01..func..sub-01_task-rest_echo-2_bold.nii.gz_name_source_..scratch..hb93..rob_testing_fmri..BIDS..sub-01..func..sub-01_task-rest_echo-2_bold.nii.gz/merge/vol0000_unwarped_merged.nii.gz', '/scratch/hb93/rob_testing_fmri/work/fmriprep_wf/single_subject_01_wf/func_preproc_task_rest_echo_1_wf/unwarp_wf/_bold_file_..scratch..hb93..rob_testing_fmri..BIDS..sub-01..func..sub-01_task-rest_echo-3_bold.nii.gz_name_source_..scratch..hb93..rob_testing_fmri..BIDS..sub-01..func..sub-01_task-rest_echo-3_bold.nii.gz/merge/vol0000_unwarped_merged.nii.gz']
Killed
Return code: 137
topup was run, but the problematic fix_coeff node did not. I'll try re-running a few times in case of stochasticity in the execution order.
I was testing a hotfix of pepolar-2628 with the change I mentioned at the end of my last post, on the same data as before, and got past topup without error, but then processing stalled for 48 hours and timed out during t2smap. I figured it could have been holiday- or system-maintenance-related, but after seeing @Lestropie's error I'm less sure now:
[Node] Setting-up "fmriprep_wf.single_subject_<SUB>_wf.func_preproc_ses_<SES>_task_rest_acq_3p0noNORDIC_run_1_echo_1_wf.bold_t2smap_wf.t2smap_node" in "/wd/fmriprep_wf/single_subject_<SUB>_wf/func_preproc_ses_<SES>_task_rest_acq_3p0noNORDIC_run_1_echo_1_wf/bold_t2smap_wf/t2smap_node".
211127-08:00:01,53 nipype.workflow INFO:
[Node] Running "t2smap_node" ("fmriprep.interfaces.multiecho.T2SMap"), a CommandLine Interface with command:
t2smap -d /wd/fmriprep_wf/single_subject_<SUB>_wf/func_preproc_ses_<SES>_task_rest_acq_3p0noNORDIC_run_1_echo_1_wf/unwarp_wf/_bold_file_..input..sub-<SUB>..ses-<SES>..func..sub-<SUB>_ses-<SES>_task-rest_acq-3p0noNORDIC_run-1_echo-1_bold.nii.gz_name_source_..input..sub-<SUB>..ses-<SES>..func..sub-<SUB>_ses-<SES>_task-rest_acq-3p0noNORDIC_run-1_echo-1_bold.nii.gz/merge/vol0000_unwarped_merged.nii.gz /wd/fmriprep_wf/single_subject_<SUB>_wf/func_preproc_ses_<SES>_task_rest_acq_3p0noNORDIC_run_1_echo_1_wf/unwarp_wf/_bold_file_..input..sub-<SUB>..ses-<SES>..func..sub-<SUB>_ses-<SES>_task-rest_acq-3p0noNORDIC_run-1_echo-2_bold.nii.gz_name_source_..input..sub-<SUB>..ses-<SES>..func..sub-<SUB>_ses-<SES>_task-rest_acq-3p0noNORDIC_run-1_echo-2_bold.nii.gz/merge/vol0000_unwarped_merged.nii.gz /wd/fmriprep_wf/single_subject_<SUB>_wf/func_preproc_ses_<SES>_task_rest_acq_3p0noNORDIC_run_1_echo_1_wf/unwarp_wf/_bold_file_..input..sub-<SUB>..ses-<SES>..func..sub-<SUB>_ses-<SES>_task-rest_acq-3p0noNORDIC_run-1_echo-3_bold.nii.gz_name_source_..input..sub-<SUB>..ses-<SES>..func..sub-<SUB>_ses-<SES>_task-rest_acq-3p0noNORDIC_run-1_echo-3_bold.nii.gz/merge/vol0000_unwarped_merged.nii.gz /wd/fmriprep_wf/single_subject_<SUB>_wf/func_preproc_ses_<SES>_task_rest_acq_3p0noNORDIC_run_1_echo_1_wf/unwarp_wf/_bold_file_..input..sub-<SUB>..ses-<SES>..func..sub-<SUB>_ses-<SES>_task-rest_acq-3p0noNORDIC_run-1_echo-4_bold.nii.gz_name_source_..input..sub-<SUB>..ses-<SES>..func..sub-<SUB>_ses-<SES>_task-rest_acq-3p0noNORDIC_run-1_echo-4_bold.nii.gz/merge/vol0000_unwarped_merged.nii.gz /wd/fmriprep_wf/single_subject_<SUB>_wf/func_preproc_ses_<SES>_task_rest_acq_3p0noNORDIC_run_1_echo_1_wf/unwarp_wf/_bold_file_..input..sub-<SUB>..ses-<SES>..func..sub-<SUB>_ses-<SES>_task-rest_acq-3p0noNORDIC_run-1_echo-5_bold.nii.gz_name_source_..input..sub-<SUB>..ses-<SES>..func..sub-<SUB>_ses-<SES>_task-rest_acq-3p0noNORDIC_run-1_echo-5_bold.nii.gz/merge/vol0000_unwarped_merged.nii.gz -e 14.6 33.84 53.080000000000005 72.32 91.56 --mask /wd/fmriprep_wf/single_subject_<SUB>_wf/func_preproc_ses_<SES>_task_rest_acq_3p0noNORDIC_run_1_echo_1_wf/unwarp_wf/brainextraction_wf/_bold_file_..input..sub-<SUB>..ses-<SES>..func..sub-<SUB>_ses-<SES>_task-rest_acq-3p0noNORDIC_run-1_echo-1_bold.nii.gz_name_source_..input..sub-<SUB>..ses-<SES>..func..sub-<SUB>_ses-<SES>_task-rest_acq-3p0noNORDIC_run-1_echo-1_bold.nii.gz/masker/clipped_mask.nii.gz --fittype curvefit
Might be different manifestations of an underlying memory error:
slurmstepd: error: Detected 1 oom-kill event(s) in step 23256997.batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.
(note that this is the same SLURM job ID as my post above)
This may require a separate discussion back at fmriprep.
@Lestropie it seems that the node getting fMRIPrep killed is t2smap_node rather than fix_coeff, but I know fix_coeff is a memory hog, so it could still be the problem. Have you managed to get past t2smap_node by allocating more memory or reducing the parallelism with a lower --nthreads?
Switching from 16 GB & 4 threads to 32 GB & 2 threads resulted in successful execution for me.
I'll stop that particular thread here since it's not strictly related to the issue title, but it may be worth revising the documentation (here and/or in the software that depends on this package) to reflect this memory requirement.
(Tagging #154 as this appears to be related.)
In bspline.py, _fix_topup_fieldcoeff assumes the fieldcoeff map output by FSL topup is padded by 3 voxels:
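Roughly, the check (paraphrased here for illustration; not the verbatim sdcflows source) looks like this:

```python
# Illustrative paraphrase of the shape check in _fix_topup_fieldcoeff
# (not the verbatim sdcflows source).
import numpy as np
import nibabel as nb

def check_coeff_shape(in_coeff, fmap_ref):
    coeff_shape = np.array(nb.load(in_coeff).shape[:3])
    ref_shape = np.array(nb.load(fmap_ref).shape[:3])
    # topup stores the knot spacing (in voxels) as the coefficients' "voxel size"
    factors = np.array(nb.load(in_coeff).header.get_zooms()[:3])
    # expected grid: reference shape divided by the knot spacing,
    # plus 3 voxels of padding on every axis
    expected = np.ceil(ref_shape / factors) + 3
    if not np.array_equal(coeff_shape, expected):
        raise ValueError(
            f"Shape of coefficients file {coeff_shape} does not meet the "
            f"expected shape {expected} (topup factors are {factors})."
        )
```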
But the 3-voxel padding does not seem to be applied when the knot spacing is one voxel, resulting in an error due to the discrepancy between the expected and actual shapes of the coefficient and reference volumes (see error below). Assuming the documentation of --warpres at http://ftp.nmr.mgh.harvard.edu/pub/dist/freesurfer/tutorial_packages/centos6/fsl_507/doc/wiki/topup(2f)TopupUsersGuide.html is accurate for FSL 5.0.11, this would occur when the smallest voxel dimension of the input image (--imain) is greater than 1/2 the final knot spacing (--warpres).
On fMRIPrep 21.0.0rc2 I have had this error with 3.0 and 4.0 mm pepolar fmaps, but not with 1.6 and 2.0 mm fmaps. (The final warpres in b02b0.cnf is 4 mm.)
For the below error, the reference fmap was 74 x 74 x 48, 3.0 mm isotropic.
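As a back-of-the-envelope check of this diagnosis (assuming the final --warpres of 4 mm from b02b0.cnf, and that topup rounds it to a whole number of voxels with a minimum knot spacing of 1 voxel; neither assumption is taken from the FSL source):

```python
warpres_mm = 4.0
for voxel_mm in (1.6, 2.0, 3.0, 4.0):
    knots_vox = max(round(warpres_mm / voxel_mm), 1)  # knot spacing in voxels
    print(f"{voxel_mm} mm voxels -> knot spacing of {knots_vox} voxel(s)")
# 1.6 and 2.0 mm fmaps -> 2-voxel spacing (padded coefficient grid, no error);
# 3.0 and 4.0 mm fmaps -> 1-voxel spacing (unpadded grid, the shape check fails).

# For the 74 x 74 x 48, 3.0 mm reference above, the check would then expect a
# (74 + 3) x (74 + 3) x (48 + 3) = 77 x 77 x 51 grid, while topup writes an
# unpadded 74 x 74 x 48 grid.
```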
I'm not especially well-versed in NiPreps or FSL, so I would greatly appreciate confirmation whether this is an accurate diagnosis of the issue. Thanks!