jrdalenberg closed this issue 2 years ago.
Could you share your file tree and linking metadata (IntendedFor or B0FieldSource/B0FieldIdentifier)?
.
├── dataset_description.json
├── README
└── sub-p1197
├── anat
│ ├── sub-p1197_T1w.json
│ └── sub-p1197_T1w.nii.gz
├── fmap
│ ├── sub-p1197_dir-PA_run-1_epi.json
│ ├── sub-p1197_dir-PA_run-1_epi.nii.gz
│ ├── sub-p1197_dir-PA_run-2_epi.json
│ └── sub-p1197_dir-PA_run-2_epi.nii.gz
└── func
├── sub-p1197_task-hands1_echo-1_bold.json
├── sub-p1197_task-hands1_echo-1_bold.nii.gz
├── sub-p1197_task-hands1_echo-2_bold.json
├── sub-p1197_task-hands1_echo-2_bold.nii.gz
├── sub-p1197_task-hands1_echo-3_bold.json
├── sub-p1197_task-hands1_echo-3_bold.nii.gz
├── sub-p1197_task-hands1_events.tsv
├── sub-p1197_task-hands2_echo-1_bold.json
├── sub-p1197_task-hands2_echo-1_bold.nii.gz
├── sub-p1197_task-hands2_echo-2_bold.json
├── sub-p1197_task-hands2_echo-2_bold.nii.gz
├── sub-p1197_task-hands2_echo-3_bold.json
├── sub-p1197_task-hands2_echo-3_bold.nii.gz
├── sub-p1197_task-hands2_events.tsv
├── sub-p1197_task-hands3_bold.json
├── sub-p1197_task-hands3_bold.nii.gz
├── sub-p1197_task-hands3_events.tsv
├── sub-p1197_task-rest1_echo-1_bold.json
├── sub-p1197_task-rest1_echo-1_bold.nii.gz
├── sub-p1197_task-rest1_echo-2_bold.json
├── sub-p1197_task-rest1_echo-2_bold.nii.gz
├── sub-p1197_task-rest1_echo-3_bold.json
├── sub-p1197_task-rest1_echo-3_bold.nii.gz
├── sub-p1197_task-rest2_bold.json
└── sub-p1197_task-rest2_bold.nii.gz
{
"Modality": "MR",
"MagneticFieldStrength": 3,
"ImagingFrequency": 123.172,
"Manufacturer": "Siemens",
"ManufacturersModelName": "Prisma",
"SliceThickness": 3.5,
"SpacingBetweenSlices": 3.5,
"SAR": 0.0404926,
"EchoNumber": 1,
"EchoTime": 0.012,
"RepetitionTime": 1.101,
"FlipAngle": 45,
"PartialFourier": 0.75,
"BaseResolution": 64,
"ShimSetting": [
-10836,
-10740,
-1179,
526,
29,
-254,
54,
43 ],
"TxRefAmp": 226.512,
"PhaseResolution": 1,
"ReceiveCoilName": "HeadNeck_64",
"ReceiveCoilActiveElements": "HC1-7",
"PulseSequenceDetails": "%CustomerSeq%_cmrr_mbep2d_bold",
"WipMemBlock": "a6276a2f-f25b-4ad7-920c-d4068b5aa31f||Sequence: R016 ve11c/master r/8780670; Nov 27 2017 15:05:12 by eja",
"ConsistencyInfo": "N4_VE11C_LATEST_20160120",
"MultibandAccelerationFactor": 4,
"PercentPhaseFOV": 100,
"EchoTrainLength": 48,
"PhaseEncodingSteps": 48,
"AcquisitionMatrixPE": 64,
"ReconMatrixPE": 64,
"BandwidthPerPixelPhaseEncode": 31.888,
"EffectiveEchoSpacing": 0.000489996,
"DerivedVendorReportedEchoSpacing": 0.000489996,
"TotalReadoutTime": 0.0308698,
"PixelBandwidth": 2605,
"DwellTime": 3e-06,
"PhaseEncodingDirection": "j",
"SliceTiming": [
0.99,
0.9,
0.81,
0.72,
0.63,
0.54,
0.45,
0.36,
0.27,
0.18,
0.09,
0,
0.99,
0.9,
0.81,
0.72,
0.63,
0.54,
0.45,
0.36,
0.27,
0.18,
0.09,
0,
0.99,
0.9,
0.81,
0.72,
0.63,
0.54,
0.45,
0.36,
0.27,
0.18,
0.09,
0,
0.99,
0.9,
0.81,
0.72,
0.63,
0.54,
0.45,
0.36,
0.27,
0.18,
0.09,
0 ],
"ImageOrientationPatientDICOM": [
0.999938,
-0.000861835,
-0.0110635,
-6.22218e-09,
0.99698,
-0.0776638 ],
"InPlanePhaseEncodingDirectionDICOM": "COL",
"ConversionSoftware": "dcm2niix",
"ConversionSoftwareVersion": "v1.0.20190902",
"IntendedFor": ["func/sub-p1197_task-rest1_echo-1_bold.nii.gz",
"func/sub-p1197_task-rest1_echo-2_bold.nii.gz",
"func/sub-p1197_task-rest1_echo-3_bold.nii.gz",
"func/sub-p1197_task-hands1_echo-1_bold.nii.gz",
"func/sub-p1197_task-hands1_echo-2_bold.nii.gz",
"func/sub-p1197_task-hands1_echo-3_bold.nii.gz",
"func/sub-p1197_task-hands2_echo-1_bold.nii.gz",
"func/sub-p1197_task-hands2_echo-2_bold.nii.gz",
"func/sub-p1197_task-hands2_echo-3_bold.nii.gz"]
}
And `sub-p1197_dir-PA_run-2_epi.json`?
I only ran `-t hands2` here. I tested `-t rest2` (single echo) just now, and it crashes at the same stage.
{
"Modality": "MR",
"MagneticFieldStrength": 3,
"ImagingFrequency": 123.172,
"Manufacturer": "Siemens",
"ManufacturersModelName": "Prisma",
"SliceThickness": 2,
"SpacingBetweenSlices": 2,
"SAR": 0.0472272,
"EchoTime": 0.034,
"RepetitionTime": 1.6,
"FlipAngle": 70,
"PartialFourier": 0.75,
"BaseResolution": 114,
"ShimSetting": [
-10838,
-10744,
-1175,
501,
34,
-224,
48,
48 ],
"TxRefAmp": 226.512,
"PhaseResolution": 1,
"ReceiveCoilName": "HeadNeck_64",
"ReceiveCoilActiveElements": "HC1-6",
"PulseSequenceDetails": "%CustomerSeq%_cmrr_mbep2d_bold",
"WipMemBlock": "2bd175ea-29bd-48ba-b808-821d1099bac7||Sequence: R016 ve11c/master r/8780670; Nov 27 2017 15:05:12 by eja",
"ConsistencyInfo": "N4_VE11C_LATEST_20160120",
"MultibandAccelerationFactor": 4,
"PercentPhaseFOV": 100,
"EchoTrainLength": 86,
"PhaseEncodingSteps": 86,
"AcquisitionMatrixPE": 114,
"ReconMatrixPE": 114,
"BandwidthPerPixelPhaseEncode": 13.495,
"EffectiveEchoSpacing": 0.000650013,
"DerivedVendorReportedEchoSpacing": 0.000650013,
"TotalReadoutTime": 0.0734515,
"PixelBandwidth": 1825,
"DwellTime": 2.4e-06,
"PhaseEncodingDirection": "j",
"SliceTiming": [
1.4925,
1.405,
1.3175,
1.23,
1.1425,
1.055,
0.9675,
0.8775,
0.79,
0.7025,
0.615,
0.5275,
0.44,
0.3525,
0.265,
0.1775,
0.09,
0,
1.4925,
1.405,
1.3175,
1.23,
1.1425,
1.055,
0.9675,
0.8775,
0.79,
0.7025,
0.615,
0.5275,
0.44,
0.3525,
0.265,
0.1775,
0.09,
0,
1.4925,
1.405,
1.3175,
1.23,
1.1425,
1.055,
0.9675,
0.8775,
0.79,
0.7025,
0.615,
0.5275,
0.44,
0.3525,
0.265,
0.1775,
0.09,
0,
1.4925,
1.405,
1.3175,
1.23,
1.1425,
1.055,
0.9675,
0.8775,
0.79,
0.7025,
0.615,
0.5275,
0.44,
0.3525,
0.265,
0.1775,
0.09,
0 ],
"ImageOrientationPatientDICOM": [
0.999927,
-0.00148597,
-0.0120092,
-2.02501e-08,
0.992431,
-0.122801 ],
"InPlanePhaseEncodingDirectionDICOM": "COL",
"ConversionSoftware": "dcm2niix",
"ConversionSoftwareVersion": "v1.0.20190902",
"IntendedFor": ["func/sub-p1197_task-rest2_bold.nii.gz",
"func/sub-p1197_task-hands3_bold.nii.gz"]
}
The PA scans use the exact same protocol as their AP functional run counterparts, with the "Invert RO/PE polarity" option for the 180-degree in-plane rotation activated, as described in the CMRR multiband EPI C2P application guide (Release 016a, 19 December 2017).
Okay, I think I got it. If not, see below the horizontal line for my initial thoughts...
Can you add the following line to `run-1_epi.json`:
"B0FieldIdentifier": "MEFieldMap"
and this to `run-2_epi.json`:
"B0FieldIdentifier": "SEFieldMap"
And then add the following to your multi-echo BOLD series:
"B0FieldSource": "MEFieldMap"
and this to your single-echo BOLD series:
"B0FieldSource": "SEFieldMap"
The names don't matter, so feel free to change them as you like. Anyway, it looks like when we shifted to our fieldmap-centric preprocessing (as opposed to starting from the BOLD and finding the associated fieldmaps), we made it harder to correctly interpret `IntendedFor`. That's going to be a bug that needs fixing, but the fix above should help you get moving on your data.
This, by the way, is a relatively recent addition to the BIDS spec: https://bids-specification.readthedocs.io/en/latest/04-modality-specific-files/01-magnetic-resonance-imaging-data.html#expressing-the-mr-protocol-intent-for-fieldmaps
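If editing the sidecars by hand feels error-prone, the fields can be injected with a few lines of Python. This is a minimal sketch; the path and identifier in the commented example are illustrative, not part of the original suggestion:

```python
import json
from pathlib import Path

def add_field(sidecar: Path, key: str, value: str) -> None:
    """Insert or overwrite a single metadata field in a BIDS JSON sidecar."""
    meta = json.loads(sidecar.read_text())
    meta[key] = value
    sidecar.write_text(json.dumps(meta, indent=2))

# Illustrative wiring for a dataset like the one above (adjust paths/names):
# add_field(Path("sub-p1197/fmap/sub-p1197_dir-PA_run-1_epi.json"),
#           "B0FieldIdentifier", "MEFieldMap")
```

The same helper works for `B0FieldSource` on the BOLD sidecars.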
Yes, this looks like a bug, and it's here:
On L443, `sorted(suffices) == ["epi"]` is returning `False`, so we need to understand what's going on there. I'm actually on holidays so I won't be able to dig in deeply before January, but if you're willing to fiddle around a bit, you can use `fmriprep-docker` with the `--patch fmriprep=<fmriprep-repo>/fmriprep` option to put in debug statements and figure out what's going on there. (There are docs for setting up a development environment, but the patching section is out of date. See the wrapper usage.)
If I had to guess, I'd say we're probably picking up both EPI images and getting `sorted(suffices) == ["epi", "epi"]`. If that's the case, you could hack around it by temporarily removing the inapplicable EPI file. The long-term solution is probably going to involve stepping through `find_estimators` to figure out why we're getting too many fieldmaps in one estimator.
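That guess is easy to illustrate. The following is a sketch of the suspected condition only, not the actual sdcflows code:

```python
# Suspected situation: both PA fieldmap runs were collected into one estimator,
# so the list of suffixes has two entries instead of one.
suffices = ["epi", "epi"]            # one suffix per file the estimator picked up
print(sorted(suffices) == ["epi"])   # False: the single-EPI check fails

# With only one EPI file in the estimator, the check passes.
print(sorted(["epi"]) == ["epi"])    # True
```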
> ... but the fix above should help you get moving on your data.
I added the fields as you described and end up with the error below. I am not sure why it's not reading the `epi` tag in sdcflows/fieldmaps.py#L306.
I run into the same error for both the ME and SE data.
211217-08:28:31,805 nipype.workflow IMPORTANT:
Building fMRIPrep's workflow:
* BIDS dataset path: /data.
* Participant list: ['p1197'].
* Run identifier: 20211217-082820_716b29ba-c5aa-4eff-8685-1889b892dc29.
* Output spaces: MNI152NLin2009cAsym:res-native.
* Pre-run FreeSurfer's SUBJECTS_DIR: /out/sourcedata/freesurfer.
Process Process-2:
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/opt/conda/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/opt/conda/lib/python3.8/site-packages/fmriprep/cli/workflow.py", line 118, in build_workflow
retval["workflow"] = init_fmriprep_wf()
File "/opt/conda/lib/python3.8/site-packages/fmriprep/workflows/base.py", line 85, in init_fmriprep_wf
single_subject_wf = init_single_subject_wf(subject_id)
File "/opt/conda/lib/python3.8/site-packages/fmriprep/workflows/base.py", line 324, in init_single_subject_wf
fmap_estimators = find_estimators(
File "/opt/conda/lib/python3.8/site-packages/sdcflows/utils/wrangler.py", line 270, in find_estimators
_e = fm.FieldmapEstimation([
File "<attrs generated init sdcflows.fieldmaps.FieldmapEstimation>", line 7, in __init__
self.__attrs_post_init__()
File "/opt/conda/lib/python3.8/site-packages/sdcflows/fieldmaps.py", line 398, in __attrs_post_init__
raise ValueError("Insufficient sources to estimate a fieldmap.")
ValueError: Insufficient sources to estimate a fieldmap.
Yes, you need to specify the opposed gradient direction to estimate a fieldmap.
On the metadata of the SE echos and runs that should be used in the estimation, please add:
"B0FieldIdentifier": "SEFieldMap"
Correspondingly, only on the echos and runs that should be used on the ME acquisition side:
"B0FieldIdentifier": "MEFieldMap"
> On L443, `sorted(suffices) == ["epi"]` is returning `False` [...] The long term solution is probably going to involve stepping through `find_estimators` to figure out why we're getting too many fieldmaps in one estimator.
I believe this is expected. When the fieldmaps do not contain both blip directions, then it is really hard to predict what the researcher was thinking w.r.t. fieldmaps when the protocol was designed. The solution requires, as you suggested, the use of `B0FieldIdentifier` and `B0FieldSource`.
> I believe this is expected. [...] The solution requires, as you suggested, the use of `B0FieldIdentifier` and `B0FieldSource`.
You might have expected it, but I did not realize that we were disabling a very common fieldmap approach. I think we should restore it, make our backwards-compatible interpretation very clear in the docs, possibly warning in the workflow, and direct people to the new fields if they need something else.
The `B0Field*` metadata isn't even in a released version of the spec yet (it's in `latest`), so this is going to disable pepolar correction for virtually all OpenNeuro datasets.
The interpretation that I believe is consistent with previous behavior is: if a single-blip EPI fieldmap is found, for each BOLD file in its `IntendedFor` list, verify that the BOLD file has the reverse PE direction, and then compute a unique fieldmap from the (EPI, BOLD) pair.
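That interpretation can be sketched in a few lines. These are hypothetical helpers, not fMRIPrep's implementation; the direction strings follow the BIDS `PhaseEncodingDirection` convention (`"j"` vs `"j-"`):

```python
def opposed(dir_a: str, dir_b: str) -> bool:
    """True if two BIDS PhaseEncodingDirection values are anti-parallel, e.g. 'j' vs 'j-'."""
    return dir_a.rstrip("-") == dir_b.rstrip("-") and dir_a != dir_b

def pepolar_partners(epi_pe: str, intended_for: dict) -> list:
    """BOLD files from an IntendedFor list that can pair with the EPI fieldmap.

    `intended_for` maps each BOLD path to its PhaseEncodingDirection; only
    files with the reverse PE direction qualify for an (EPI, BOLD) pair.
    """
    return [bold for bold, pe in intended_for.items() if opposed(epi_pe, pe)]
```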
@oesteban:
> When the fieldmaps do not contain both blip directions, then it is really hard to predict what the researcher was thinking w.r.t. fieldmaps when the protocol was designed.
When I use topup on this data, I use the first 5 AP volumes of `sub-p1197_task-hands2_echo-1_bold.nii.gz` and the first 5 PA volumes of `sub-p1197_dir-PA_run-1_epi.nii.gz`.
Like this:
(Reading this back, it could use a little rework; the hardcoded number of echoes, for example.)
#!/bin/bash
USAGE="Usage: $(basename "$0") [options]
Options:
MANDATORY:
-b: Bids folder for subject (e.g, ./bids)
-s: Subject label (e.g, sub-p1118)
-t: Task label (e.g, rest1)
-d: Derivatives folder (e.g, ./derivatives/meica)
-p: Topup param files (e.g, ./AP_PA_acq_params.txt)
OPTIONAL:
-h: Show this help text"
echo $'
#########################################################
# topup_bash_functions v1.0 by J.R. Dalenberg, Jan 2020 #
#########################################################
\n'
echo "Checking input parameters..."
echo
while getopts 'b:hs:t:d:p:' option; do
case "$option" in
## Folder arguments
b) SOURCEDIR=$OPTARG
[ -d ${SOURCEDIR} ] && echo "Using [$SOURCEDIR] as bids source folder" || { echo ERROR: Folder [$SOURCEDIR] does not exist; echo; echo "$USAGE"; exit; }
;;
s) SUBJECT=$OPTARG
SUBBIDS=${SOURCEDIR}/${SUBJECT}
[ -d ${SUBBIDS} ] && echo "Using [$SUBJECT] as subject label" || { echo ERROR: Folder [${SUBBIDS}] does not exist; echo; echo "$USAGE"; exit; }
;;
d) WORKDIR=$OPTARG
[ -d ${WORKDIR} ] && echo "Using [$WORKDIR] as derivatives folder" || { echo ERROR: Folder [$WORKDIR] does not exist; echo; echo "$USAGE"; exit; }
;;
p) PARAM=$OPTARG
[ -f ${PARAM} ] && echo "Using [$PARAM] as param file" || { echo "ERROR: Param file [$PARAM] does not exist"; echo; echo "$USAGE"; exit; }
;;
t) TASK=$OPTARG
echo "Using [$TASK] as task label"
[ "$(ls ${SOURCEDIR}/${SUBJECT}/func/*${TASK}*echo-1*.gz 2>/dev/null)" ] && { ECHO1=$(ls ${SOURCEDIR}/${SUBJECT}/func/*${TASK}*echo-1*.gz); echo "found echo-1 file at $ECHO1"; } || { echo "ERROR: No echo-1 file matching [*${TASK}*echo-1*.gz]"; echo; echo "$USAGE"; exit; }
[ "$(ls ${SOURCEDIR}/${SUBJECT}/func/*${TASK}*echo-2*.gz 2>/dev/null)" ] && { ECHO2=$(ls ${SOURCEDIR}/${SUBJECT}/func/*${TASK}*echo-2*.gz); echo "found echo-2 file at $ECHO2"; } || { echo "ERROR: No echo-2 file matching [*${TASK}*echo-2*.gz]"; echo; echo "$USAGE"; exit; }
[ "$(ls ${SOURCEDIR}/${SUBJECT}/func/*${TASK}*echo-3*.gz 2>/dev/null)" ] && { ECHO3=$(ls ${SOURCEDIR}/${SUBJECT}/func/*${TASK}*echo-3*.gz); echo "found echo-3 file at $ECHO3"; } || { echo "ERROR: No echo-3 file matching [*${TASK}*echo-3*.gz]"; echo; echo "$USAGE"; exit; }
;;
## Help and error handling
h) echo "$USAGE"
exit
;;
:) printf "missing argument for -%s\n" "$OPTARG" >&2
echo "$USAGE" >&2
exit
;;
\?) printf "illegal option: -%s\n" "$OPTARG" >&2
echo "$USAGE" >&2
exit
;;
esac
done
## Checking absence of params.
if [[ -z $SOURCEDIR || -z $SUBJECT || -z $WORKDIR || -z $PARAM || -z $TASK ]]; then
echo 'ERROR: One or more variables are undefined'
echo
echo "$USAGE" >&2
exit 1
fi
PA_FILE=${SOURCEDIR}/${SUBJECT}/fmap/${SUBJECT}_dir-PA_run-1_epi.nii.gz
[ -f ${PA_FILE} ] && echo "Using [$PA_FILE] as PA file" || { echo "ERROR: PA file [$PA_FILE] does not exist"; echo; echo "$USAGE"; exit; }
shift $((OPTIND - 1))
##Create directories
mkdirs() {
SUBDIR=${WORKDIR}/${SUBJECT}
echo "Creating Subject directories at ${SUBDIR}"
[ -d ${SUBDIR} ] || mkdir ${SUBDIR}
[ -d ${SUBDIR}/topup ] || mkdir ${SUBDIR}/topup
[ -d ${SUBDIR}/func ] || mkdir ${SUBDIR}/func
}
##Create AP and PA 4D reference files
AP_PA_vols() {
echo "Creating AP and PA volumes for ${SUBJECT} task ${TASK}"
fslroi ${ECHO1} ${SUBDIR}/topup/${SUBJECT}_task-${TASK}_B0_AP.nii.gz 0 5
fslroi ${PA_FILE} ${SUBDIR}/topup/${SUBJECT}_task-${TASK}_B0_PA.nii.gz 0 5
}
## Create B0 4D file
merge_files() {
echo "Creating B0 for ${SUBJECT} task ${TASK}"
##Merge AP and PA files
fslmerge -t ${SUBDIR}/topup/${SUBJECT}_task-${TASK}_B0_AP_PA.nii.gz ${SUBDIR}/topup/${SUBJECT}_task-${TASK}_B0_AP.nii.gz ${SUBDIR}/topup/${SUBJECT}_task-${TASK}_B0_PA.nii.gz
}
## Motion Correct to first Volume before distortion correction
motion_correct() {
echo "Motion correcting ${SUBJECT} task ${TASK}"
##First extract B0 reference volume for motion correction
echo "Grabbing first volume..."
FIRSTVOL=${SUBDIR}/topup/${SUBJECT}_task-${TASK}_B0_AP_PA_vol-0001.nii.gz
fslroi ${SUBDIR}/topup/${SUBJECT}_task-${TASK}_B0_AP_PA.nii.gz ${FIRSTVOL} 0 1
##Start motion correction
echo "Motion correcting ${ECHO1}"
mcflirt -in ${ECHO1} -reffile ${SUBDIR}/topup/${SUBJECT}_task-${TASK}_B0_AP_PA_vol-0001.nii.gz -mats -plots -out ${SUBDIR}/func/${SUBJECT}_task-${TASK}_echo-1_bold_mc.nii.gz
##Apply motion correction rotations to subsequent echos.
echo "Applying motion correction to ${ECHO2}..."
applyxfm4d ${ECHO2} ${FIRSTVOL} ${SUBDIR}/func/${SUBJECT}_task-${TASK}_echo-2_bold_mc.nii.gz ${SUBDIR}/func/${SUBJECT}_task-${TASK}_echo-1_bold_mc.nii.gz.mat -userprefix MAT_
echo "Applying motion correction to ${ECHO3}..."
applyxfm4d ${ECHO3} ${FIRSTVOL} ${SUBDIR}/func/${SUBJECT}_task-${TASK}_echo-3_bold_mc.nii.gz ${SUBDIR}/func/${SUBJECT}_task-${TASK}_echo-1_bold_mc.nii.gz.mat -userprefix MAT_
}
## Slice Timing
get_slice_timing() {
echo "Getting Slice Timing Params for ${SUBJECT} task ${TASK}"
./get_slice_timing.py -s ${SUBJECT} -b ${SOURCEDIR} -t ${TASK} -d ${WORKDIR}
}
## Distortion correction
run_topup() {
echo "Running topup for ${SUBJECT} task ${TASK}"
topup \
--imain=${SUBDIR}/topup/${SUBJECT}_task-${TASK}_B0_AP_PA.nii.gz \
--datain=${PARAM} \
--config=b02b0.cnf \
--out=${SUBDIR}/topup/${SUBJECT}_task-${TASK}_bold_topupresults \
--iout=${SUBDIR}/topup/${SUBJECT}_task-${TASK}_bold_hifi \
--fout=${SUBDIR}/topup/${SUBJECT}_task-${TASK}_bold_field
}
## Apply the distortion correction
run_apply_topup() {
echo "Applying topup to ${SUBDIR}/func/${SUBJECT}_task-${TASK}_echo-1_bold_mc.nii.gz..."
applytopup -i ${SUBDIR}/func/${SUBJECT}_task-${TASK}_echo-1_bold_mc.nii.gz --datain=${PARAM} --inindex=1 --method=jac --topup=${SUBDIR}/topup/${SUBJECT}_task-${TASK}_bold_topupresults --out=${SUBDIR}/func/${SUBJECT}_task-${TASK}_echo-1_bold_mc_dc.nii.gz
echo "Applying topup to ${SUBDIR}/func/${SUBJECT}_task-${TASK}_echo-2_bold_mc.nii.gz..."
applytopup -i ${SUBDIR}/func/${SUBJECT}_task-${TASK}_echo-2_bold_mc.nii.gz --datain=${PARAM} --inindex=1 --method=jac --topup=${SUBDIR}/topup/${SUBJECT}_task-${TASK}_bold_topupresults --out=${SUBDIR}/func/${SUBJECT}_task-${TASK}_echo-2_bold_mc_dc.nii.gz
echo "Applying topup to ${SUBDIR}/func/${SUBJECT}_task-${TASK}_echo-3_bold_mc.nii.gz..."
applytopup -i ${SUBDIR}/func/${SUBJECT}_task-${TASK}_echo-3_bold_mc.nii.gz --datain=${PARAM} --inindex=1 --method=jac --topup=${SUBDIR}/topup/${SUBJECT}_task-${TASK}_bold_topupresults --out=${SUBDIR}/func/${SUBJECT}_task-${TASK}_echo-3_bold_mc_dc.nii.gz
}
mkdirs
AP_PA_vols
merge_files
motion_correct
get_slice_timing
run_topup
run_apply_topup
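The `-p` parameter file consumed by topup (`--datain`) can be generated from the sidecar metadata. This is a sketch under two stated assumptions: phase encoding lies along the second axis (BIDS `"j"`/`"j-"`), and `TotalReadoutTime` is taken from the JSON shown above; verify the sign convention against your own data before trusting it:

```python
def topup_datain_row(pe_dir: str, total_readout_time: float) -> str:
    """One row of a topup --datain file: the PE unit vector plus total readout time.

    Assumes in-plane phase encoding along the second (y) axis, i.e. BIDS 'j'/'j-'.
    """
    sign = -1 if pe_dir.endswith("-") else 1
    return f"0 {sign} 0 {total_readout_time:.6f}"

# e.g. 5 AP ('j-') volumes followed by 5 PA ('j') volumes, matching the merge above:
rows = [topup_datain_row("j-", 0.0308698)] * 5 + [topup_datain_row("j", 0.0308698)] * 5
```

Writing `"\n".join(rows)` to a file yields something usable as `AP_PA_acq_params.txt`.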
Is there any update on this bug?
Hi!
I'm also trying out the new 21.0 version on my data, which previously ran neatly with 20.2.3.
I have PA/AP scans, and the sourcedata folder tree is something along these lines:
sourcedata
├── dataset_description.json
├── sub-001
│ └── ses-1
│ ├── anat
│ │ ├── sub-001_ses-1_run-1_T1w.json
│ │ ├── sub-001_ses-1_run-1_T1w.nii.gz
│ │ ├── sub-001_ses-1_run-1_T2w.json
│ │ └── sub-001_ses-1_run-1_T2w.nii.gz
│ │
│ ├── beh
│ ├── fmap
│ │ ├── sub-001_ses-1_run-1_epi.json
│ │ ├── sub-001_ses-1_run-1_epi.nii.gz
│ │ ├── sub-001_ses-1_run-2_epi.json
│ │ ├── sub-001_ses-1_run-2_epi.nii.gz
│ │ ├── sub-001_ses-1_run-3_epi.json
│ │ ├── sub-001_ses-1_run-3_epi.nii.gz
│ │ ├── sub-001_ses-1_run-4_epi.json
│ │ └── sub-001_ses-1_run-4_epi.nii.gz
│ └── func
│ ├── sub-001_ses-1_task-FA_acq-nordic_run-1_bold.json
│ ├── sub-001_ses-1_task-FA_acq-nordic_run-1_bold.nii.gz
│ ├── sub-001_ses-1_task-FA_acq-nordic_run-2_bold.json
│ ├── sub-001_ses-1_task-FA_acq-nordic_run-2_bold.nii.gz
│ ├── sub-001_ses-1_task-FA_acq-standard_run-1_bold.json
│ ├── sub-001_ses-1_task-FA_acq-standard_run-1_bold.nii.gz
│ ├── sub-001_ses-1_task-FA_acq-standard_run-2_bold.json
│ ├── sub-001_ses-1_task-FA_acq-standard_run-2_bold.nii.gz
│ ├── sub-001_ses-1_task-pRF_acq-nordic_run-1_bold.json
│ ├── sub-001_ses-1_task-pRF_acq-nordic_run-1_bold.nii.gz
│ ├── sub-001_ses-1_task-pRF_acq-nordic_run-2_bold.json
│ ├── sub-001_ses-1_task-pRF_acq-nordic_run-2_bold.nii.gz
│ ├── sub-001_ses-1_task-pRF_acq-standard_run-1_bold.json
│ ├── sub-001_ses-1_task-pRF_acq-standard_run-1_bold.nii.gz
│ ├── sub-001_ses-1_task-pRF_acq-standard_run-2_bold.json
│ └── sub-001_ses-1_task-pRF_acq-standard_run-2_bold.nii.gz
├── sub-002
│ └── ses-1
│ ├── anat
.
.
.
So one fieldmap file will be used for several bold runs. For example:
{...,
"IntendedFor": ["ses-1/func/sub-001_ses-1_task-FA_acq-standard_run-1_bold.nii.gz",
"ses-1/func/sub-001_ses-1_task-pRF_acq-standard_run-1_bold.nii.gz",
"ses-1/func/sub-001_ses-1_task-FA_acq-nordic_run-1_bold.nii.gz",
"ses-1/func/sub-001_ses-1_task-pRF_acq-nordic_run-1_bold.nii.gz"],
"B0FieldIdentifier": "pepolar_fmap1" }
- bold.json (for the abovementioned files):
{..., "B0FieldSource": "pepolar_fmap1" }
However when I try to run it the following error appears:
220107-16:48:33,71 nipype.workflow IMPORTANT: Building fMRIPrep's workflow:
Is there something I'm not setting correctly? Thanks!
Hi, thanks for following up! We had a call about this this morning, and are going to do two things:
1) Restore the old interpretation of IntendedFor
to make sure that users can get that behavior, if they desire. (Long-term, probably 21.0.1 or 21.0.2.)
2) Clarify the metadata necessary to achieve what you actually want to achieve. (In this post)
Case 1: a matched AP/PA fieldmap pair

sub-01/
    func/
        sub-01_task-rest_dir-PA_bold.json
        sub-01_task-rest_dir-PA_bold.nii.gz
    fmap/
        sub-01_dir-AP_epi.json
        sub-01_dir-AP_epi.nii.gz
        sub-01_dir-PA_epi.json
        sub-01_dir-PA_epi.nii.gz
In this case, both `sub-01_dir-AP_epi.json` and `sub-01_dir-PA_epi.json` would include something like:
"B0FieldIdentifier": "pepolar",
and `sub-01_task-rest_dir-PA_bold.json` would include:
"B0FieldSource": "pepolar",
Case 2: a single AP fieldmap, combined with volumes from the BOLD run

sub-01/
    func/
        sub-01_task-rest_dir-PA_bold.json
        sub-01_task-rest_dir-PA_bold.nii.gz
    fmap/
        sub-01_dir-AP_epi.json
        sub-01_dir-AP_epi.nii.gz
To achieve the expected result, where the first few volumes of the BOLD run are combined with the EPI to estimate the fieldmap, we need to add `B0FieldIdentifier` to both `sub-01_dir-AP_epi.json` and `sub-01_task-rest_dir-PA_bold.json`:
"B0FieldIdentifier": "pepolar",
so `sub-01_task-rest_dir-PA_bold.json` would include:
"B0FieldIdentifier": "pepolar",
"B0FieldSource": "pepolar",
Case 3: one AP fieldmap, each BOLD run corrected independently

sub-01/
    func/
        sub-01_task-rest_dir-PA_run-1_bold.json
        sub-01_task-rest_dir-PA_run-1_bold.nii.gz
        sub-01_task-rest_dir-PA_run-2_bold.json
        sub-01_task-rest_dir-PA_run-2_bold.nii.gz
    fmap/
        sub-01_dir-AP_epi.json
        sub-01_dir-AP_epi.nii.gz
Because we're correcting each run independently, we need to use multiple identifiers for `sub-01_dir-AP_epi.json`:
"B0FieldIdentifier": ["pepolar1", "pepolar2"],
And then each BOLD file would include itself as part of one of these identifiers. For run-1:
"B0FieldIdentifier": "pepolar1",
"B0FieldSource": "pepolar1",
and for run-2:
"B0FieldIdentifier": "pepolar2",
"B0FieldSource": "pepolar2",
Case 4: one AP fieldmap, a single estimate shared across BOLD runs

sub-01/
    func/
        sub-01_task-rest_dir-PA_run-1_bold.json
        sub-01_task-rest_dir-PA_run-1_bold.nii.gz
        sub-01_task-rest_dir-PA_run-2_bold.json
        sub-01_task-rest_dir-PA_run-2_bold.nii.gz
    fmap/
        sub-01_dir-AP_epi.json
        sub-01_dir-AP_epi.nii.gz
In this case, we can use the first volumes of all PA BOLD runs to estimate a single fieldmap that will be used to correct all BOLD runs, so we would use a single identifier for `sub-01_dir-AP_epi.json`:
"B0FieldIdentifier": "pepolar",
And then each BOLD file would include itself as part of this identifier and be corrected by it:
"B0FieldIdentifier": "pepolar",
"B0FieldSource": "pepolar",
Note that these might produce errors; we haven't really tested these cases yet. But we've agreed (please check the above, @oesteban) that we should be supporting these cases.
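Before launching fMRIPrep, one way to sanity-check this wiring is to group the sidecars by `B0FieldIdentifier` and confirm each group contains opposed phase-encoding directions (a pepolar estimator needs both). This is a rough standalone check, not sdcflows' actual logic:

```python
import json
from collections import defaultdict
from pathlib import Path

def check_b0_groups(sidecar_paths):
    """Group JSON sidecars by B0FieldIdentifier and report, per identifier,
    whether the group contains more than one distinct PhaseEncodingDirection."""
    groups = defaultdict(set)
    for path in sidecar_paths:
        meta = json.loads(Path(path).read_text())
        ids = meta.get("B0FieldIdentifier")
        if ids is None:
            continue
        # The spec allows a string or an array of strings here.
        for b0_id in ([ids] if isinstance(ids, str) else ids):
            groups[b0_id].add(meta.get("PhaseEncodingDirection"))
    return {b0_id: len(dirs) > 1 for b0_id, dirs in groups.items()}
```

Any identifier mapping to `False` would be a candidate for the "Insufficient sources" error seen above.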
Hi @effigies ,
Thanks for the reply and for all the work you guys put in maintaining and upgrading fmriprep! :)
I understand that restoring the `IntendedFor` approach will take a while longer, so for now I was trying to find a way of running fMRIPrep on my data with the `B0FieldIdentifier`/`B0FieldSource` pair. From what I can gather, my case should be similar to "Case 3" described above, where I have one fieldmap for multiple bold files.
However, when trying to run it I get the message:
Building fMRIPrep's workflow:
* BIDS dataset path: /project/projects_verissimo/FAM/PILOT_DATA/sourcedata.
* Participant list: ['001'].
* Run identifier: 20220110-095924_47b6557b-0bb5-46f8-a27d-26ea3f38d486.
* Output spaces: T1w fsnative fsaverage:den-164k MNI152NLin2009cAsym:res-native.
* Pre-run FreeSurfer's SUBJECTS_DIR: /project/projects_verissimo/FAM/PILOT_DATA/derivatives/freesurfer.
Process Process-2:
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/opt/conda/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/opt/conda/lib/python3.8/site-packages/fmriprep/cli/workflow.py", line 118, in build_workflow
retval["workflow"] = init_fmriprep_wf()
File "/opt/conda/lib/python3.8/site-packages/fmriprep/workflows/base.py", line 85, in init_fmriprep_wf
single_subject_wf = init_single_subject_wf(subject_id)
File "/opt/conda/lib/python3.8/site-packages/fmriprep/workflows/base.py", line 324, in init_single_subject_wf
fmap_estimators = find_estimators(
File "/opt/conda/lib/python3.8/site-packages/sdcflows/utils/wrangler.py", line 263, in find_estimators
b0_ids = layout.get_B0FieldIdentifiers(**base_entities)
File "/opt/conda/lib/python3.8/site-packages/bids/layout/layout.py", line 670, in get
results = list({
File "/opt/conda/lib/python3.8/site-packages/bids/layout/layout.py", line 671, in <setcomp>
ents[target] for ents in ent_iter if target in ents
TypeError: unhashable type: 'list'
This is because, similar to the example, I was setting `B0FieldIdentifier` as a list of strings, and this is only valid for `B0FieldSource` (as stated in https://bids-specification.readthedocs.io/en/latest/04-modality-specific-files/01-magnetic-resonance-imaging-data.html#echo-planar-imaging-and-b0-mapping).
When I try without the list, i.e.:
{...,
"IntendedFor": ["ses-1/func/sub-001_ses-1_task-FA_acq-standard_run-1_bold.nii.gz",
"ses-1/func/sub-001_ses-1_task-pRF_acq-standard_run-1_bold.nii.gz",
"ses-1/func/sub-001_ses-1_task-FA_acq-nordic_run-1_bold.nii.gz",
"ses-1/func/sub-001_ses-1_task-pRF_acq-nordic_run-1_bold.nii.gz"],
"B0FieldIdentifier": "pepolar_fmap1"
}
{...,
"B0FieldIdentifier": "pepolar_fmap1",
"B0FieldSource": "pepolar_fmap1"
}
I get this message:
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/opt/conda/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/opt/conda/lib/python3.8/site-packages/fmriprep/cli/workflow.py", line 118, in build_workflow
retval["workflow"] = init_fmriprep_wf()
File "/opt/conda/lib/python3.8/site-packages/fmriprep/workflows/base.py", line 85, in init_fmriprep_wf
single_subject_wf = init_single_subject_wf(subject_id)
File "/opt/conda/lib/python3.8/site-packages/fmriprep/workflows/base.py", line 324, in init_single_subject_wf
fmap_estimators = find_estimators(
File "/opt/conda/lib/python3.8/site-packages/sdcflows/utils/wrangler.py", line 379, in find_estimators
fm.FieldmapEstimation(
File "<attrs generated init sdcflows.fieldmaps.FieldmapEstimation>", line 7, in __init__
self.__attrs_post_init__()
File "/opt/conda/lib/python3.8/site-packages/sdcflows/fieldmaps.py", line 417, in __attrs_post_init__
raise ValueError(
ValueError: Multiple ``B0FieldIdentifier`` set: <pepolar_fmap3, pepolar_fmap4, pepolar_fmap1, pepolar_fmap2>
I tried a few more different ways of setting the `B0FieldIdentifier`/`B0FieldSource` pair, to no avail...
If someone knows a workaround, that would be great!
If not, I guess I'll wait for a new version of 21.
Sorry to hear that we don't support this even in the way I suggested yet... I'll be working on that.
One other thing you can consider doing in the meantime is making copies of your files. For example, if you want to do case 3 from above and you want to use the first five volumes of your BOLD runs to estimate the fieldmap, you can extract those volumes as PA fieldmaps:
sub-01/
    func/
        sub-01_task-rest_dir-PA_run-1_bold.json
        sub-01_task-rest_dir-PA_run-1_bold.nii.gz
        sub-01_task-rest_dir-PA_run-2_bold.json
        sub-01_task-rest_dir-PA_run-2_bold.nii.gz
    fmap/
        sub-01_dir-AP_epi.json
        sub-01_dir-AP_epi.nii.gz
$ fslroi sub-01/func/sub-01_task-rest_dir-PA_run-1_bold.nii.gz sub-01/fmap/sub-01_dir-PA_run-1_epi.nii.gz 0 5
$ fslroi sub-01/func/sub-01_task-rest_dir-PA_run-2_bold.nii.gz sub-01/fmap/sub-01_dir-PA_run-2_epi.nii.gz 0 5
$ cp sub-01/fmap/sub-01_dir-AP_epi.nii.gz sub-01/fmap/sub-01_dir-AP_run-1_epi.nii.gz
$ mv sub-01/fmap/sub-01_dir-AP_epi.nii.gz sub-01/fmap/sub-01_dir-AP_run-2_epi.nii.gz
You'll need to copy JSON files as well...
sub-01/
    func/
        sub-01_task-rest_dir-PA_run-1_bold.json
        sub-01_task-rest_dir-PA_run-1_bold.nii.gz
        sub-01_task-rest_dir-PA_run-2_bold.json
        sub-01_task-rest_dir-PA_run-2_bold.nii.gz
    fmap/
        sub-01_dir-AP_run-1_epi.json
        sub-01_dir-AP_run-1_epi.nii.gz
        sub-01_dir-AP_run-2_epi.json
        sub-01_dir-AP_run-2_epi.nii.gz
        sub-01_dir-PA_run-1_epi.json
        sub-01_dir-PA_run-1_epi.nii.gz
        sub-01_dir-PA_run-2_epi.json
        sub-01_dir-PA_run-2_epi.nii.gz
And then you can update the metadata:
`sub-01_task-rest_dir-PA_run-1_bold.json`:
"B0FieldSource": "pepolar1",
`sub-01_task-rest_dir-PA_run-2_bold.json`:
"B0FieldSource": "pepolar2",
`sub-01_dir-AP_run-1_epi.json` + `sub-01_dir-PA_run-1_epi.json`:
"B0FieldIdentifier": "pepolar1",
`sub-01_dir-AP_run-2_epi.json` + `sub-01_dir-PA_run-2_epi.json`:
"B0FieldIdentifier": "pepolar2",
You can adjust this to fit your use case. For example, you could create a single PA fieldmap file using volumes from one or both BOLD runs, using the same `B0FieldIdentifier`/`B0FieldSource` for all files.
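If you script the copies, the sidecar bookkeeping can be scripted too. A sketch with a hypothetical helper (`make_epi_sidecar` is not an fMRIPrep utility; paths and identifier names are placeholders):

```python
import json
from pathlib import Path

def make_epi_sidecar(bold_json: Path, epi_json: Path, b0_id: str) -> None:
    """Derive an epi sidecar from its source BOLD sidecar:
    copy the metadata and tag it with the estimator's identifier."""
    meta = json.loads(bold_json.read_text())
    meta.pop("TaskName", None)  # optional: drop task metadata from the fmap copy
    meta["B0FieldIdentifier"] = b0_id
    epi_json.write_text(json.dumps(meta, indent=2))
```

For example, `make_epi_sidecar(Path(".../sub-01_task-rest_dir-PA_run-1_bold.json"), Path(".../sub-01_dir-PA_run-1_epi.json"), "pepolar1")` would produce the PA run-1 sidecar above.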
Dear @effigies,
If I may be so blunt: do you have any idea how long it may take before the IntendedFor behavior is restored? Should I plan for weeks, or multiple months? Any indication would help me with planning projects. :) Thank you!
Kind regards, Joey Verdijk
Apologies for being a bit opaque. Not sure if I hadn't really assessed it when I wrote my last comment or just failed to communicate.
The fix is currently under review, so I would expect a release before the end of the month. It would be nice to say this week, but everybody has other projects they're working on as well, so everything takes a bit longer than my first estimate.
Hi all, the fix for this will be in 21.0.1, estimated release next week.
One thing to note is that the number of volumes used from EPI or BOLD sources to estimate the B0 field will default to the first 5. Making this adaptive is beyond the scope of something we can fix in 21.0.x. To change this default, use `--topup-max-vols`.
I will leave this discussion open until release.
Oh sweet, thanks!
Amazing!
21.0.1 is released.
> One other thing you can consider doing in the meantime is making copies of your files. [...] You can adjust to fit your use case.
Hi effigies,
I am working on a multi-echo dataset with PEPOLAR fieldmaps with only one opposite direction, i.e.,
./fmap/sub-1_task-verb_dir-AP_echo-1_epi.nii
./fmap/sub-1_task-verb_dir-AP_echo-2_epi.nii
./fmap/sub-1_task-verb_dir-AP_echo-3_epi.nii
as well as 3 ME bold datasets in
./func/sub-1_task-verb_dir-PA_echo-1_bold.nii.gz
./func/sub-1_task-verb_dir-PA_echo-2_bold.nii.gz
./func/sub-1_task-verb_dir-PA_echo-3_bold.nii.gz
I have followed the guidance and set up all the JSON files (B0FieldIdentifier, B0FieldSource, IntendedFor), but the latest fMRIPrep v22.0.1 still threw the same error as the OP: "ValueError: Insufficient sources to estimate a fieldmap." I then followed the instruction in your post, copying the first few TRs of the BOLD files over to the fmap folder, and then fMRIPrep ran fine. My understanding is that this bug was addressed in an earlier release, but somehow it has come back in v22.0.1? This is just a minor thing, but I wanted to point it out. Thanks for your great work!
What happened?
Dear fmriprep team,
Thanks for the update! I am looking forward to testing the changes! (also see https://twitter.com/DalenbergJr/status/1471392667481300994).
I have PA AP scans but PEPOLAR is not working in v21 while it ran in 20.2.5.
The console output tells me: "Sophisticated PEPOLAR schemes are unsupported" but I do not understand why it's "sophisticated". It's a regular PA vs AP setup.
I am looking forward to using this implementation because the previous PEPOLAR implementation failed on half of my multi-echo dataset, and I had to use syn-sdc (and sometimes even remove the phase encoding direction info) to get the distortion correction right.
-Jelle
What command did you use?
What version of fMRIPrep are you running?
21.0.0
How are you running fMRIPrep?
Docker
Is your data BIDS valid?
Yes
Are you reusing any previously computed results?
No
Please copy and paste any relevant log output.
Additional information / screenshots
If you need more info, let me know!