ANTsX / ANTs

Advanced Normalization Tools (ANTs)
Apache License 2.0

Direct functional -> standard transform gives better results than combining functional -> highres and highres -> standard #1431

Closed lbailey25 closed 2 years ago

lbailey25 commented 2 years ago

Dear users,

I am trying to register functional MRI data to standard space. I know that one would typically register functional data to the individual's highres T1 image, separately register the T1 to standard space, and then combine the two transforms. The code I used for this procedure looks like this:

# Compute linear transform for func-> highres
 antsRegistrationSyNQuick.sh -d 3 -e 1 -t a -f highres_image.nii.gz -m functional_image.nii.gz -o func_to_highres_ 

# Compute nonlinear transform for highres -> standard
antsRegistrationSyNQuick.sh -d 3 -t s -f standard_image.nii.gz -m highres_image.nii.gz -o highres_to_standard_

# Apply the two transforms 
antsApplyTransforms -d 3 -i functional_image.nii.gz -r standard_image.nii.gz  -o func_in_standard_space.nii.gz \
                     -t highres_to_standard_1Warp.nii.gz                \
                     -t highres_to_standard_0GenericAffine.mat     \
                     -t func_to_highres_0GenericAffine.mat  

However, I have found that in the case of my data, this conventional approach actually gives worse results than simply registering the functional data directly to standard space:

antsRegistrationSyNQuick.sh -d 3 -e 1 -t s -f standard_image.nii.gz -m functional_image.nii.gz -o func_to_standard_direct_

antsApplyTransforms -d 3 -i functional_image.nii.gz -r standard_image.nii.gz -o func_in_standard_space.nii.gz \
                     -t func_to_standard_direct_1Warp.nii.gz                \
                     -t func_to_standard_direct_0GenericAffine.mat  

The two attached images show transformed functional data overlaid on the template MNI brain. The blue overlay shows the data transformed using the conventional 2-step procedure, the red overlay shows the data transformed directly to the template.

2step

direct

The direct transform (red overlay) appears to fit the template much better, albeit with some error.

My question is this: can I reasonably "trust" the results from the direct transform, given that (as a rule of thumb) the direct transform should be less robust/reliable?

(I should mention that my functional data is a little unusual - it is output from an RSA searchlight analysis, meaning that the values at each voxel are correlations ranging from approximately -0.1 to 0.1. This might explain why the conventional transformation procedure does a poor job.)

EDIT I will add that, although the images I provided are from a single representative subject, what I described is consistent across a sample of 30 subjects—the direct transform always provides a better fit than the alternative method.

cookpa commented 2 years ago

Can you share example data?

Overall, in my opinion the best way to register BOLD data to a template is to rigidly motion correct to a BOLD reference image (eg, the average image, or an sbref if available), and use a field map to correct distortions. Then rigidly align the reference image to T1w, and combine with the T1w to template warps to get the data into standard space. After doing this, you can apply the warps to other images like your correlation scores. I wouldn't register the correlations directly, partly because negative intensities in the images can cause registration problems (see #1429), and partly because I would not expect the anatomical correspondence to the T1w to be as good as the original BOLD reference image.
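A rough sketch of that pipeline, with placeholder filenames (bold.nii.gz, t1w.nii.gz, template.nii.gz) and untuned parameters - the field-map distortion correction step is only indicated, not shown:

```shell
# 1. Build a BOLD reference image (time-series average) and rigidly
#    motion correct the series to it
antsMotionCorr -d 3 -a bold.nii.gz -o bold_mean.nii.gz
antsMotionCorr -d 3 -o [ moco_, bold_moco.nii.gz, bold_mean.nii.gz ] \
    -m MI[ bold_mean.nii.gz, bold.nii.gz, 1, 32, Regular, 0.2 ] \
    -t Rigid[ 0.25 ] -i 25x10 -s 1x0 -f 2x1 -u 1 -e 1

# 2. Susceptibility distortion correction with a field map would go here

# 3. Rigidly align the BOLD reference to the subject T1w
antsRegistrationSyNQuick.sh -d 3 -t r -f t1w.nii.gz -m bold_mean.nii.gz -o boldToT1w_

# 4. Deformable registration of the T1w to the template
antsRegistrationSyNQuick.sh -d 3 -t s -f template.nii.gz -m t1w.nii.gz -o t1wToTemplate_

# 5. Apply the combined transforms to any map in BOLD-reference space
#    (e.g. the correlation scores)
antsApplyTransforms -d 3 -i correlation_map.nii.gz -r template.nii.gz \
    -t t1wToTemplate_1Warp.nii.gz \
    -t t1wToTemplate_0GenericAffine.mat \
    -t boldToT1w_0GenericAffine.mat \
    -o correlations_in_template.nii.gz
```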

I don't think antsRegistrationSyNQuick.sh is the best tool for the intrasubject registration in general. I also don't think an affine transform is appropriate there. With field maps, a rigid transform works well. Lacking field maps, things get more complicated: you can try to approximate the susceptibility distortion with a constrained deformable registration, but it's not as good as a real field map.

lbailey25 commented 2 years ago

Hi @cookpa

Thanks for the swift response! I've attached some example data. "subject-001_searchlight_results.nii.gz" is the functional image that I am trying to transform to MNI space. I also included the skull-stripped brain and example COPE image from the same subject, in case those are helpful.

subject-001_searchlight_results.nii.gz example_func.nii.gz subject-001_struct_brain.nii.gz

Unfortunately we did not acquire field maps. Could you please tell me (or point me towards an appropriate resource) how I would approximate susceptibility distortion?

cookpa commented 2 years ago

Thanks for the data. The EPI image is low-res, lacks contrast, and is not brain-extracted. The T1w is mostly brain-extracted but has some extra matter retained (like part of an eye), which will complicate things.

To align these, I would exploit the fact that the initial alignment is pretty good, do rigid alignment and then add a bit of deformation. You've got to be very careful when registering a brain-extracted to a non brain-extracted image - it might be necessary to brain mask the EPI image. Also, when the FOV is tight, it often helps to pad the fixed image to avoid boundary effects. You can do this with

   ImageMath 3 epiPadded.nii.gz PadImage example_func.nii.gz 20

I can give you a starting point, I did

antsRegistration -d 3 \
                  -t Rigid[ 0.1 ] \
                  -f 2x1 -s 1x0vox \
                  -m Mattes[ example_func.nii.gz , subject-001_struct_brain.nii.gz , 1, 32 ] \
                  -c [ 50x50, 1e-7, 10 ] \
                  -o [ t1wToEPI, t1wToEPIDeformed.nii.gz ] \
                  -v 1 \
                  -t BSplineSyN[ 0.1, 10, 0, 3 ] \
                  -g 0.01x1x0.01 \
                  -m Mattes[ example_func.nii.gz , subject-001_struct_brain.nii.gz , 1, 32 ] \
                  -f 1 -s 0vox \
                  -c 25

antsApplyTransforms -d 3 -i subject-001_struct_brain.nii.gz -t t1wToEPI0GenericAffine.mat -r example_func.nii.gz -o t1wToEPIRigidDeformed.nii.gz -v 1

I'm using the EPI image as the fixed space so I can restrict deformation to the direction of the distortion (-g), which I assume is anterior to posterior.

To help visualize the correction I apply the rigid transform only so I can view side by side with the deformed image. It might make more sense to warp the functional image to T1w (invert the warps, see the wiki and usage for antsApplyTransforms) for visualization.
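For reference, inverting the warps to resample the functional image into T1w space might look like this (a sketch - funcToT1w.nii.gz is a made-up output name; the transform files assume the t1wToEPI output prefix above):

```shell
# The inverse of (Warp o Affine) is (Affine^-1 o InverseWarp): the inverted
# affine is listed first (applied last), then the inverse warp field
antsApplyTransforms -d 3 \
    -i example_func.nii.gz \
    -r subject-001_struct_brain.nii.gz \
    -t [ t1wToEPI0GenericAffine.mat, 1 ] \
    -t t1wToEPI1InverseWarp.nii.gz \
    -o funcToT1w.nii.gz -v 1
```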

lbailey25 commented 2 years ago

Whoops, I forgot that example_func is not skull-stripped. Would it make more sense to use a skull-stripped COPE instead of example_func?

Thanks for the code! Unfortunately there seems to be a problem with the antsRegistration call - I ran it from a folder containing only subject-001_struct_brain.nii.gz and example_func.nii.gz, and I got this error:

8443 Segmentation fault (core dumped) antsRegistration -d 3 -t Rigid[ 0.1 ] -f 2x1 -s 1x0vox -m Mattes[ example_func.nii.gz , subject-001_struct_brain.nii.gz , 1, 32 ] -c [ 50x50, 1e-7, 10 ] -o [ t1wToEPI, t1wToEPIDeformed.nii.gz ] -v 1 -t BSplineSyN[ 0.1, 10, 0, 3 ] -g 0.01x1x0.01 -m Mattes[ example_func.nii.gz , subject-001_struct_brain.nii.gz , 1, 32 ] -f 1 -s 0vox -c 25

Do you know what might be causing this?

cookpa commented 2 years ago

If you have a skull-stripped image you can use it to make a brain mask. For the actual registration, I'd use whatever has the most anatomical features that can be aligned with the T1w.
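For example (a sketch, assuming the skull-stripped image has a zero-valued background; the mask filenames are placeholders):

```shell
# Binarize everything above zero to get a brain mask
# (ThresholdImage <dim> <in> <out> <lo> <hi> <insideValue> <outsideValue>)
ThresholdImage 3 subject-001_struct_brain.nii.gz t1w_brain_mask.nii.gz 1e-6 Inf 1 0

# A mask can then restrict the registration metric via -x, e.g.
#   antsRegistration ... -x [ fixed_mask.nii.gz, moving_mask.nii.gz ]
```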

Not sure why your code doesn't run - if you have compiled recently you might need to add --float 0 (fixed in v2.4.2)

lbailey25 commented 2 years ago

@cookpa, I really appreciate your help/advice on this matter.

Using --float 0 in the antsRegistration call fixed the problem, thanks for the tip.

I aligned the t1w image to the (padded) non-skull-stripped EPI as follows:

antsRegistration -d 3 \
                  -t Rigid[ 0.1 ]  \
                  -f 2x1 -s 1x0vox  \
                  -m Mattes[ padded_example_func.nii.gz , t1w.nii.gz , 1, 32 ] \
                  -c [ 50x50, 1e-7, 10 ] \
                  -o [ t1wToEPI, t1wToEPIDeformed.nii.gz ] \
                  -v 1 \
                  -t BSplineSyN[ 0.1, 10, 0, 3 ]  \
                  -g 0.01x1x0.01 \
                  -m Mattes[ padded_example_func.nii.gz , t1w.nii.gz , 1, 32 ] \
                  -f 1 -s 0vox \
                  -c 25 \
                  --float 0

[Note: using the skull-stripped image as a mask did not seem to make much difference; if anything it made the transform slightly worse.]

Next, I aligned the standard template to t1w:

antsRegistration -d 3 \
                  -t Rigid[ 0.1 ] \
                  -f 2x1 -s 1x0vox \
                  -m Mattes[ t1w.nii.gz , standard.nii.gz , 1, 32 ] \
                  -c [ 50x50, 1e-7, 10 ] \
                  -o [MNI2t1w, MNI2t1wRigid.nii.gz ] \
                  -v 1  \
                  --float 0 

And then I combined the inverse transforms and applied to the correlation map:

antsApplyTransforms -d 3 \
                      -i correlation_map.nii.gz \
                      -r standard.nii.gz \
                      -t [MNI2t1w0GenericAffine.mat, 1]  \
                      -t [t1wToEPI0GenericAffine.mat, 1]  \
                      -t t1wToEPI1InverseWarp.nii.gz  \
                      -o correlations_to_standard.nii.gz  \
                      -v 1

I ran this on 3 subjects and results look fairly consistent. The images below show, to my eye, the subject with the best performance:

Screenshot from 2022-10-20 12-36-18

And the subject with the worst performance (highest amount of missing brain in the transformed image):

subject-001

Do these transforms look reasonable / trustworthy? Moreover, are they more trustworthy than the direct-to-MNI image that I originally posted?

EDIT: fixed typos

cookpa commented 2 years ago

There's only so much we can learn from visual inspection of low-resolution maps, especially after registration and resampling in standard space. Properly validating registration is hard, especially if you have other hypotheses you want to test simultaneously in the same data, without circularity.

As far as visual inspection goes, one reason I like to use the T1w as an intermediate step is that the intra-subject BOLD to T1w deformation is small and easier to look at, and the T1w to (standard T1w) deformation has more information and contrast available. So if both of those are good, then I trust that the combination of the two is also good.

Also, you are making an apples to oranges comparison between registrations. In your original post, you did a SyN registration of the BOLD to the MNI T1w image. But above, you are using a rigid transform from T1w to MNI. With a SyN registration of the subject T1w to MNI, you should get a better result.
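For example, a sketch using the filenames from your commands above (t1wToMNI_ is a made-up output prefix):

```shell
# Rigid + affine + SyN registration of the subject T1w to the template -
# the same class of transform as the original direct BOLD-to-MNI run
antsRegistrationSyNQuick.sh -d 3 -t s -f standard.nii.gz -m t1w.nii.gz -o t1wToMNI_

# Combine with the inverse EPI-to-T1w transforms to take the correlation
# map to standard space
antsApplyTransforms -d 3 \
    -i correlation_map.nii.gz \
    -r standard.nii.gz \
    -t t1wToMNI_1Warp.nii.gz \
    -t t1wToMNI_0GenericAffine.mat \
    -t [ t1wToEPI0GenericAffine.mat, 1 ] \
    -t t1wToEPI1InverseWarp.nii.gz \
    -o correlations_to_standard_syn.nii.gz -v 1
```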

lbailey25 commented 2 years ago

You make a good point - that looking at the individual transforms is just as important as looking at the final result of the composite transform (if I interpreted you correctly). With that in mind, I think the T1w to EPI (rigid + SyN) looks pretty great. The image below shows the non-stripped example func (greyscale) superimposed with the transformed T1w (red):

highres_to_example_func

However I'm not so sure about the standard to T1w. The two images below show the T1w to standard (inverse of standard to T1w, just for visualization) transform using rigid + SyN:

Screenshot from 2022-10-24 10-26-37

And the same transform using affine instead of rigid:

t1w_to_standard_affine

Earlier in this thread you mentioned that you didn't think affine was appropriate for the T1w, but to my eye it looks like a much better fit. What do you think of this? Am I interpreting it incorrectly?

cookpa commented 2 years ago

The transformation model should reflect your priors about the problem you're trying to solve.

For the EPI to the intra-session T1w, I avoid affine. A full affine contains rigid + scaling + shear. We're trying to solve motion (rigid) + EPI distortion. The distortion is complicated - maybe roughly approximated by a scale/shear along the phase-encode direction - however, the affine transform in ANTs is not constrained to scale/shear in that direction. I've just found empirically that an affine step doesn't improve things and just adds noise.

For registering to a standard space, affine is completely appropriate (use -t s), because populations of brains do vary in global scale as well as position. Solving that component first gives the best initialization for deformable registration.

lbailey25 commented 2 years ago

Thanks for the speedy reply! Ah, I misunderstood - I thought you were saying that one should avoid affine for the registration to standard space.

Anyway, this has really cleared things up (and you've 100% convinced me to use the intermediate EPI to T1w step). Thanks again for all your help!