PennLINC / qsiprep

Preprocessing of diffusion MRI
http://qsiprep.readthedocs.io
BSD 3-Clause "New" or "Revised" License

questions about QSIprep 12.1 flow description #446

Closed WilliamFCB closed 8 months ago

WilliamFCB commented 2 years ago

Hi Matt, For an EU project, we used QSIprep 12 with a FSL 6.0.4 patch to allow running movement by susceptibility correction with eddy_cuda9.1

I am trying to reconstruct all processing steps, but have some questions that I hope you can shed some light on. I used the generated html, your article and the online documentation as sources. At the very end I post our QSIprep singularity call and the Eddy settings used.

Sorry for the long text and many thanks in advance! Cheers William

Questions:

  1. In the anatomical stream it is stated that "OASIS was used as target template". However, it is not clear what data/sample this template is based on?
  2. While brain tissues are segmented, I don't think that these were used for anything in our setup.. or are they?
  3. In the b=0 image-based intensity normalization, the last step in the denoising, was a specific target intensity used?
  4. The documentation suggests that "The head motion, eddy current and susceptibility distortion corrections are applied at the end of eddy, using Jacobian modulation" (and cubic splines interpolation, I assume?). However, the online documentation suggests that raw DWI images are resampled in AC-PC MNI orientation by performing all transformations, including head motion correction and susceptibility distortion correction, in a single shot using a Lanczos kernel.

Is the first interpolation after eddy/topup only used to allow generating the b0 reference? Anyway, in our case, were one or two interpolations done?

############# QSIprep 0.12.1 preprocessing

Preprocessing was performed using QSIprep 0.12.1, which is based on Nipype 1.5.1 (Gorgolewski et al., 2011; research resource identifier RRID:SCR_002502). QSIprep uses metadata recorded in the Brain Imaging Data Structure (BIDS) and employs image processing tools from different software packages, e.g., ANTs 2.3.1 (Avants et al., 2008), MRtrix3 (Tournier et al., 2019), DSI Studio (Yeh & Tseng, 2011), DIPY (Garyfallidis et al., 2014) and FSL 6.0.4 (Andersson & Sotiropoulos, 2016; Jenkinson et al., 2012).

All spatial transformation operations in QSIPrep (excluding FSL TOPUP/eddy) are performed in ANTs 2.3.1.

The following processing steps were performed:

  1. Anatomical data processing

  2. Diffusion data processing
     a. Denoising
     b. Head motion, eddy current and distortion correction
     c. b=0 reference image creation (including an optional single-subject b=0 template)
     d. Rigid 6-parameter (rotations and translations) co-registration to the skull-stripped T1w image, and DWI image resampling and gradient rotation.

  3. Quality control indices

  1. Anatomical data processing

The T1-weighted (T1w) image was corrected for intensity non-uniformity (INU) using N4BiasFieldCorrection (Tustison et al., 2010; ANTs 2.3.1) and used as T1w reference throughout the workflow. The T1w reference was then skull-stripped using antsBrainExtraction.sh (ANTs 2.3.1), with OASIS as target template. Spatial normalization to the ICBM 152 Nonlinear Asymmetrical template version 2009c (Fonov et al., 2009) was performed through nonlinear registration with antsRegistration (ANTs 2.3.1), using brain-extracted versions of both the T1w volume and the template. Brain tissue segmentation into cerebrospinal fluid (CSF), white matter (WM) and gray matter (GM) was performed on the skull-stripped T1w using FAST (FSL 6.0.4; Zhang et al., 2001). The rigid 6-parameter (rotation and translation) part of the registration was used to bring the T1w image into MNI (anterior commissure - posterior commissure, AC-PC) orientation, i.e., no scaling, shearing or nonlinear deformations were applied.
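
For orientation, the equivalent stand-alone ANTs/FSL calls look roughly like this (a minimal sketch only; file and template names are placeholders, antsRegistrationSyN.sh stands in for the full antsRegistration call, and the exact Nipype-generated command lines differ):

    # Intensity non-uniformity (INU) correction
    N4BiasFieldCorrection -d 3 -i sub-01_T1w.nii.gz -o sub-01_T1w_corrected.nii.gz
    # Skull stripping with the OASIS template (template file names are placeholders)
    antsBrainExtraction.sh -d 3 -a sub-01_T1w_corrected.nii.gz \
        -e tpl-OASIS30ANTs_T_template0.nii.gz \
        -m tpl-OASIS30ANTs_BrainProbabilityMask.nii.gz \
        -o sub-01_
    # Nonlinear spatial normalization to the MNI152NLin2009cAsym template
    antsRegistrationSyN.sh -d 3 \
        -f mni_icbm152_2009c_nlin_asym_brain.nii.gz \
        -m sub-01_BrainExtractionBrain.nii.gz \
        -o sub-01_to-MNI_
    # Tissue segmentation (CSF/GM/WM) of the skull-stripped T1w
    fast -t 1 -n 3 -o sub-01_fast sub-01_BrainExtractionBrain.nii.gz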

  2. Diffusion data processing

a. Denoising consisted of four steps (see the sketch below):
   i. MP-PCA (Marchenko-Pastur distribution – Principal Component Analysis) denoising, as implemented in MRtrix3’s dwidenoise (Veraart et al., 2016), using a 5-voxel window.
   ii. Gibbs unringing using MRtrix3’s mrdegibbs (Kellner et al., 2016).
   iii. B1 field inhomogeneity correction using dwibiascorrect from MRtrix3 with the N4 algorithm (Tustison et al., 2010).
   iv. b=0 image-based intensity normalization.
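
The corresponding MRtrix3 command-line calls are roughly as follows (a minimal sketch with placeholder file names; QSIprep actually runs these through Nipype interfaces, and the dwibiascorrect syntax differs slightly between MRtrix3 versions):

    # i.  MP-PCA denoising with a 5x5x5-voxel window
    dwidenoise -extent 5 sub-01_dwi.mif sub-01_dwi_denoised.mif
    # ii. Gibbs unringing
    mrdegibbs sub-01_dwi_denoised.mif sub-01_dwi_degibbs.mif
    # iii. B1 inhomogeneity correction with the N4 algorithm (MRtrix3 >= 3.0 syntax)
    dwibiascorrect ants sub-01_dwi_degibbs.mif sub-01_dwi_unbiased.mif
    # iv. the b=0 image-based intensity normalization is performed internally by QSIprep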

b. Head motion, eddy current and susceptibility-by-movement distortion correction

QSIprep combines these steps using FSL (version 6.0.4) eddy_cuda9.1 (Andersson & Sotiropoulos, 2016; Andersson et al., 2016) and FSL TOPUP (Andersson et al., 2003), which are highly interdependent. eddy_cuda makes it possible to harness the power of graphics processing units (GPUs).

FSL eddy_cuda was used to correct for head motion, eddy currents, within-volume ("slice-to-volume") movement and movement-by-susceptibility interactions (Andersson et al., 2017; Andersson & Sotiropoulos, 2016). Eddy was configured with a q-space smoothing factor of 10, a total of 8 iterations, and 1000 voxels used to estimate hyperparameters. A quadratic first-level model (and a linear second-level model in the case of CFIN) was used to characterize eddy current-related spatial distortion. An attempt was made to separate the field offset from subject movement. Eddy’s outlier replacement was run (Andersson et al., 2016). Data were grouped by slice, only including values from slices determined to contain at least 250 intracerebral voxels. Groups deviating by more than four standard deviations from the prediction had their data replaced with imputed values. TOPUP uses the oppositely phase-encoded images as input, i.e., the anterior-posterior and the posterior-anterior phase-encoded DWI images (Andersson et al., 2003). For the susceptibility distortion correction, QSIprep selected up to three b=0 images evenly spaced in time from each group of phase-encoding directions. The fieldmaps were ultimately incorporated into the eddy current and head motion correction interpolation.
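
Concretely, the TOPUP part corresponds to something like the following (a sketch; file names and the acquisition-parameter file are placeholders):

    # merge the selected AP and PA b=0 volumes and estimate the susceptibility off-resonance field
    fslmerge -t b0_ap_pa.nii.gz b0_AP_selected.nii.gz b0_PA_selected.nii.gz
    topup --imain=b0_ap_pa.nii.gz --datain=acqparams.txt \
          --config=b02b0.cnf --out=topup_results \
          --fout=fieldmap_hz.nii.gz --iout=b0_unwarped.nii.gz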

The head motion, eddy current and susceptibility distortion corrections are applied at the end of eddy, using Jacobian modulation and cubic splines interpolation.
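
Putting this together with the eddy parameters listed at the end of this post, the effective eddy call looks roughly like this (a sketch only; file names are placeholders, and this is not the exact Nipype-generated command line):

    eddy_cuda9.1 --imain=sub-01_dwi_denoised.nii.gz --mask=sub-01_brain_mask.nii.gz \
        --acqp=acqparams.txt --index=index.txt \
        --bvecs=sub-01.bvec --bvals=sub-01.bval \
        --topup=topup_results \
        --flm=quadratic --slm=none --fwhm=10,6,4,2,0,0,0,0 --niter=8 \
        --nvoxhp=1000 --ff=10 \
        --repol --ol_type=both --ol_nvox=250 \
        --mporder=8 --s2v_niter=8 --estimate_move_by_susceptibility \
        --interp=spline --resamp=jac --cnr_maps \
        --out=sub-01_dwi_eddy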

Derived affine movement parameters were used to calculate the mean framewise displacement (FD) using the implementation in Nipype (Gorgolewski et al., 2011) and the definition of Power et al. (2014); the mean FD was used as a confounder in the statistical analyses.
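
For reference, the Power et al. (2014) framewise displacement for volume i, as computed by Nipype, amounts to (Δ denotes the difference to the previous volume; rotations in radians are converted to displacement on a sphere of radius 50 mm):

    FD_i = |\Delta d_{x,i}| + |\Delta d_{y,i}| + |\Delta d_{z,i}| + 50 \cdot (|\Delta \alpha_i| + |\Delta \beta_i| + |\Delta \gamma_i|)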

c. A b=0 reference image was created for co-registration to the T1w image in MNI orientation. The reference image was created by extracting the b=0 images from the DWI image series after movement, eddy current and susceptibility correction. Extracted b0 volumes were combined using a normalized average as implemented in ANTs 2.3.1 and underwent histogram equalization as implemented in DIPY.
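
The normalized averaging corresponds roughly to ANTs' AverageImages utility (a sketch; the third argument "1" turns on intensity normalization, file names are placeholders; the subsequent histogram equalization is done via DIPY inside QSIprep):

    AverageImages 3 b0_reference.nii.gz 1 b0_corrected_vol01.nii.gz b0_corrected_vol02.nii.gz b0_corrected_vol03.nii.gz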

d. Rigid co-registration to the skull-stripped T1w image in MNI orientation, image resampling, and diffusion gradient rotation.

Raw DWI images were resampled to AC-PC (MNI) orientation at native resolution by applying all transformations, including head motion correction and susceptibility distortion correction, in a single shot using a Lanczos kernel. Finally, QSIprep produced FSL-style bval/bvec pairs and an MRtrix .b gradient table with the rotations from the linear transformation applied.
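
Conceptually, this second interpolation is an antsApplyTransforms call along the following lines (a sketch; transform and file names are placeholders taken from the workflow):

    antsApplyTransforms -d 3 -e 3 \
        -i sub-01_dwi_eddy.nii.gz \
        -r sub-01_T1w_acpc.nii.gz \
        -t b0_to_T1w_rigid.mat \
        -n LanczosWindowedSinc \
        -o sub-01_space-T1w_desc-preproc_dwi.nii.gz
    # -e 3 treats the input as a 4D time series; BSpline would be used instead of
    # LanczosWindowedSinc when upsampling (see the discussion further down)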

  3. Quality control indices and quality control procedure

- Mean framewise displacement
- Number of raw bad slices (DSI Studio)
- Carpet plots
- Visual inspection

Diffusion Tensor estimation

QSIprep-processed DWI images were analyzed with FSL dtifit, using weighted least squares fitting, to reconstruct the diffusion tensor (Jones & Basser, 2004) and generate fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (AD = λ1) and radial diffusivity (RD = (λ2 + λ3) / 2) images.
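
The corresponding dtifit call is essentially (a sketch with placeholder file names):

    dtifit --data=sub-01_space-T1w_desc-preproc_dwi.nii.gz \
           --mask=sub-01_space-T1w_desc-brain_mask.nii.gz \
           --bvecs=sub-01_space-T1w_desc-preproc_dwi.bvec \
           --bvals=sub-01_space-T1w_desc-preproc_dwi.bval \
           --wls --out=sub-01_dti
    # outputs include sub-01_dti_FA, sub-01_dti_MD and the eigenvalue maps
    # sub-01_dti_L1 (= AD) and sub-01_dti_L2/_L3 (averaged for RD)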

References

Andersson, J. L., Skare, S., & Ashburner, J. (2003). How to correct susceptibility distortions in spin-echo echo-planar images: application to diffusion tensor imaging. NeuroImage, 20(2), 870-888. https://doi.org/10.1016/S1053-8119(03)00336-7
Andersson, J. L. R., Graham, M. S., Drobnjak, I., Zhang, H., Filippini, N., & Bastiani, M. (2017). Towards a comprehensive framework for movement and distortion correction of diffusion MR images: Within volume movement. NeuroImage, 152, 450-466. https://doi.org/10.1016/j.neuroimage.2017.02.085
Andersson, J. L. R., Graham, M. S., Zsoldos, E., & Sotiropoulos, S. N. (2016). Incorporating outlier detection and replacement into a non-parametric framework for movement and distortion correction of diffusion MR images. NeuroImage, 141, 556-572. https://doi.org/10.1016/j.neuroimage.2016.06.058
Andersson, J. L. R., & Sotiropoulos, S. N. (2016). An integrated approach to correction for off-resonance effects and subject movement in diffusion MR imaging. NeuroImage, 125, 1063-1078. https://doi.org/10.1016/j.neuroimage.2015.10.019
Avants, B. B., Epstein, C. L., Grossman, M., & Gee, J. C. (2008). Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Medical Image Analysis, 12(1), 26-41. https://doi.org/10.1016/j.media.2007.06.004
Fonov, V. S., Evans, A. C., McKinstry, R. C., Almli, C. R., & Collins, D. L. (2009). Unbiased nonlinear average age-appropriate brain templates from birth to adulthood. NeuroImage, 47(Supplement 1), S102.
Garyfallidis, E., Brett, M., Amirbekian, B., Rokem, A., van der Walt, S., Descoteaux, M., & Nimmo-Smith, I. (2014). Dipy, a library for the analysis of diffusion MRI data. Frontiers in Neuroinformatics, 8, 8. https://doi.org/10.3389/fninf.2014.00008
Gorgolewski, K., Burns, C. D., Madison, C., Clark, D., Halchenko, Y. O., Waskom, M. L., & Ghosh, S. S. (2011). Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in Python. Frontiers in Neuroinformatics, 5, 13. https://doi.org/10.3389/fninf.2011.00013
Jenkinson, M., Beckmann, C. F., Behrens, T. E., Woolrich, M. W., & Smith, S. M. (2012). FSL. NeuroImage, 62(2), 782-790. https://doi.org/10.1016/j.neuroimage.2011.09.015
Jones, D. K., & Basser, P. J. (2004). "Squashing peanuts and smashing pumpkins": How noise distorts diffusion-weighted MR data. Magnetic Resonance in Medicine, 52(5), 979-993. https://doi.org/10.1002/mrm.20283
Kellner, E., Dhital, B., Kiselev, V. G., & Reisert, M. (2016). Gibbs-ringing artifact removal based on local subvoxel-shifts. Magnetic Resonance in Medicine, 76(5), 1574-1581. https://doi.org/10.1002/mrm.26054
Power, J. D., Mitra, A., Laumann, T. O., Snyder, A. Z., Schlaggar, B. L., & Petersen, S. E. (2014). Methods to detect, characterize, and remove motion artifact in resting state fMRI. NeuroImage, 84, 320-341. https://doi.org/10.1016/j.neuroimage.2013.08.048
Tournier, J. D., Smith, R., Raffelt, D., Tabbara, R., Dhollander, T., Pietsch, M., Christiaens, D., Jeurissen, B., Yeh, C. H., & Connelly, A. (2019). MRtrix3: A fast, flexible and open software framework for medical image processing and visualisation. NeuroImage, 202, 116137. https://doi.org/10.1016/j.neuroimage.2019.116137
Tustison, N. J., Avants, B. B., Cook, P. A., Zheng, Y., Egan, A., Yushkevich, P. A., & Gee, J. C. (2010). N4ITK: improved N3 bias correction. IEEE Transactions on Medical Imaging, 29(6), 1310-1320. https://doi.org/10.1109/TMI.2010.2046908
Veraart, J., Novikov, D. S., Christiaens, D., Ades-Aron, B., Sijbers, J., & Fieremans, E. (2016). Denoising of diffusion MRI using random matrix theory. NeuroImage, 142, 394-406. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5159209
Yeh, F. C., & Tseng, W. Y. (2011). NTU-90: a high angular resolution brain atlas constructed by q-space diffeomorphic reconstruction. NeuroImage, 58(1), 91-99. https://doi.org/10.1016/j.neuroimage.2011.06.021
Zhang, Y., Brady, M., & Smith, S. (2001). Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Transactions on Medical Imaging, 20(1), 45-57. https://doi.org/10.1109/42.906424

########### qsiprep call

singularity run --cleanenv --contain \
  -B ${TOOLS_DIR} \
  -B ${BIDS_DIR_TMP}/:/data_in \
  -B ${BIDS_DIR}/:${BIDS_DIR} \
  -B ${SCRIPTS_DIR}/:${SCRIPTS_DIR} \
  -B ${BIDS_DIR}/derivatives:/data_out \
  -B ${WORK_DIR}:/work \
  -B /tmp:/tmp \
  qsiprep-0.12.1_fsl-6.0.4_eddy-patched.sif \
  /data_in \
  /data_out \
  participant \
  -w /work \
  --participant-label ${sub}${ses/-/} \
  --skip_bids_validation \
  --use-plugin ${SCRIPTS_DIR}/tmp/qsiprepplugin/${sub}${ses}_qsiprep_plugin.yml \
  --fs-license-file ${TOOLS_DIR}/freesurfer/freesurfer.6.0.0/license.txt \
  --unringing-method mrdegibbs \
  --dwi-denoise-window 5 \
  --output-resolution \
  --hmc-model eddy \
  --eddy_config ${SCRIPTS_DIR}/tmp/eddyparams/${sub}${ses}_eddy_params.json \
  --output-space T1w \
  --nthreads ${SLURM_CPUS_PER_TASK} \

eddy settings

Eddy parameters:

{
  "flm": "quadratic",
  "slm": "none",
  "fep": false,
  "interp": "spline",
  "nvoxhp": 1000,
  "fudge_factor": 10,
  "dont_sep_offs_move": false,
  "dont_peas": false,
  "niter": 8,
  "method": "jac",
  "repol": true,
  "num_threads": 3,
  "is_shelled": false,
  "use_cuda": true,
  "cnr_maps": true,
  "residuals": false,
  "output_type": "NIFTI_GZ",
  "args": "--very_verbose --fwhm=10,6,4,2,0,0,0,0 --ol_type=both --mporder=8 --s2v_niter=8 --slspec= --estimate_move_by_susceptibility"
}

mattcieslak commented 2 years ago

Hi William,

Sorry for the late response!

In the anatomical stream it is stated that "OASIS was used as target template". However, it is not clear what data/sample this template is based on?

This is the OASIS template included in fmriprep version 1.2.6. It's currently in templateflow too.

While brain tissues are segmented, I don't think that these were used for anything in our setup.. or are they?

In the preprocessing stream, no. They can be used by the reconstruction workflows, but are otherwise not used.

In the b=0 image-based intensity normalization, the last step in the denoising, was a specific target intensity used?

The mean intensity of the nearest b=0 image is used to weight the adjacent b>0 images. Here is the function that calculates it: https://github.com/PennLINC/qsiprep/blob/7603d66c12039278905c467b5424a0f998bf0ca1/qsiprep/interfaces/dwi_merge.py#L458

The documentation suggests that "The head motion, eddy current and susceptibility distortion corrections are applied at the end of eddy, using Jacobian modulation" (and cubic splines interpolation, I assume?). However, the online documentation suggests that raw DWI images are resampled in AC-PC MNI orientation by performing all transformations, including head motion correction and susceptibility distortion correction, in a single shot using a Lanczos kernel.

In the case of your eddy config, it is using cubic splines. The output space that qsiprep calls "T1w" is just a reorientation of the T1w to AC-PC alignment (there is no resizing or warping). The motion, eddy and susceptibility distortion correcting transforms are applied by eddy, then a second interpolation is performed using ANTs to get that corrected data into alignment with the AC-PC-aligned T1w image. The second interpolation uses the Lanczos kernel if the --output-resolution is no more than 10% smaller than the resolution of the input data. If you choose to upsample, then a BSpline interpolation is used. I'm not sure if there is an AC-PC MNI space.

Is the first interpolation after eddy/topup only used to allow generating the b0 reference? Anyway, in our case, were one or two interpolations done?

Yes, two interpolations happen in the eddy version of QSIPrep.

A couple questions for you. What was the output resolution? I don't see it in the comment above. And also why set the slm to none? Lastly, did you check the orientation of the tensors in dtifit? FSL doesn't handle the qsiprep outputs consistently and it's possible you ended up with a y-flip.

WilliamFCB commented 2 years ago

Hi Matt,

No worries. I know you are busy. Many thanks for the detailed feedback.

Concerning the OASIS template, fMRIprep refers to "Marcus DS et al. Open Access Series of Imaging Studies (OASIS): Cross-sectional MRI Data in Young, Middle Aged, Nondemented, and Demented Older Adults. Journal of Cognitive Neuroscience 19, 1498–1507 (2007). doi:10.1162/jocn.2007.19.9.1498. [PubMed: 17714011]". However, as OASIS is a database of brain scans, it is not fully clear which brains actually entered the template used in fMRIprep.

Do you happen to know the demographics of the subjects used to generate the template? I could not find any info on it.

In the Lifebrain project there are different DWI data sets. The output resolution is kept the same as the input resolution.

With regard to the --slm setting, we follow the FSL recommendation:

"Second level model" that specifies the mathematical form for how the diffusion gradients cause eddy currents. For high quality data with 60 directions, or more, sampled on the whole sphere we have not found any advantage of performing second level modelling. Hence our recommendation for such data is to use none, and that is also the default.

If the data has quite few directions and/or has not been sampled on the whole sphere it can be advantageous to specify --slm=linear."

The settings I posted concern high quality data.

Yes, I always check the tensor orientation. If I recall correctly, I had to flip over x.
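
For anyone reading along, such a flip can be applied to FSL-style bvecs by negating the corresponding row, e.g. (a sketch; file names are placeholders, negate row 1 for x or row 2 for y):

    awk 'NR==1 {for (i = 1; i <= NF; i++) $i = -$i} {print}' sub-01.bvec > sub-01_xflip.bvec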

Cheers William

cookpa commented 2 years ago

fmriprep's OASIS template is tpl-OASIS30ANTs; the main description is here: https://github.com/templateflow/tpl-OASIS30ANTs/blob/master/template_description.json

The list of included subjects is here

https://figshare.com/articles/dataset/ANTs_ANTsR_Brain_Templates/915436

WilliamFCB commented 2 years ago

Great! many thanks

WilliamFCB commented 2 years ago

For others who might read this thread: the sample used to create tpl-OASIS30ANTs is also described in https://github.com/templateflow/tpl-OASIS30ANTs/blob/master/template_sample.tsv

@mattcieslak, I was a bit surprised by the sample demographics

The template is based on:
- 15 females: mean age = 35.2, SD = 22.21, range 18-90, 7 >= 30 years of age
- 10 males: mean age = 29.9, SD = 14.59, range 20-68, 3 >= 30 years of age

I assume this is OK for the brain extraction, but as a template in its own right it does not seem that balanced/unbiased.

I did not look into this, but does QSIprep actually have an easy way of using other templates if one would like to, for example when processing scans from very young or old subjects?

mattcieslak commented 2 years ago

@WilliamFCB my main work right now is on infant data, so this is something we're thinking about a lot. Ultimately I think the T1w processing, skull stripping and spatial normalization should be done outside of qsiprep. There are just too many templates and other good software for that kind of thing. For any non-adult population we've been using the --dwi-only flag and processing the T1ws in fmriprep/freesurfer with an appropriate template. Then the freesurfer data, segmentations, etc. are ingressed and used during the reconstruction workflow.

WilliamFCB commented 2 years ago

@mattcieslak I fully agree with you on this. Currently, we use the T1w only for getting the DWIs into AC-PC orientation, partly because, at the time of processing thousands of data sets, the --dwi-only flag did not work properly. Furthermore, in older kids it is probably more a theoretical than a practical issue, as the brain is around 90-95% of the adult size. Anyway, we have always processed T1ws in their own right (freesurfer/samseg/ACAPULCO/....).