nipreps / smriprep

Structural MRI PREProcessing (sMRIPrep) workflows for NIPreps (NeuroImaging PREProcessing tools)
https://nipreps.github.io/smriprep
Apache License 2.0

MP2RAGE skull-stripping #18

Open oesteban opened 6 years ago

oesteban commented 6 years ago

Make MP2RAGE amenable to antsBrainExtraction.sh. One possibility, as @chrisfilo mentioned in #231, is using the inversion echoes: https://github.com/poldracklab/pydeface/pull/15.

chrisgorgo commented 6 years ago

Another option is using a skull-stripping procedure designed for MP2RAGE, such as the one implemented in nighres:

```python
import nighres

skullstrip_results = nighres.brain.mp2rage_skullstripping(
    second_inversion=dataset["inv2"],
    t1_weighted=dataset["t1w"],
    t1_map=dataset["t1map"],
    save_data=True,
    output_dir=out_dir,
    file_name="sub001_sess1")
```

More info at https://doi.org/10.1093/gigascience/giy082

tknapen commented 6 years ago

I'm working on this for a 7T dataset we have, for which the only anatomicals are MP2RAGE images. My workflow is to take the magnitude image of the second inversion and threshold it after running it through ANTs' N4BiasFieldCorrection. This creates a mask that I apply to the _T1w.nii.gz image. For now, I'm determining the threshold by eye, but it should be trivial to automate that (see, for example, http://nilearn.github.io/modules/generated/nilearn.masking.compute_background_mask.html#nilearn-masking-compute-background-mask). Should I work on a pull request, working from the BIDS MP2RAGE extension proposal (https://groups.google.com/forum/#!topic/bids-discussion/wtolT5qPjy0), or are you already working on a similar feature (as this issue is from 7 months ago)?
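A minimal sketch of the masking step in that workflow, assuming in-memory NumPy arrays in place of the NIfTI volumes and a simple percentile threshold standing in for nilearn's `compute_background_mask` (the function names and toy data below are hypothetical, not from any package mentioned here):

```python
import numpy as np

def inv2_background_mask(inv2, percentile=40.0):
    """Rough brain/background mask from an (ideally bias-corrected) INV2
    magnitude image: voxels above a percentile-based intensity threshold
    are kept. A simplistic stand-in for nilearn's compute_background_mask."""
    threshold = np.percentile(inv2, percentile)
    return inv2 > threshold

def apply_mask(t1w, mask):
    """Zero out background voxels in the T1w (UNI) image."""
    return np.where(mask, t1w, 0)

# Toy example: a bright cubic "head" region in a dark background
inv2 = np.zeros((10, 10, 10))
inv2[3:7, 3:7, 3:7] = 100.0
t1w = np.random.default_rng(0).uniform(0, 1, inv2.shape)

mask = inv2_background_mask(inv2)
masked_t1w = apply_mask(t1w, mask)
```

In practice the threshold (or percentile) would need tuning per protocol, which is exactly the manual step described above.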

chrisgorgo commented 6 years ago

A pull request would be awesome (I am not currently working on this). Assuming the data will be formatted according to BEP001 is the way to go. Thank you in advance for your contribution!

marcpabst commented 5 years ago

Is anyone working on this? As far as I understand, there are at least two viable options for working with MP2RAGE images:

(1) use the second inversion image (INV2) to create a brain mask and then apply that mask to the uniform image (UNI), basically getting rid of the background noise that irritates FreeSurfer. (2) use the noise suppression approach described by O'Brien et al. There are already some implementations that work well, e.g. by Jose Marques, and even a Python-based one (which I have not tried yet).

Are there any suggestions or experience regarding which of the two is the better or more reliable option? I'm happy to work on a PR but would like some input first.

oesteban commented 5 years ago

Hi @marcpabst, your comment is the most notable progress there has been on this front. In other words, contributions are very welcome!

Regarding which method is preferable, I honestly don't have the experience to tell you which alternative looks best.

With (1), I'd check carefully whether BIDS mandates that the INV2 image be present in the dataset. If not, this implementation will only serve a few edge cases.

For (2), I haven't seen an implementation of the brain extraction problem in particular within @Gilles86's repo, but it was only a quick look.

marcpabst commented 5 years ago

You're right, @Gilles86's repo does not contain that specific implementation; my mistake. However, I have already created a Python implementation for testing purposes.

For MP2RAGE datasets, there should always be a second inversion present. In fact, both approaches depend on it; there is basically no way (that I know of) to do without it. I will have to look at BEP001 in detail, but I suspect this shouldn't be a problem.

As far as I know there are at least two flavors of MP2RAGE file structures you can get from a scanner:

(a) two magnitude images (one for each inversion) and a unified T1w image. Although the approach described by O'Brien et al. is based on raw data, @JosePMarques's implementation can be used to create a denoised unified T1w image from the two magnitude images and a unified T1w image.

(b) a phase and a magnitude image for each of the first and second inversions. These images can then be used to create a unified T1w image or a denoised unified T1w image using the methods proposed by Marques et al. and O'Brien et al., respectively.
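Both combinations can be written down compactly. Below is a minimal NumPy sketch of the unified (Marques et al.) and denoised/"robust" (O'Brien et al.) combinations, assuming complex-valued inversion images as in flavor (b); the function names and toy data are mine, not from any of the packages mentioned:

```python
import numpy as np

def mp2rage_uni(inv1, inv2):
    """Unified T1w combination of the two complex inversion images
    (Marques et al., 2010). The output is bounded in [-0.5, 0.5]."""
    num = np.real(np.conj(inv1) * inv2)
    den = np.abs(inv1) ** 2 + np.abs(inv2) ** 2
    return num / np.maximum(den, np.finfo(float).eps)

def mp2rage_robust_uni(inv1, inv2, beta):
    """'Robust' combination of O'Brien et al. (2014): the regularization
    term beta suppresses background noise, at the cost of reintroducing
    some of the bias field that the plain UNI combination cancels."""
    num = np.real(np.conj(inv1) * inv2) - beta
    den = np.abs(inv1) ** 2 + np.abs(inv2) ** 2 + 2.0 * beta
    return num / den

# Toy complex "inversion images"
rng = np.random.default_rng(0)
shape = (4, 4, 4)
inv1 = rng.normal(size=shape) + 1j * rng.normal(size=shape)
inv2 = rng.normal(size=shape) + 1j * rng.normal(size=shape)

uni = mp2rage_uni(inv1, inv2)
robust = mp2rage_robust_uni(inv1, inv2, beta=1.0)
```

Note that the choice of beta is the regularization trade-off discussed further down in this thread: larger beta means less background noise but a stronger residual bias field.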

Regarding the choice of methods, it looks like both (1) and (2) are reported in the literature, although the masking approach seems to be more popular, which obviously doesn't mean it's the superior option. Maybe @JosePMarques can comment?

I'm also not sure how much pre-preprocessing should be expected. Are users required to provide a unified T1w image (if the scanner doesn't provide one), or should the pipeline take care of that if it's missing? (Parts of this will probably depend on the BIDS specification.)

Gilles86 commented 5 years ago

Hey guys,

A few cents: A) A group of people, including myself, is currently working on a BIDS extension that includes MP2RAGE images. See here for the extension: https://github.com/bids-standard/bep001

I think especially these two examples are useful:

Note that we want to rename the suffix _T1w for the UNI image to _T1UNI, since it is a very different contrast and almost never works as input to traditional pipelines.

B) I have collected MP2RAGEs on both Siemens and Philips systems. On both, you always get access to the INV1 and INV2 images, both magnitude and phase. I think MP2RAGEs are currently "somewhat" of an edge case themselves, but when you collect them, it would really be a waste to throw the INV1 and INV2 images away.

C) Note that the 'robust'/'regularizing' approach of O'Brien et al. also reintroduces the B1 bias field that the MP2RAGE is used to get rid of in the first place. If you feed the regularized MP2RAGE image into brain stripping algorithms, it would be good to then use the resulting mask on the original, unregularized MP2RAGE image as input to subsequent segmentation steps.

D) @marcpabst, that Python implementation looks great! Would you be willing to incorporate your code into my pymp2rage package with a pull request, or can I do it otherwise?

E) I have been struggling with skull-stripping MP2RAGEs for 2+ years now and I still haven't found a recipe that works without manual intervention. Having said that, my standards are very high, since I try to do sub-millimeter 7T fMRI. If you want to see what I currently do, have a look here: https://github.com/VU-Cog-Sci/mp2rage_preprocessing

Part of the trick is using the nighres wrapper of CBS Tools for getting rid of the dura and incorporating manual corrections. Otherwise it's mostly approach (1) of @marcpabst:

(1) use the second inversion image (INV2) to create a brain mask and then apply that mask to the uniform image (UNI), basically getting rid of the background noise that irritates FreeSurfer.

It is important to bias-field correct this INV2 image before you do that, though!

Cheers, Gilles

marcpabst commented 5 years ago

Hi Gilles,

thanks for your detailed and helpful comment.

Note that we want to rename the suffix _T1w for the UNI image to _T1UNI, since it is a very different contrast and almost never works as input to traditional pipelines.

That's a good idea to avoid confusion, making clear that one should not expect a familiar-looking "normal" T1w image.

D) @marcpabst, that Python implementation looks great! Would you be willing to incorporate your code into my pymp2rage package with a pull request, or can I do it otherwise?

I will gladly do so.

C) Note that the 'robust'/'regularizing' approach of O'Brien et al. also reintroduces the B1 bias field that the MP2RAGE is used to get rid of in the first place.

O'Brien et al. described the introduced inhomogeneity as "mild", whatever that means. I was also wondering about possible noise within the brain and whether that would be problematic with regard to further processing steps like segmentation.

If you feed the regularized MP2RAGE image into brain stripping algorithms, it would be good to then use the resulting mask on the original, unregularized MP2RAGE image as input to subsequent segmentation steps.

I think that's the way to go for our own purposes. Are there any general caveats with this approach compared to the option you chose?

Regards, Marc

Gilles86 commented 5 years ago

Hey Marc,

I will gladly do so.

Cool!

I think that's the way to go for our own purposes. Are there any general caveats with this approach compared to the option you chose?

I think the choice of regularization parameter is very important: when you regularize too aggressively, you can potentially "eat away" parts of the brain and/or reintroduce strong inhomogeneities. In my experience, it goes wrong exactly where you don't want it to go wrong: at the CSF/GM boundary (maybe partial volume effects?).

Another issue is that on standard PD-weighted INV2 images, the sagittal sinus has exactly the same intensity as the gray matter. Again, most brain stripping/segmentation algorithms then include it as gray matter. The solution we used is to acquire MP2RAGE-MEs, with a multi-echo INV2: the sagittal sinus, unlike gray matter, loses signal at later echoes due to T2* effects.

Finally, at (sub-millimeter) MP2RAGE resolutions you start seeing the dura very well, and FreeSurfer has the tendency to include it in the gray matter mask. So inspect your surface reconstructions very carefully, especially at the occipital pole.

This internal presentation I gave might interest you: https://www.dropbox.com/s/o7hcb02xo5wybdq/20181214%20mp2rage%20segmentation.pdf?dl=0

Just out of curiosity: Can I ask what kind of data you are working with? These issues are all less pressing when you just stay in the 1mm regime.

marcpabst commented 5 years ago

Hi Gilles,

thanks for your thorough explanation and also for providing me with your great presentation.

I'm just beginning to get a grasp of this stuff. I'm a psychology undergraduate and currently also a research intern at the Max Planck Institute in Leipzig, so I'm really hoping that I'm not annoying you too much.

The functional dataset I'm working with uses MPRAGE images for (pre)processing where possible, but for some participants, only MP2RAGE scans are available. So we don't really care about sub-millimeter resolution (although the people who originally acquired them probably do); we basically want to use the MP2RAGE images as we do the MPRAGE images. So far, we've used the 'create a brain mask from INV2' approach. It works quite well, but on close examination it suffers from some of the problems you mentioned (e.g. poor contrast between tissues for INV1), so I wondered if there might be a better option.

Cheers, Marc

JosePMarques commented 5 years ago

Hi Marc and Gilles,

I haven't put much effort into comparing methods for brain extraction. I was recently contacted by Benoit Beranger (@benoitberanger), who has made an interactive SPM implementation of my regularization code (https://github.com/benoitberanger/mp2rage), and he claims that SPM is always able to perform brain extraction successfully on his data.

I agree with Gilles's worry that when you over-regularize, you get the bias field back in the image... on the other hand, MPRAGEs have always had bias problems, and most segmentation software expects this bias to exist.

So, my standard recommendation is to use the regularization for brain masking and co-registration purposes. When it comes to using FreeSurfer, or getting cortical thickness or cortical maps, use the previously derived brain mask and apply it to the T1 or R1 maps (CBS Tools prefers T1 maps, FreeSurfer prefers R1 maps). Note that ideally these T1 and R1 maps should have been corrected for transmit field (B1) inhomogeneity (you can find code to do this on my GitHub). If you don't do this and your protocol is very transmit-field sensitive, you can get problems like the one described in this paper: https://onlinelibrary.wiley.com/doi/full/10.1002/hbm.24011
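A minimal sketch of the last step of that recommendation, applying a previously derived brain mask to the quantitative maps and converting between T1 and R1 (R1 = 1/T1); the array inputs and function names here are hypothetical stand-ins for the actual NIfTI-based tooling:

```python
import numpy as np

def apply_brain_mask(volume, mask, fill=0.0):
    """Apply a previously derived brain mask (e.g. from the regularized
    UNI image) to a quantitative map."""
    return np.where(mask, volume, fill)

def t1_to_r1(t1_map, eps=1e-6):
    """Convert a T1 map (in seconds) to an R1 map (in 1/s); R1 = 1 / T1.
    eps guards against division by zero in masked-out background voxels."""
    return 1.0 / np.maximum(t1_map, eps)

# Toy data: uniform T1 of 1.5 s inside a cubic "brain" mask
t1_map = np.full((8, 8, 8), 1.5)
mask = np.zeros(t1_map.shape, dtype=bool)
mask[2:6, 2:6, 2:6] = True

masked_t1 = apply_brain_mask(t1_map, mask)
r1_map = t1_to_r1(masked_t1)            # background blows up to 1/eps
masked_r1 = apply_brain_mask(r1_map, mask)  # so mask again afterwards
```

The B1+ (transmit field) correction mentioned above would happen before this step and is not sketched here.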

Hope this helps...

Jose

benoitberanger commented 5 years ago

Hello everyone,

In our lab, we like the segmentation from CAT12, the extension of SPM12. After visual checks, we found that this segmentation pipeline gives us good estimates of the GM and WM compartments for standard MPRAGE. However, for MP2RAGE, the CAT12 segmentation is not visually accurate when it is performed on the INV2 (close to a T1 contrast) or UNI image. For example, the GM compartment leaks into the skull or air cavities, where the "salt and pepper" noise is.

Using @JosePMarques's https://github.com/JosePMarques/MP2RAGE-related-scripts to remove the background noise, we found the CAT12 segmentation has a lot less leakage (still based on visual inspection). However, it's not perfect.

Best, Benoît

oesteban commented 3 years ago

@satra, does kwyk work on MP2RAGE?

satra commented 3 years ago

i don't know the answer - at present, as long as it looks like a T1, it works. however, if you are just looking for skull-stripping, then we have found HD-BET to be fairly robust across scan types. the trained model is available for use.