ANTsX / ANTsPyNet

Pre-trained models and utilities for deep learning on medical images in Python
https://antspynet.readthedocs.io
Apache License 2.0

Publication describing Brain Extraction Training #95

Closed gladomat closed 6 months ago

gladomat commented 8 months ago

Hi, is there a publication detailing the training for the T1 / FLAIR brain extraction U-net?

ncullen93 commented 8 months ago

This publication is the closest I know of - https://www.nature.com/articles/s41598-021-87564-6

It includes a lot of good references surrounding that model. We are working on expanding descriptions of the trained models and making them easier to find. Thanks for the question.

gladomat commented 8 months ago

Thanks for the quick reply. I've read that paper, but I haven't understood what kind of training data you used, what the ground truth / labels were, how you came to the labels, etc. The paper explains the cortical thickness pipeline, but not the brain extraction pipeline. That would be of great interest to me.

ntustison commented 8 months ago

Here's where I store my training scripts. This is primarily for personal use at the moment but if people are interested, they are certainly welcome to view/use them.

ncullen93 commented 8 months ago

Such a great resource. Quick question - what is the purpose of the center of mass transform in the batch generator for T1 brain extraction? Is this like basically a quick translation-only registration to the template?

import ants
import numpy as np

# Translation-only alignment: move the image's center of mass onto the
# template's center of mass before the image is fed to the network.
center_of_mass_template = ants.get_center_of_mass(template)
center_of_mass_image = ants.get_center_of_mass(image)
translation = tuple(np.array(center_of_mass_image) - np.array(center_of_mass_template))
xfrm = ants.create_ants_transform(
    transform_type="Euler3DTransform",
    center=center_of_mass_template,
    translation=translation,
    precision="float",
    dimension=image.dimension)

imageX = ants.apply_ants_transform_to_image(xfrm, image, template)

ntustison commented 8 months ago

I do that in several networks and the purpose is to simply ensure proper head orientation.
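To illustrate what that transform computes, here is a minimal NumPy sketch of the center-of-mass translation. The `center_of_mass` function below is a hypothetical re-implementation for illustration only; the actual batch generator uses `ants.get_center_of_mass` on ANTs images, which also accounts for physical spacing and origin.

```python
import numpy as np

def center_of_mass(img):
    # Intensity-weighted centroid of a 3-D array (illustrative stand-in
    # for ants.get_center_of_mass, in voxel coordinates).
    idx = np.indices(img.shape).reshape(3, -1)
    w = img.ravel()
    return idx @ w / w.sum()

# Two synthetic "heads": the second is the first shifted by (2, 0, 1) voxels.
template = np.zeros((16, 16, 16))
template[4:8, 4:8, 4:8] = 1.0
image = np.roll(template, shift=(2, 0, 1), axis=(0, 1, 2))

# The translation the batch generator builds: image COM minus template COM.
translation = center_of_mass(image) - center_of_mass(template)
print(translation)  # approximately [2., 0., 1.]
```

Applying the inverse of this translation centers the head in the template's field of view, which is all the "registration" that is needed at this stage.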

gladomat commented 8 months ago

Thanks! That's a great help. How did you get the ground truth? From what I gather, you used cortical thickness data and the OASIS data:

mask_images_1 = glob.glob(base_data_directory + "CorticalThicknessData2014/*/*HeadMask.nii.gz")
mask_images_2 = glob.glob(base_data_directory + "Oasis3BrainExtractionProcessed/*/*/*/*ants_HeadMask.nii.gz")

Do you have some more info on these datasets? Did you extract the brains using another tool for these two datasets? How many brains in total were there?

ntustison commented 8 months ago

We explain in the Sci Rep paper that the source of the data is our 2014 paper, although we've since refined the training by incorporating other datasets.

gladomat commented 8 months ago

Thanks again for the paper! I've come across problems with the brain extraction. Here are some examples:

[T1 example image]

[FLAIR example image (different patient)]

I assume that the large ventricles are confusing the model, although the problem doesn't occur with a T1 image from the same patient.

Any ideas on how to fix this problem?

Also, would you mind sharing the information about what additional data you used in the refined training (samples, origin, healthy or not or both)?

ntustison commented 8 months ago

> Also, would you mind sharing the information about what additional data you used in the refined training (samples, origin, healthy or not or both)?

Probably ADNI and some other subjects that were found in much the same way you discovered particular datasets that didn't work. I didn't keep track of exactly which subjects from which datasets went into the follow-up refinement. In my opinion, the most crucial factors for a well-performing brain extraction model are a good representation of the different styles of "defacing" typically associated with public datasets, together with data augmentation. Abnormally large ventricles, and the associated large variance in shape, are also important to capture in the network, but that is difficult to generate artificially through augmentation, so I would simply incorporate such subjects into the training data as I collected them.

For specific cases, I would need to see more than just a picture and actually test performance on my end.

Additionally, we have a better distribution of training data for the T1 model, so if I had a T1/FLAIR pair for a subject, I would do brain extraction in T1 space and use registration (between the whole-head T1 and FLAIR) to get the brain mask for the FLAIR.
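That workflow can be sketched as follows. This is a non-authoritative sketch, not the pipeline's actual code: the file paths are placeholders, and the `modality="t1"` argument, the 0.5 probability threshold, and the choice of a rigid transform are assumptions about a reasonable configuration.

```python
import ants
import antspynet

# Placeholder paths: substitute your own whole-head images.
t1 = ants.image_read("t1.nii.gz")
flair = ants.image_read("flair.nii.gz")

# Brain extraction in T1 space: probability map -> binary mask.
prob = antspynet.brain_extraction(t1, modality="t1")
t1_mask = ants.threshold_image(prob, 0.5, 1.0, 1, 0)

# Rigid registration between the two whole-head images.
reg = ants.registration(fixed=flair, moving=t1, type_of_transform="Rigid")

# Propagate the T1 brain mask into FLAIR space; nearest-neighbor
# interpolation keeps the mask binary.
flair_mask = ants.apply_transforms(
    fixed=flair, moving=t1_mask,
    transformlist=reg["fwdtransforms"],
    interpolator="nearestNeighbor")

flair_brain = flair * flair_mask
```

Because both inputs are whole-head images, the registration is not biased by the (possibly faulty) FLAIR extraction, which is the point of doing the extraction in T1 space.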

gladomat commented 8 months ago

Thanks for the details regarding the process! It's much appreciated. So defacing poses a problem; that's good to know. ADNI data has the face intact and already encompasses many brains, but the AD patients do have abnormally large ventricles.

The suggestion to extract the brain from the T1 first and then apply the mask to the FLAIR is what I had in mind as well, since it's the easiest solution for the moment.