neuronflow / BraTS-Toolkit

Code to preprocess, segment, and fuse glioma MRI scans based on the BraTS Toolkit manuscript.
https://www.frontiersin.org/articles/10.3389/fnins.2020.00125/full
GNU Affero General Public License v3.0

What are the specific preprocessing/standardization steps used for the BraTS challenge? #12

Closed jayurbain closed 2 years ago

jayurbain commented 3 years ago

Hi,

I'm trying to replicate the image standardization steps used in the BraTS challenge. The most recent paper is not specific about, for example, the interpolation algorithm, normalization prior to registration, or the order of resizing and interpolation. The 2015 paper is more specific, but I'm not confident it's still accurate. Even the 2015 paper does not describe how the images are resized to (240, 240, 155). If someone can point me to the specific steps and algorithms used (or code), I would appreciate it.

For reference, here's a quote from the 2020 BraTS challenge paper: "The provided data are distributed after their harmonization, following standardization pre-processing without affecting the apparent information in the images. Specifically, the pre-processing routines applied in all the BraTS mpMRI scans include co-registration to the same anatomical template [6], interpolation to a uniform isotropic resolution (1mm3), and skull-stripping."

2015 paper: "To homogenize these data we co-registered each subject's image volumes rigidly to the T1c MRI, which had the highest spatial resolution in most cases, and resampled all images to 1 mm isotropic resolution in a standardized axial orientation with a linear interpolator. We used a rigid registration model with the mutual information similarity metric as it is implemented in ITK [74] (“VersorRigid3DTransform” with “MattesMutualInformation” similarity metric and three multi-resolution levels). No attempt was made to put the individual patients in a common reference space. All images were skull stripped [75] to guarantee anonymization of the patients."

Thanks, Jay Urbain

kondratevakate commented 2 years ago

Interested in that as well +

neuronflow commented 2 years ago

Thanks for your interest in BraTS Toolkit (btk), and sorry for the late answer. From my understanding, BraTS preprocessing for challenges after 2017 (and probably also before) happened at CBICA. I believe parts of the preprocessing pipeline were swapped over the years, e.g. the skull-stripping improved.

From my understanding, a "standard" BraTS preprocessing means:

  1. registering images into BraTS space (btk implements this with ANTs; the recent UPenn pipeline uses greedy, as far as I know)
  2. skull-stripping (btk implements HD-BET, with ROBEX as a fallback; I am not sure which skull-stripper is currently in the official pipeline)
  3. normalization (I have seen BraTS images with and without normalization; when I train neural networks I usually include my own normalization, so I don't mind much about it)
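Since step 3 varies between datasets, here is a minimal sketch of the kind of channel-wise z-score normalization one might apply per modality. This is an assumption on my part, not the official pipeline; the `brain_mask` argument is hypothetical and restricts the statistics to brain voxels after skull-stripping:

```python
import numpy as np

def zscore_normalize(volume, brain_mask=None):
    """Z-score one modality; statistics come from brain voxels only if a mask is given."""
    voxels = volume[brain_mask] if brain_mask is not None else volume
    mu = voxels.mean()
    sigma = voxels.std()
    # Guard against a zero std (e.g. an all-background volume)
    return (volume - mu) / max(sigma, 1e-8)
```

Applied independently per modality (T1, T1c, T2, FLAIR), this is the usual "channel-wise" normalization people add in their own training code.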

I am sure Spyros as one of the official BraTS organizers could provide more accurate details.

To answer your question specifically: by registering to the BraTS atlas (the "same anatomical template [6]"), the images are resampled to the "uniform isotropic resolution (1mm3)". Btk first coregisters everything to T1 space (with B-spline interpolation), then registers T1 to BraTS space. The T1c, T2, and FLAIR images are then first transformed to T1 space and from there to BraTS space, to save computation time.

The preprocessing module in btk creates a lot of additional output, including skull-stripped images in native space (which are the reason for the above procedure), which might be helpful for other projects/intents.

We discussed and agreed to include the official BraTS preprocessing pipeline in btk; however, I am currently busy with other research projects. I would be happy to accept applications for this project if anyone interested reads this :)

kondratevakate commented 2 years ago

Ok, I am interested)

The preprocessing module in btk creates a lot of additional output, including skull-stripped images in native space (which are the reason for the above procedure), which might be helpful for other projects/intents.

That is exactly the project I am in now)

So, is everything that is currently used for preprocessing RAW DICOMs inside this docker (https://hub.docker.com/r/brats/mic-dkfz)?

neuronflow commented 2 years ago

Cool! If I understand you correctly, you would be interested in integrating the UPenn pipeline into btk?

I have a few deadlines in the upcoming days, but after this I am happy to schedule a call towards the end of the week.

No, the above docker image is for tumor segmentation by Fabian Isensee, not for preprocessing.

kondratevakate commented 2 years ago

No, the above docker image is for tumor segmentation by Fabian Isensee, not for preprocessing.

I see

Yep, let's arrange a call. How can I contact you? Or you can just write to me at ekaterina.kondrateva(at)skoltech.ru

neuronflow commented 2 years ago

UPDATE: Most BraTS data seems to be preprocessed using this pipeline: https://cbica.github.io/CaPTk/preprocessing_brats.html

Unfortunately I have no experience with it.

@kondratevakate would you be interested in packaging it into a docker? The btk preprocessing module could then offer this mode, providing a consistent UX.

sarthakpati commented 2 years ago

The official BraTS preprocessing pipeline is part of the Cancer Imaging Phenomics Toolkit (CaPTk) and you can find more details here: https://cbica.github.io/CaPTk/preprocessing_brats.html

jayurbain commented 2 years ago

Thanks for your responses. My objective was to augment in-house data with BraTS training data to create a larger training set, and potentially to leverage models developed for BraTS. A model trained on BraTS did not segment in-house standardized images well, and vice versa. The important factors to make this work, after registration to T1c and skull stripping, were simply resampling the images to 1 mm^3 voxels using bicubic interpolation, followed by centered resizing to the BraTS dimensions, followed by channel-wise standard normalization.
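A centered resizing to the BraTS grid (240, 240, 155) like the one described could look roughly like this; this is a sketch of my reading of the comment above, not the poster's actual code:

```python
import numpy as np

def center_fit(vol, target=(240, 240, 155)):
    """Center-crop or zero-pad each axis so the volume matches the BraTS grid."""
    out = vol
    for ax, t in enumerate(target):
        s = out.shape[ax]
        if s > t:  # too large: crop symmetrically around the center
            start = (s - t) // 2
            out = np.take(out, range(start, start + t), axis=ax)
        elif s < t:  # too small: zero-pad symmetrically
            before = (t - s) // 2
            pad = [(0, 0)] * out.ndim
            pad[ax] = (before, t - s - before)
            out = np.pad(out, pad)
    return out
```

Run after the 1 mm^3 resampling, this only ever trims or pads background, so the channel-wise normalization that follows sees the same brain voxels.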

Unfortunately, the ground-truth segmentation masks for the in-house images are not defined the same way as the BraTS segmentation masks.

neuronflow commented 2 years ago

I would consider registering everything to a 1 mm isotropic atlas, just as these two pipelines do, then training your network in this atlas space. You can then morph everything back to native space if you need to.

jayurbain commented 2 years ago

Thanks