
Large Scale Unsupervised Brain MRI Image Registration (LUMIR)

This official repository houses baseline methods, training scripts, and pretrained models for the LUMIR challenge at Learn2Reg 2024. The challenge is dedicated to unsupervised brain MRI image registration and offers a comprehensive dataset of over 4000 preprocessed T1-weighted 3D brain MRI images, available for training, testing, and validation purposes.

Please visit learn2reg.grand-challenge.org for more information.

$${\color{red}New!}$$ - 08/14/2024 - Test phase submission is open; see the Test Phase Submission Guidelines section below!

Dataset:

Over 4000 preprocessed T1-weighted 3D brain MRI images for training, validation, and testing (see the dataset citations below).

Baseline methods:

Learning-based models: TransMorph, VoxelMorph

Learning-based foundation models: SynthMorph, uniGradICON

Optimization-based methods: SyN (ANTs), deedsBCV

Validation dataset results for baseline methods:

| Model | Dice↑ | TRE↓ (mm) | NDV↓ (%) | HdDist95↓ |
| --- | --- | --- | --- | --- |
| TransMorph | 0.7594 ± 0.0319 | 2.4225 | 0.3509 | 3.5074 |
| uniGradICON (w/ IO) | 0.7512 ± 0.0366 | 2.4514 | 0.0001 | 3.5080 |
| uniGradICON (w/o IO) | 0.7369 ± 0.0412 | 2.5733 | 0.0000 | 3.6102 |
| SynthMorph | 0.7243 ± 0.0294 | 2.6099 | 0.0000 | 3.5730 |
| VoxelMorph | 0.7186 ± 0.0340 | 3.1545 | 1.1836 | 3.9821 |
| SyN (ANTs) | 0.6988 ± 0.0561 | 2.6497 | 0.0000 | 3.7048 |
| deedsBCV | 0.6977 ± 0.0274 | 2.2230 | 0.0001 | 3.9540 |
| Initial | 0.5657 ± 0.0263 | 4.3543 | 0.0000 | 4.7876 |

Evaluation metrics:

  1. TRE (Code)
  2. Dice (Code)
  3. HD95 (Code)
  4. Non-diffeomorphic volumes (NDV) (Code); see this article published in IJCV and its associated GitHub page
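
For orientation, here is a minimal NumPy sketch of the first two metrics. This is not the official evaluation code (linked above); the function names, the nearest-voxel landmark lookup, and the isotropic 1 mm default spacing are our own simplifying assumptions.

```python
import numpy as np

def dice_per_label(fixed_seg, warped_seg, labels):
    """Mean Dice overlap between two integer label volumes over a set of labels."""
    scores = []
    for lab in labels:
        f, w = fixed_seg == lab, warped_seg == lab
        denom = f.sum() + w.sum()
        if denom > 0:
            scores.append(2.0 * np.logical_and(f, w).sum() / denom)
    return float(np.mean(scores))

def tre(fixed_pts, moving_pts, disp, spacing=(1.0, 1.0, 1.0)):
    """Target registration error in mm.

    fixed_pts, moving_pts: (N, 3) corresponding landmarks in voxel coordinates.
    disp: (X, Y, Z, 3) voxel displacement field mapping fixed -> moving.
    """
    idx = np.round(fixed_pts).astype(int)               # nearest-voxel lookup (simplification)
    moved = fixed_pts + disp[idx[:, 0], idx[:, 1], idx[:, 2]]
    err = (moved - moving_pts) * np.asarray(spacing)    # voxels -> mm
    return float(np.mean(np.linalg.norm(err, axis=1)))
```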

Test Phase Submission Guidelines:

The test set consists of 590 images, making it impractical to distribute them and collect the deformation fields. As a result, the test set will not be made available to challenge participants. Instead, participants are required to containerize their methods with Docker and submit their Docker containers for evaluation. Your code will not be shared and will be used only internally by the Learn2Reg organizers.

Docker allows for running an algorithm in an isolated environment called a container. In particular, this container will locally replicate your pipeline requirements and execute your inference script.

Detailed instructions on how to build your Docker container are available at learn2reg.grand-challenge.org/lumir-test-phase-submission/. We have provided examples and templates for creating a Docker image for submission on our GitHub. You may find it helpful to start with the example Docker submission we created for TransMorph (available here), or you can start from a blank template (available here).
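
For a sense of what the container does at run time, the sketch below is a minimal, hypothetical inference entrypoint: it loads a fixed/moving pair, calls a placeholder predict_displacement(), and writes the result as .nii.gz. The /input and /output paths and the function name are illustrative only; follow the TransMorph example and the Grand Challenge instructions for the actual I/O contract.

```python
import numpy as np
import nibabel as nib

def predict_displacement(fixed, moving):
    """Placeholder for your model's forward pass; must return a
    (X, Y, Z, 3) array of voxel displacements (fixed -> moving)."""
    raise NotImplementedError

# Hypothetical mount points; the real paths are defined by the submission template.
fixed_img = nib.load("/input/fixed.nii.gz")
moving_img = nib.load("/input/moving.nii.gz")

disp = predict_displacement(fixed_img.get_fdata(), moving_img.get_fdata())
out = nib.Nifti1Image(disp.astype(np.float32), affine=fixed_img.affine)
nib.save(out, "/output/disp_fixed_moving.nii.gz")
```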

Your submission should be a single .zip file containing the following:

LUMIR_[your Grand Challenge username]_TestPhase.zip
├── [your docker image name].tar.gz <------ Your Docker container
├── README.txt                      <------ A description of the requirements for running your model, including the number of CPUs, amount of RAM, and the estimated computation time per subject.
└── validation_predictions.zip      <------ A .zip file containing the predicted displacement fields for the validation dataset, in the format outlined in the Validation Submission Guidelines below.
    ├── disp_3455_3454.nii.gz
    ├── disp_3456_3455.nii.gz
    ├── disp_3457_3456.nii.gz
    ├── disp_3458_3457.nii.gz
    ├── ...
    └── ...
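
Packaging the predictions is plain standard-library zipping; a small sketch (file and folder names taken from the layout above, everything else illustrative):

```python
import zipfile
from pathlib import Path

pred_dir = Path("validation_predictions")  # folder holding your disp_*.nii.gz files
with zipfile.ZipFile("validation_predictions.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for f in sorted(pred_dir.glob("disp_*.nii.gz")):
        zf.write(f, arcname=f.name)  # files at the archive root, as in the tree above
```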

You will need to submit by 31 August 2024.

Please choose ONE of the following:

Validation Submission Guidelines:

Displacement fields must be provided for all registrations, using the file naming format disp_PatID1_PatID2, where PatID1 and PatID2 represent the subject IDs of the fixed and moving images, respectively. The evaluation process requires the files to be organized in the following structure:

folder.zip
└── folder
    ├── disp_3455_3454.nii.gz
    ├── disp_3456_3455.nii.gz
    ├── disp_3457_3456.nii.gz
    ├── disp_3458_3457.nii.gz
    ├── ...
    └── ...

Submissions must be uploaded as a single .zip file containing displacement fields (displacements only) for all validation pairs for all tasks (even when participating in only a subset of the tasks; in that case, submit deformation fields of zeros for all remaining tasks). You can find the validation pairs in LUMIR_dataset.json. The convention used for displacement fields follows scipy's map_coordinates() function, expecting displacement fields in the format [X, Y, Z, [x, y, z]] or [[x, y, z], X, Y, Z], where x, y, z represent the voxel displacements and X, Y, Z the image dimensions. The evaluation script expects .nii.gz files in full-precision format with shape 160×224×196×3. Further information can be found here.
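
As a concrete sketch of writing one field in this layout (assuming "full precision" means float32 and using an identity affine, both of which you should verify against the linked specification):

```python
import numpy as np
import nibabel as nib

def save_displacement(disp, out_path, affine=np.eye(4)):
    """Save a displacement field for submission.

    disp: (160, 224, 196, 3) array of voxel displacements following the
    scipy map_coordinates() convention described above.
    """
    disp = np.asarray(disp, dtype=np.float32)        # full precision, as required
    assert disp.shape == (160, 224, 196, 3), disp.shape
    nib.save(nib.Nifti1Image(disp, affine), out_path)

# e.g. save_displacement(disp, "disp_3455_3454.nii.gz")
```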

Note for PyTorch users: when using PyTorch as your deep learning framework, you will most likely warp your images with the grid_sample() routine. Please be aware that this function uses a different convention from ours, expecting sampling grids in the format [X, Y, Z, [x, y, z]] with coordinates normalized between -1 and 1. Prior to your submission, you should therefore convert your displacement fields to match our convention.
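
One way to perform that conversion, assuming a grid built for grid_sample() with align_corners=True; whether the (x, y, z) component order then matches the challenge's axis order depends on how your volumes were loaded, so verify against the evaluation specification:

```python
import torch

def gridsample_grid_to_voxel_disp(grid):
    """Convert a grid_sample() sampling grid into voxel displacements.

    grid: (1, D, H, W, 3) tensor of coordinates normalized to [-1, 1],
          last dim ordered (x, y, z), built with align_corners=True.
    Returns a (D, H, W, 3) tensor of voxel displacements.
    """
    _, D, H, W, _ = grid.shape
    sizes = torch.tensor([W, H, D], dtype=grid.dtype)   # x -> W, y -> H, z -> D
    # Undo the [-1, 1] normalization (align_corners=True convention).
    voxel_coords = (grid[0] + 1) * (sizes - 1) / 2
    # Subtract the identity grid to obtain displacements.
    zz, yy, xx = torch.meshgrid(
        torch.arange(D, dtype=grid.dtype),
        torch.arange(H, dtype=grid.dtype),
        torch.arange(W, dtype=grid.dtype),
        indexing="ij",
    )
    identity = torch.stack([xx, yy, zz], dim=-1)
    return voxel_coords - identity
```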

Citations for dataset usage:

@article{dufumier2022openbhb,
  title={OpenBHB: a large-scale multi-site brain MRI data-set for age prediction and debiasing},
  author={Dufumier, Benoit and Grigis, Antoine and Victor, Julie and Ambroise, Corentin and Frouin, Vincent and Duchesnay, Edouard},
  journal={NeuroImage},
  volume={263},
  pages={119637},
  year={2022},
  publisher={Elsevier}
}

@article{taha2023magnetic,
  title={Magnetic resonance imaging datasets with anatomical fiducials for quality control and registration},
  author={Taha, Alaa and Gilmore, Greydon and Abbass, Mohamad and Kai, Jason and Kuehn, Tristan and Demarco, John and Gupta, Geetika and Zajner, Chris and Cao, Daniel and Chevalier, Ryan and others},
  journal={Scientific Data},
  volume={10},
  number={1},
  pages={449},
  year={2023},
  publisher={Nature Publishing Group}
}

@article{marcus2007open,
  title={Open Access Series of Imaging Studies (OASIS): cross-sectional MRI data in young, middle aged, nondemented, and demented older adults},
  author={Marcus, Daniel S and Wang, Tracy H and Parker, Jamie and Csernansky, John G and Morris, John C and Buckner, Randy L},
  journal={Journal of Cognitive Neuroscience},
  volume={19},
  number={9},
  pages={1498--1507},
  year={2007},
  publisher={MIT Press}
}

If you have used non-diffeomorphic volumes (NDV) in the evaluation of deformation regularity, please cite the following:

@article{liu2024finite,
  title={On finite difference Jacobian computation in deformable image registration},
  author={Liu, Yihao and Chen, Junyu and Wei, Shuwen and Carass, Aaron and Prince, Jerry},
  journal={International Journal of Computer Vision},
  pages={1--11},
  year={2024},
  publisher={Springer}
}