
Robust Segmentation and Labeling of Vertebrae, Intervertebral Discs, Spinal Cord, and Spinal Canal in MRI Images Using nnU-Net and Iterative Algorithm.
GNU Lesser General Public License v3.0

TotalSpineSeg


TotalSpineSeg is a tool for automatic instance segmentation of all vertebrae, intervertebral discs (IVDs), spinal cord, and spinal canal in MRI images. It is robust to various MRI contrasts, acquisition orientations, and resolutions. The model used in TotalSpineSeg is based on nnU-Net as the backbone for training and inference.

If you use this model, please cite our work:

Warszawer Y, Molinier N, Valošek J, Shirbint E, Benveniste PL, Achiron A, Eshaghi A and Cohen-Adad J. Fully Automatic Vertebrae and Spinal Cord Segmentation Using a Hybrid Approach Combining nnU-Net and Iterative Algorithm. Proceedings of the 32nd Annual Meeting of ISMRM, 2024.

Please also cite nnU-Net since our work is heavily based on it:

Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., & Maier-Hein, K. H. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211.



Model Description

TotalSpineSeg uses a hybrid approach that integrates nnU-Net with an iterative algorithm for instance segmentation and labeling of vertebrae, intervertebral discs (IVDs), spinal cord, and spinal canal. The process involves two main steps:

Step 1: An nnU-Net model (Dataset101) was trained to identify nine classes in total. This includes four main classes: spinal cord, spinal canal, IVDs, and vertebrae. Additionally, it identifies four specific IVDs (C2-C3, C7-T1, T12-L1, and L5-S), which serve as key anatomical landmarks along the spine, as well as the C1 vertebra, used to determine whether the MRI image includes C1 (Figure 1B). The output segmentation is then processed with an iterative algorithm to extract the individual IVDs (Figure 1C), from which the odd IVDs are extracted (Figure 1D).

Step 2: A second nnU-Net model (Dataset102) was trained to identify ten classes in total. This includes five main classes: spinal cord, spinal canal, IVDs, odd vertebrae, and even vertebrae. Additionally, it identifies four specific IVDs (C2-C3, C7-T1, T12-L1, and L5-S), which serve as key anatomical landmarks along the spine, as well as the sacrum (Figure 1E). This model uses two input channels: the MRI image (Figure 1A) and the odd IVDs extracted in the first step (Figure 1D). The output segmentation is then processed with an algorithm that assigns an individual label to each vertebra and IVD in the final segmentation (Figure 1F).

For comparison, we also trained a single model (Dataset103) that outputs individual label values for each vertebra and IVD in a single step.
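For illustration, the two-step pipeline roughly corresponds to two nnU-Net inference calls with the iterative IVD extraction in between. The following is a minimal sketch only: the folder names are hypothetical, the 3d_fullres configuration and fold 0 are assumptions, and in practice the whole pipeline is wrapped by the totalspineseg command described in the Inference section.

# Step 1 (Dataset101): cord, canal, IVDs, vertebrae, landmark IVDs, and C1
nnUNetv2_predict -i step1_input -o step1_raw -d 101 -c 3d_fullres -f 0
# Iterative algorithm (part of this package): split the IVD class into individual
# IVDs and keep the odd IVDs as a second input channel (nnU-Net's *_0001 files)
# Step 2 (Dataset102): MRI + odd IVDs as two input channels
nnUNetv2_predict -i step2_input -o step2_raw -d 102 -c 3d_fullres -f 0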

Figure 1

Figure 1: Illustration of the hybrid method for automatic segmentation of spinal structures. (A) MRI image used to train the Step 1 model. (B) The Step 1 model outputs nine classes. (C) Individual IVDs extracted from the output labels. (D) Odd IVDs extracted from the individual IVDs. (E) MRI image and odd IVDs used as inputs to train the Step 2 model, which outputs ten classes. (F) Final segmentation with individual labels for each vertebra and IVD.

Datasets

The TotalSpineSeg model was trained on the following three main datasets:

We used manual labels from the SPIDER dataset. For other datasets, we generated initial labels by registering MRIs to the PAM50 template using Spinal Cord Toolbox (SCT). We trained an initial segmentation model with these labels, applied it to the datasets, and manually corrected the outputs using 3D Slicer.

Additional public datasets were used during this project to generate sacrum segmentations:

When manual sacrum segmentations were not available, they were generated using the TotalSegmentator model. For more information, please see this issue.

Dependencies

Installation

  1. Open a bash terminal in the directory where you want to work.

  2. Create the installation directory:

    mkdir TotalSpineSeg
    cd TotalSpineSeg
  3. Create and activate a virtual environment (highly recommended):

    python3 -m venv venv
    source venv/bin/activate
  4. Clone and install this repository:

    git clone https://github.com/neuropoly/totalspineseg.git
    python3 -m pip install -e totalspineseg
  5. For CUDA GPU support, install PyTorch following the instructions on their website. Be sure to add the --upgrade flag to your installation command to replace any existing PyTorch installation. Example:

     python3 -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 --upgrade
  6. Set the path to TotalSpineSeg and data folders in the virtual environment:

    mkdir data
    export TOTALSPINESEG="$(realpath totalspineseg)"
    export TOTALSPINESEG_DATA="$(realpath data)"
    echo "export TOTALSPINESEG=\"$TOTALSPINESEG\"" >> venv/bin/activate
    echo "export TOTALSPINESEG_DATA=\"$TOTALSPINESEG_DATA\"" >> venv/bin/activate

Note: If you pull a new version from GitHub, make sure to reinstall the package to apply the updates using the following command:

python3 -m pip install -e $TOTALSPINESEG --upgrade
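
To quickly check the installation, you can verify that the environment variables are set and that the totalspineseg command (described in the Inference section below) is available:

source venv/bin/activate
echo "$TOTALSPINESEG"        # should print the path to the cloned repository
echo "$TOTALSPINESEG_DATA"   # should print the path to the data folder
totalspineseg --help         # should print the command-line usage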

Training

To train the TotalSpineSeg model, you will need the following hardware specifications:

Please ensure that your system meets these requirements before proceeding with the training process.

  1. Make sure the bash terminal is open and that the virtual environment (if used) is activated (using source <path to installation directory>/venv/bin/activate).

  2. Ensure training dependencies are installed:

    apt-get install git git-annex jq -y
  3. Download the required datasets into $TOTALSPINESEG_DATA/bids (make sure you have access to the specified repositories):

    bash "$TOTALSPINESEG"/scripts/download_datasets.sh
  4. Temporary step (until all labels are pushed into the repositories) - Download labels into $TOTALSPINESEG_DATA/bids:

    curl -L -O https://github.com/neuropoly/totalspineseg/releases/download/labels/labels_iso_bids_0924.zip
    unzip -qo labels_iso_bids_0924.zip -d "$TOTALSPINESEG_DATA"
    rm labels_iso_bids_0924.zip
  5. Prepare datasets in nnUNetv2 structure into $TOTALSPINESEG_DATA/nnUnet:

    bash "$TOTALSPINESEG"/scripts/prepare_datasets.sh [DATASET_ID] [-noaug]

    The script optionally accepts DATASET_ID as the first positional argument to specify which dataset to prepare. It can be 101, 102, 103, or all. If all is specified, it will prepare all datasets (101, 102, 103). By default, it will prepare datasets 101 and 102.

    Additionally, you can use the -noaug parameter to prepare the datasets without data augmentations.
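
    For example, to prepare all three datasets, or to prepare only Dataset101 without data augmentations:

    bash "$TOTALSPINESEG"/scripts/prepare_datasets.sh all
    bash "$TOTALSPINESEG"/scripts/prepare_datasets.sh 101 -noaug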

  6. Train the model:

    bash "$TOTALSPINESEG"/scripts/train.sh [DATASET_ID [FOLD]]

    The script optionally accepts DATASET_ID as the first positional argument to specify which dataset to train. It can be 101, 102, 103, or all. If all is specified, it will train all datasets (101, 102, 103). By default, it will train datasets 101 and 102.

    Additionally, you can specify FOLD as the second positional argument to select the fold. It can be 0, 1, 2, 3, 4, 5, or all. By default, it will train with fold 0.
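
    For example, to train Dataset101 on fold 0, or to train all datasets:

    bash "$TOTALSPINESEG"/scripts/train.sh 101 0
    bash "$TOTALSPINESEG"/scripts/train.sh all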

Inference

  1. Make sure the bash terminal is open and that the virtual environment (if used) is activated (using source <path to installation directory>/venv/bin/activate).

  2. Run the model on a folder containing the images in .nii.gz format, or on a single .nii.gz file:

    totalspineseg INPUT OUTPUT_FOLDER [--step1] [--iso]

    This will process the single image or all images in INPUT and save the results in OUTPUT_FOLDER. If you haven't trained the models, the script will automatically download the pre-trained models from the GitHub release.

    Important Note: By default, the output segmentations are resampled back to the input image space. If you prefer to obtain the outputs in the model's native 1 mm isotropic resolution, which is especially useful for visualization, we strongly recommend using the --iso argument.

    Additionally, you can use the --step1 parameter to run only the step 1 model, which outputs a single label for all vertebrae, including the sacrum.

    For more options, you can use the --help parameter:

    totalspineseg --help
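
    For example (the file and folder names here are hypothetical), to process a single image and keep the model's 1 mm isotropic resolution, or to run only the step 1 model on a folder of images:

    totalspineseg sub-01_T2w.nii.gz results --iso
    totalspineseg images results --step1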

Output Data Structure:

output_folder/
├── input/                   # Preprocessed input images
├── preview/                 # Preview images for all steps
├── step1_raw/               # Raw outputs from step 1 model
├── step1_output/            # Results of iterative labeling algorithm for step 1
├── step1_cord/              # Spinal cord soft segmentations
├── step1_canal/             # Spinal canal soft segmentations
├── step1_levels/            # Single voxel in canal centerline at each IVD level
├── step2_raw/               # Raw outputs from step 2 model
└── step2_output/            # Results of iterative labeling algorithm for step 2 (final output)
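
For example, assuming the output files keep the input file names, the final labeled segmentation for an input image sub-01_T2w.nii.gz would be found at:

ls output_folder/step2_output/sub-01_T2w.nii.gz   # final multi-label segmentation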

Important Note: While TotalSpineSeg provides spinal cord segmentation, it is not intended to replace validated methods for cross-sectional area (CSA) analysis. The spinal cord segmentation from TotalSpineSeg has not been validated for CSA measurements, nor has it been tested on cases involving spinal cord compressions, MS lesions, or other spinal cord abnormalities. For accurate CSA analysis, we strongly recommend using the validated algorithms available in the Spinal Cord Toolbox.

Key points:

Localizer-based labeling

TotalSpineSeg supports using localizer images to improve the labeling process. This is particularly useful for images whose field of view (FOV) does not include landmarks such as C1 or the sacrum: the localizer provides the information needed to correctly label the vertebrae and discs in the main image.

Localizer

Example of directory structure:

.
├── images/
│   ├── sub-01_T2w.nii.gz
│   └── sub-02_T2w.nii.gz
└── localizers/
    ├── sub-01_T1w.nii.gz
    └── sub-02_T1w.nii.gz

In this example, main images are placed in the images folder and corresponding localizer images in the localizers folder.

To use localizer-based labeling:

# Process localizer images. We recommend using the --iso flag for the localizer to ensure consistent resolution.
totalspineseg localizers localizers_output --iso

# Run model on main images using localizer output
totalspineseg images output --loc localizers_output/step2_output --suffix _T2w --loc-suffix _T1w

Note: If the localizer and main image files have the same names, you can omit the --suffix and --loc-suffix arguments.
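
For example, if images/ and localizers/ contain identically named files (e.g., sub-01.nii.gz in both folders), the second command simplifies to:

totalspineseg images output --loc localizers_output/step2_output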

Results

TotalSpineSeg demonstrates robust performance across a wide range of imaging parameters. Here are some examples of the model output:

Model Output Preview

The examples shown above include segmentation results on various contrasts (T1w, T2w, STIR, MTS, T2star, and even CT images), acquisition orientations (sagittal, axial), and resolutions.

List of Classes

Label  Name
1      spinal_cord
2      spinal_canal
11     vertebrae_C1
12     vertebrae_C2
13     vertebrae_C3
14     vertebrae_C4
15     vertebrae_C5
16     vertebrae_C6
17     vertebrae_C7
21     vertebrae_T1
22     vertebrae_T2
23     vertebrae_T3
24     vertebrae_T4
25     vertebrae_T5
26     vertebrae_T6
27     vertebrae_T7
28     vertebrae_T8
29     vertebrae_T9
30     vertebrae_T10
31     vertebrae_T11
32     vertebrae_T12
41     vertebrae_L1
42     vertebrae_L2
43     vertebrae_L3
44     vertebrae_L4
45     vertebrae_L5
50     sacrum
63     disc_C2_C3
64     disc_C3_C4
65     disc_C4_C5
66     disc_C5_C6
67     disc_C6_C7
71     disc_C7_T1
72     disc_T1_T2
73     disc_T2_T3
74     disc_T3_T4
75     disc_T4_T5
76     disc_T5_T6
77     disc_T6_T7
78     disc_T7_T8
79     disc_T8_T9
80     disc_T9_T10
81     disc_T10_T11
82     disc_T11_T12
91     disc_T12_L1
92     disc_L1_L2
93     disc_L2_L3
94     disc_L3_L4
95     disc_L4_L5
100    disc_L5_S