
[MICCAI2022] This is an official PyTorch implementation for A Robust Volumetric Transformer for Accurate 3D Tumor Segmentation
MIT License

VT-UNet

This repo contains the supported PyTorch code and configuration files to reproduce the 3D medical image segmentation results of VT-UNet.

VT-UNet Architecture

Our previous code for A Volumetric Transformer for Accurate 3D Tumor Segmentation can be found inside the version 1 folder.

VT-UNet: A Robust Volumetric Transformer for Accurate 3D Tumor Segmentation

Parts of the code are borrowed from nn-UNet.

System requirements

This software was originally developed and run on Ubuntu.

Dataset Preparation

Pre-trained weights

Create Environment Variables

vi ~/.bashrc

source ~/.bashrc
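The variables to add to `~/.bashrc` are not listed here; a plausible sketch, assuming nnU-Net-style variable names inferred from the `vtunet_raw` path used in the Test Model step below (the exact names and paths are assumptions — adjust them to your own dataset location):

```shell
# Assumed variable names, mirroring the nnU-Net convention this repo builds on;
# point them at wherever you placed the DATASET folder.
export vtunet_raw_data_base="/home/VTUNet/DATASET/vtunet_raw"
export vtunet_preprocessed="/home/VTUNet/DATASET/vtunet_preprocessed"
export RESULTS_FOLDER="/home/VTUNet/DATASET/vtunet_trained_models"
```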

Environment setup

Create a virtual environment

Install torch

Install other dependencies
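The three steps above are listed without commands; a minimal sketch, assuming a conda environment (the environment name `vtunet`, the Python version, and the presence of a `requirements.txt` are assumptions — pick the PyTorch/CUDA build matching your driver):

```shell
# Create and activate a fresh environment (name and Python version are assumptions)
conda create -n vtunet python=3.8 -y
conda activate vtunet

# Install PyTorch; choose the CUDA variant that matches your system
pip install torch torchvision

# Install the remaining dependencies (assumes the repo ships a requirements.txt)
pip install -r requirements.txt
```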

Preprocess Data

cd VTUNet

pip install -e .
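Installing the package in editable mode registers its command-line entry points. The preprocessing command itself is not shown here; a hypothetical invocation modeled on the nnU-Net convention the repo borrows from (the `vtunet_plan_and_preprocess` name is an assumption; task id 3 matches the `Task003_tumor` folder used in the Test Model step):

```shell
# Hypothetical entry point modeled on nnU-Net's nnUNet_plan_and_preprocess;
# -t 3 corresponds to Task003_tumor
vtunet_plan_and_preprocess -t 3
```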

Train Model

cd vtunet
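The training command is not listed; a hypothetical invocation modeled on the nnU-Net-style CLI (the entry point name, trainer class, task id, and fold number are all assumptions):

```shell
# Hypothetical training invocation modeled on the nnU-Net CLI:
# configuration, trainer name, task id (3), and fold (0) are assumptions
vtunet_train 3d_fullres vtunetTrainerV2_vtunet_tumor 3 0
```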

Test Model

cd /home/VTUNet/DATASET/vtunet_raw/vtunet_raw_data/vtunet_raw_data/Task003_tumor/
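From inside the task folder, inference would follow the same CLI convention; a hypothetical sketch modeled on nnU-Net's predict command (the entry point name and the input/output folder names are assumptions):

```shell
# Hypothetical inference invocation modeled on nnU-Net's nnUNet_predict;
# imagesTs/inference_out folder names are assumptions
vtunet_predict -i imagesTs -o inference_out -t 3 -m 3d_fullres -f 0
```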

Trained Model Weights

Acknowledgements

This repository makes liberal use of code from open_brats2020, Swin Transformer, Video Swin Transformer, Swin-Unet, nnUNet, and nnFormer.

References

Citing VT-UNet

    @inproceedings{peiris2022robust,
      title={A Robust Volumetric Transformer for Accurate 3D Tumor Segmentation},
      author={Peiris, Himashi and Hayat, Munawar and Chen, Zhaolin and Egan, Gary and Harandi, Mehrtash},
      booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
      pages={162--172},
      year={2022},
      organization={Springer}
    }