SupWMA

[MedIA 2023] "Superficial White Matter Analysis: An Efficient Point-cloud-based Deep Learning Framework with Supervised Contrastive Learning for Consistent Tractography Parcellation across Populations and dMRI Acquisitions", Medical Image Analysis
https://supwma.github.io

Overview

--- You are on the main branch ---

This repository has two branches: main and ISBI. The main branch is for our work in Medical Image Analysis; the ISBI branch is for our work at ISBI 2022 (finalist for the best paper award).

SupWMA -- MedIA

This repository releases the source code, pre-trained models, and a testing sample for the work "Superficial White Matter Analysis: An Efficient Point-cloud-based Deep Learning Framework with Supervised Contrastive Learning for Consistent Tractography Parcellation across Populations and dMRI Acquisitions", which was accepted by Medical Image Analysis.

(Figure: SupWMA framework overview)

License

The contents of this repository are released under the Slicer license.

Dependencies:

conda create --name SupWMA python=3.6.10

conda activate SupWMA

conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=10.1 -c pytorch

pip install git+https://github.com/SlicerDMRI/whitematteranalysis.git

pip install h5py

pip install scikit-learn
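After installing, a quick sanity check (a minimal sketch, not part of the repository) can confirm that the pinned PyTorch build sees your GPU before training:

import torch

print(torch.__version__)           # expect 1.7.0 with the pins above
print(torch.cuda.is_available())   # expect True on a CUDA 10.1 machine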

Train two-stage model with contrastive learning

Train with our dataset

  1. Download TrainData_TwoStage.tar.gz (https://github.com/SlicerDMRI/SupWMA-TrainingData/releases) to ./, and tar -xzvf TrainData_TwoStage.tar.gz
  2. Run sh train_supwma_s1.sh && sh train_supwma_s2.sh

Train using your custom dataset

Your input streamline features should have shape (number_streamlines, number_points_per_streamline, 3), and your labels should have shape (number_streamlines,). You can save and load features and labels as .h5 files.
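As a minimal sketch of this layout (the file name and the dataset keys "features"/"labels" are illustrative assumptions, not names the training code requires):

import h5py
import numpy as np

num_streamlines, num_points = 10000, 15   # 15 points per streamline is an assumed example value
features = np.random.rand(num_streamlines, num_points, 3).astype(np.float32)   # (N, P, 3) point coordinates
labels = np.random.randint(0, 199, size=(num_streamlines,))                    # (N,) class labels

with h5py.File('train_data.h5', 'w') as f:   # hypothetical file name
    f.create_dataset('features', data=features)
    f.create_dataset('labels', data=labels)

with h5py.File('train_data.h5', 'r') as f:
    assert f['features'].shape == (num_streamlines, num_points, 3)
    assert f['labels'].shape == (num_streamlines,)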

You can start training on your custom dataset using the ISBI branch, where we provide training code for one-stage training with and without contrastive learning. The training code in the main branch is for two-stage training with contrastive learning.

Train/Val results

We calculated accuracy, precision, recall, and F1 score over 198 SWM clusters plus one "non-SWM" class (199 classes in total). The "non-SWM" class consists of SWM outlier clusters and other streamlines (DWM).
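For reference, these metrics can be computed from predicted and true labels with scikit-learn (a minimal sketch; the macro averaging here is an illustrative choice, not necessarily the averaging used in the paper):

import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = np.random.randint(0, 199, size=1000)   # toy ground-truth labels over 199 classes
y_pred = np.random.randint(0, 199, size=1000)   # toy predictions

acc = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average='macro', zero_division=0)
print(f'acc={acc:.3f} precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}')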

Test (SWM parcellation)

We highly recommend using the pre-trained model provided here (two-stage with contrastive learning) to parcellate your own data.

  1. Install 3D Slicer (https://www.slicer.org) and SlicerDMRI (http://dmri.slicer.org).
  2. Download TrainedModels_TwoStage.tar.gz (https://github.com/SlicerDMRI/SupWMA/releases) to ./, and tar -xzvf TrainedModels_TwoStage.tar.gz
  3. Download TestData.tar.gz (https://github.com/SlicerDMRI/SupWMA/releases) to ./, and tar -xzvf TestData.tar.gz
  4. Run sh SupWMA_TwoStage.sh

Test parcellation results

VTP files of the 198 superficial white matter clusters and one non-SWM cluster are in ./SupWMA-TwoStage_parcellation_results/[subject_id]/[subject_id]_prediction_clusters_outlier_removed.

You can visualize them using 3D Slicer.
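The clusters can also be inspected programmatically with the whitematteranalysis package installed above; a minimal sketch (the subject ID and cluster file name are illustrative assumptions):

import whitematteranalysis as wma

# read one parcellated cluster (hypothetical file under the output layout above)
pd = wma.io.read_polydata('./SupWMA-TwoStage_parcellation_results/subject1/'
                          'subject1_prediction_clusters_outlier_removed/cluster_00001.vtp')
print('streamlines in cluster:', pd.GetNumberOfLines())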

(Figure: SWM parcellation results)

References

Please cite the following papers when using the code and/or the training data:

Tengfei Xue, Fan Zhang, Chaoyi Zhang, Yuqian Chen, Yang Song, Alexandra J. Golby, Nikos Makris, Yogesh Rathi, Weidong Cai, and Lauren J. O'Donnell. "Superficial White Matter Analysis: An Efficient Point-Cloud-Based Deep Learning Framework with Supervised Contrastive Learning for Consistent Tractography Parcellation across Populations and dMRI Acquisitions." Medical Image Analysis 85 (2023): 102759.

Tengfei Xue, Fan Zhang, Chaoyi Zhang, Yuqian Chen, Yang Song, Nikos Makris, Yogesh Rathi, Weidong Cai, and Lauren J. O'Donnell. "SupWMA: Consistent and Efficient Tractography Parcellation of Superficial White Matter with Deep Learning." In 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI). IEEE, 2022.

Fan Zhang, Ye Wu, Isaiah Norton, Yogesh Rathi, Nikos Makris, and Lauren J. O'Donnell. "An Anatomically Curated Fiber Clustering White Matter Atlas for Consistent White Matter Tract Parcellation across the Lifespan." NeuroImage 179 (2018): 429-447.

For projects using Slicer and SlicerDMRI, please also include the following text (or similar) and citations: