HITLAB-DeepIGeoS / DeepIGeoS

PyTorch implementation of DeepIGeoS

:brain: DeepIGeoS

Implementation of the DeepIGeoS paper

:page_facing_up: DeepIGeoS: A Deep Interactive Geodesic Framework for Medical Image Segmentation (2018)

Additional documentation: Notion page (in Korean :kr:)

👨🏻‍💻 Contributors

이영석, 이주호, 이준호, 정경중, 손소연, 조현우

:mag: Prerequisites

Please check the environments and requirements below before you start. If needed, upgrade or install the listed packages for smooth running.

Ubuntu Python PyTorch Qt

☺︎ Environments

Ubuntu 16.04
Python 3.7.11

☺︎ Requirements

dotmap
GeodisTK
opencv-python
tensorboard
torch
torchio
torchvision
tqdm
PyQt5

:film_strip: Datasets

Download the BraTS 2021 dataset from the BraTS 2021 website using load_datasets.sh.

$ bash load_datasets.sh

:computer: Train

☺︎ P-Net

$ python train_pnet.py -c configs/config_pnet.json

☺︎ R-Net

$ python train_rnet.py -c configs/config_rnet.json
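Both training scripts take a `-c` flag pointing to a JSON config file. As a minimal sketch (the helper name and config fields here are hypothetical, not taken from this repo), such a flag is typically parsed like this:

```python
import argparse
import json

def parse_config(argv):
    # Parse the -c/--config flag and load the JSON file it points to.
    # (Hypothetical helper for illustration; the repo uses dotmap on top
    # of the loaded dict for attribute-style access.)
    parser = argparse.ArgumentParser()
    parser.add_argument("-c", "--config", required=True)
    args = parser.parse_args(argv)
    with open(args.config) as f:
        return json.load(f)
```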

☺︎ Tensorboard

$ tensorboard --logdir experiments/logs/

:computer: Run

☺︎ Simple QT Application

To operate DeepIGeoS with simple mouse-click interactions, we created a Qt-based application. Run it with the main script, main_deepigeos.py, as shown below.

$ python main_deepigeos.py

:dna: Results

☺︎ with the Simulated Interactions

Simulated user interactions are generated on the three slices with the largest mis-segmentation, one per axis (sagittal, coronal, and axial), following these rules:

  1. To find mis-segmented regions, the automatic segmentations from P-Net are compared with the ground truth.
  2. The user interactions on each mis-segmented region are then simulated by randomly sampling n pixels in that region. (If the size of one connected under-segmented or over-segmented region is Nm, we set n for that region to 0 if Nm < 30 and [Nm/100] otherwise.)
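The sampling rule above can be sketched in pure Python. This assumes the pixels of one connected mis-segmented region are already given as a coordinate list (the function name is ours, not the repo's, and we use floor division for [Nm/100]):

```python
import random

def simulate_clicks(region_pixels, seed=None):
    """Sample simulated user clicks from one connected mis-segmented region.

    Rule from the paper: if the region has Nm < 30 pixels, no clicks are
    generated; otherwise Nm // 100 pixels are drawn at random.
    (Floor division is our reading of [Nm/100].)
    """
    rng = random.Random(seed)
    nm = len(region_pixels)
    n = 0 if nm < 30 else nm // 100
    return rng.sample(region_pixels, n)
```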
(Figures: results on sagittal, coronal, and axial slices)

☺︎ with the User Interaction

Results with user interactions are shown below:

https://user-images.githubusercontent.com/36390128/159625917-1528c01b-fd4a-4fcc-9f20-f6d070fd4822.mp4

  1. Click LOAD IMG button to load image
  2. Click P-NET button to operate automatic segmentation
  3. Check the automatic segmentation results
  4. Point where to refine results by mouse click
    • Circle : Under-segmented region
    • Square : Over-segmented region
  5. Click R-NET button to operate refinement
  6. Check refined segmentation results
(Figures: P-Net mask, R-Net mask, and ground-truth mask)

:page_facing_up: Background of DeepIGeoS

The following subsections describe all the steps we followed to understand and implement the DeepIGeoS paper.

☺︎ Abstract

☺︎ Architecture

(Figure: DeepIGeoS architecture)

https://arxiv.org/abs/1707.00652

Two-stage pipeline: P-Net obtains an automatic initial segmentation, and R-Net refines it using a small number of user interactions that are encoded as geodesic distance maps.
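The two-stage flow can be summarized in a short structural sketch. All callables here are placeholders standing in for the repo's actual networks and helpers, not its API:

```python
def segment_interactively(image, pnet, rnet, get_user_clicks, geodesic_map):
    """Two-stage DeepIGeoS flow: automatic P-Net prediction, then R-Net
    refinement driven by geodesic-encoded user clicks.

    Every argument is a placeholder callable for illustration only.
    """
    # Stage 1: P-Net produces an automatic initial segmentation.
    initial = pnet(image)

    # The user marks under-segmented (foreground) and over-segmented
    # (background) pixels on the initial result.
    fg_clicks, bg_clicks = get_user_clicks(image, initial)

    # Each interaction set is encoded as a geodesic distance map.
    fg_map = geodesic_map(image, fg_clicks)
    bg_map = geodesic_map(image, bg_clicks)

    # Stage 2: R-Net refines using the image, the initial segmentation,
    # and both distance maps as extra input channels.
    return rnet(image, initial, fg_map, bg_map)
```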

☺︎ CRF-Net

(Figure: CRF-RNN)

https://arxiv.org/abs/1502.03240

CRF-Net(f) is connected to P-Net, and CRF-Net(fu) is connected to R-Net.

☺︎ Geodesic Distance Maps

(Figure: geodesic distance map)

https://towardsdatascience.com/preserving-geodesic-distance-for-non-linear-datasets-isomap-d24a1a1908b2

User interactions with the same label (foreground or background) are converted into a geodesic distance map.
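To illustrate the idea (this is not the repo's GeodisTK-based implementation), a geodesic distance map on a 2-D intensity grid can be computed with Dijkstra's algorithm, where stepping between neighbours costs more when the intensity changes. The function name and the exact cost formula below are our choices for the sketch:

```python
import heapq
import math

def geodesic_distance_map(image, seeds, lam=1.0):
    """Geodesic distance from a set of seed pixels on a 2-D grid.

    Edge cost between 4-neighbours mixes spatial distance and intensity
    difference: cost = sqrt(1 + (lam * dI)**2), one common formulation.
    `image` is a list of lists of intensities; `seeds` a list of (r, c).
    """
    rows, cols = len(image), len(image[0])
    dist = [[math.inf] * cols for _ in range(rows)]
    heap = []
    for r, c in seeds:
        dist[r][c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                di = image[nr][nc] - image[r][c]
                nd = d + math.sqrt(1.0 + (lam * di) ** 2)
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist
```

Pixels that are far from every click along any low-contrast path get a large distance, which is exactly what lets R-Net weight the user's corrections spatially.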

☺︎ BraTS Dataset

(Figure: BraTS imaging modalities)

https://arxiv.org/pdf/2107.02314.pdf

We only use the T2-FLAIR (panel C) images in BraTS 2021 and segment the whole tumor.