
X2CT-GAN: Reconstructing CT from Biplanar X-Rays with Generative Adversarial Networks

Introduction


This is the official code release of the CVPR 2019 paper X2CT-GAN: Reconstructing CT from Biplanar X-Rays with Generative Adversarial Networks. In the paper, we proposed a novel method to reconstruct CT from two orthogonal X-ray images using a generative adversarial network (GAN). A specially designed generator network increases the data dimension from 2D (X-rays) to 3D (CT), which had not been addressed in previous work. This release provides the complete source code, trained models, and the related LIDC data used in our experiments, so you can validate our method as well as several baselines. You can also use the source code to process the data and retrain all the networks.
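
To make the dimension-increasing idea concrete, below is a minimal, illustrative PyTorch sketch of a generator that lifts 2D X-ray features into a 3D volume. This is not the network from the paper; all layer choices, sizes, and names are assumptions for illustration only.

    # Illustrative 2D-to-3D generator sketch (NOT the paper's architecture).
    import torch
    import torch.nn as nn

    class ToyX2CTGenerator(nn.Module):
        def __init__(self):
            super().__init__()
            # 2D encoder: compress a 1-channel 128x128 X-ray into feature maps.
            self.encoder2d = nn.Sequential(
                nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(True),    # -> 32 x 64 x 64
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(True),   # -> 64 x 32 x 32
                nn.Conv2d(64, 256, 4, stride=2, padding=1), nn.ReLU(True),  # -> 256 x 16 x 16
            )
            # 3D decoder: upsample a small 3D feature volume to a CT-like volume.
            self.decoder3d = nn.Sequential(
                nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(True),  # -> 32^3
                nn.ConvTranspose3d(8, 4, 4, stride=2, padding=1), nn.ReLU(True),   # -> 64^3
                nn.ConvTranspose3d(4, 1, 4, stride=2, padding=1), nn.Tanh(),       # -> 128^3
            )

        def forward(self, xray):                      # xray: (B, 1, 128, 128)
            feat2d = self.encoder2d(xray)             # (B, 256, 16, 16)
            # Dimension lift: reinterpret the channel axis as (channels, depth).
            feat3d = feat2d.view(-1, 16, 16, 16, 16)  # (B, 16, 16, 16, 16)
            return self.decoder3d(feat3d)             # (B, 1, 128, 128, 128)

    vol = ToyX2CTGenerator()(torch.randn(1, 1, 128, 128))
    print(vol.shape)  # torch.Size([1, 1, 128, 128, 128])

The actual X2CT-GAN generator is considerably more elaborate (and the multi-view variant fuses features from two views); see lib/model in this repository for the real definitions.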

License

This work is released under the GPLv3 license (refer to the LICENSE file for more details).

Citing our work

@InProceedings{Ying_2019_CVPR,
  author = {Ying, Xingde and Guo, Heng and Ma, Kai and Wu, Jian and Weng, Zhengxin and Zheng, Yefeng},
  title = {X2CT-GAN: Reconstructing CT From Biplanar X-Rays With Generative Adversarial Networks},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2019}
}

Contents


  1. Requirements
  2. Installation
  3. Code Structure
  4. Demo
  5. Results
  6. TODO
  7. Acknowledgement

Requirements


  1. PyTorch (versions >= 0.4 have been tested)
  2. Python 3.6 (tested)
  3. Python dependencies: see the requirements.txt file
  4. CUDA 8.0 and cuDNN 7.0 (tested)

Installation
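
The detailed installation steps were not carried over into this README. As a minimal sketch, assuming a Python 3.6 environment with pip and a PyTorch build matching the tested CUDA 8.0 / cuDNN 7.0 versions above, the remaining dependencies can be installed with:

    pip3 install -r CTGAN/requirements.txt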


Code Structure


CTGAN/:
   |--data/: all the preprocessed data and the train/test split used in our original experiments
   |    |--LIDC-HDF5-256/: the raw data as .h5 files
   |    |--train.txt: training file list
   |    |--test.txt: test file list
   |
   |--experiment/: experiment configuration folder
   |    |--multiView2500/: multi-view experiment configuration file
   |    |--singleView2500/: single-view experiment configuration file
   |
   |--lib/: all the dependency source code
   |    |--config/: the config file
   |    |--dataset/: source code to process the data
   |    |--model/: network definitions and loss definitions
   |    |--utils/: utility functions
   |
   |--save_models/: our trained models
   |    |--multiView_CTGAN/: biplanar X-rays to CT model
   |    |--singleView_CTGAN/: single X-ray to CT model
   |
   |--test.py: test script that demonstrates the inference workflow and outputs the metric results
   |--train.py: training script that trains the models
   |--visual.py: same working mechanism as test.py, but visualizes the output instead of computing statistics
   |--requirements.txt: Python dependency libraries
CT2XRAY/: (will be released soon) converts CT volumes to the X-ray images used as training input
XRAY_TRANSFER/: (will be released soon) CycleGAN-based pipeline that makes the synthesized X-ray images more realistic
images/: markdown support images
LICENSE
README.md
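
Since the preprocessed LIDC data ships as HDF5 files, you can quickly inspect one with the h5py package. A minimal sketch (the filename below is hypothetical, and the key names inside the files are not documented here):

    # List every dataset stored in one preprocessed LIDC .h5 file.
    import h5py

    with h5py.File("./data/LIDC-HDF5-256/some_case.h5", "r") as f:  # hypothetical filename
        f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))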

Demo


Input Arguments

Test our Models

Please use the following example settings to test our trained models.

  1. Single-view Input Parameters for Test Script:
    python3 test.py --ymlpath=./experiment/singleview2500/d2_singleview2500.yml --gpu=0 --dataroot=./data/LIDC-HDF5-256 --dataset=test --tag=d2_singleview2500 --data=LIDC256 --dataset_class=align_ct_xray_std --model_class=SingleViewCTGAN --datasetfile=./data/test.txt --resultdir=./singleview --check_point=30 --how_many=3
  2. Multi-view Input Parameters for Test Script:
    python3 test.py --ymlpath=./experiment/multiview2500/d2_multiview2500.yml --gpu=0 --dataroot=./data/LIDC-HDF5-256 --dataset=test --tag=d2_multiview2500 --data=LIDC256 --dataset_class=align_ct_xray_views_std --model_class=MultiViewCTGAN --datasetfile=./data/test.txt --resultdir=./multiview --check_point=90 --how_many=3
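
A reading of these flags (they are not documented in this README, so treat this as interpretation): --resultdir sets where outputs are written, --check_point selects which saved epoch to load (30 and 90 appear to match the released single-view and multi-view checkpoints, respectively), and --how_many caps the number of test cases processed.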

Train from Scratch

Please use the following example settings to train your model.

  1. Single-view Input Parameters for Training Script:
    python3 train.py --ymlpath=./experiment/singleview2500/d2_singleview2500.yml --gpu=0,1,2,3 --dataroot=./data/LIDC-HDF5-256 --dataset=train --tag=d2_singleview2500 --data=LIDC256 --dataset_class=align_ct_xray_std --model_class=SingleViewCTGAN --datasetfile=./data/train.txt --valid_datasetfile=./data/test.txt --valid_dataset=test
  2. Multi-view Input Parameters for Training Script:
    python3 train.py --ymlpath=./experiment/multiview2500/d2_multiview2500.yml --gpu=0,1,2,3 --dataroot=./data/LIDC-HDF5-256 --dataset=train --tag=d2_multiview2500 --data=LIDC256 --dataset_class=align_ct_xray_views_std --model_class=MultiViewCTGAN --datasetfile=./data/train.txt --valid_datasetfile=./data/test.txt --valid_dataset=test
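
Note that --gpu=0,1,2,3 appears to request data-parallel training across four GPUs; shorten the list to match your hardware. The --valid_datasetfile and --valid_dataset flags reuse the test split for validation during training.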

Results


Qualitative results from our original paper.

TODO


Acknowledgement


We thank the creators of the public LIDC-IDRI dataset, on which our algorithm was built.