This repository contains the official implementation of the paper: "Prompted Contextual Transformer for Incomplete-View CT Reconstruction"
We build a robust and transferable network, named ProCT, that can reconstruct degraded CT images across a wide range of incomplete-view CT settings with a single model in a single pass, by leveraging multi-setting synergy during training.
Promising computed tomography (CT) techniques for sparse-view and limited-angle scenarios can reduce the radiation dose, shorten the data acquisition time, and allow irregular and flexible scanning. Yet, these two scenarios involve multiple different settings that vary in view number or angular range, ultimately introducing complex artifacts into the reconstructed images. Existing CT reconstruction methods tackle these scenarios and/or settings in isolation, overlooking the synergy among them that could yield better robustness and transferability in clinical practice. In this paper, we frame these diverse settings as a unified incomplete-view CT problem and propose a novel Prompted Contextual Transformer (ProCT) to harness the multi-setting synergy across these incomplete-view CT settings, thereby achieving more robust and transferable CT reconstruction. The novelty of ProCT is two-fold. First, we devise projection view-aware prompting to provide setting-discriminative information, enabling a single ProCT to handle diverse settings. Second, we propose artifact-aware contextual learning to sense artifact-pattern knowledge from in-context image pairs, making ProCT capable of accurately removing complex, unseen artifacts. Extensive experimental results on two public clinical CT datasets demonstrate (i) superior performance of ProCT over state-of-the-art methods---including single-setting models---on a wide range of settings, (ii) strong transferability to unseen datasets and scenarios, and (iii) improved performance when integrating sinogram data.
We build our model on the torch-radon toolbox, which provides highly efficient and differentiable tomography transformations. There are an official V1 repository and an unofficial but better-maintained V2 repository: V1 works with older PyTorch/CUDA versions (torch <= 1.7, CUDA <= 11.3), while V2 supports newer versions. Below is a walkthrough for installing the toolbox in either case.
For torch-radon V1 (older PyTorch/CUDA):
Step 1: build a new environment
conda create -n tr37 python==3.7
conda activate tr37
Step 2: set up basic pytorch
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch
or
pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
Step 3: download the toolbox code from https://github.com/matteo-ronchetti/torch-radon (master branch), cd into the directory, and run the setup:
python setup.py install
Step 4: install other related packages
conda install -c astra-toolbox astra-toolbox
conda install matplotlib
pip install einops
pip install opencv-python
conda install pillow
conda install scikit-image
conda install scipy==1.6.0
conda install wandb
conda install tqdm
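After completing these steps, you can sanity-check the installation. Below is a minimal sketch, assuming the standard torch-radon V1 API (Radon, forward, filter_sinogram, backprojection); it is only an illustration, not code from this repository:

import numpy as np
import torch
from torch_radon import Radon

# Forward-project a dummy image and reconstruct it with FBP on the GPU.
angles = np.linspace(0, np.pi, 180, endpoint=False)
radon = Radon(256, angles)
x = torch.zeros(1, 256, 256, device="cuda")
sinogram = radon.forward(x)                                   # expect shape (1, 180, det_count)
recon = radon.backprojection(radon.filter_sinogram(sinogram))
print(sinogram.shape, recon.shape)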
For torch-radon V2 (newer PyTorch/CUDA):
Step 1: build a new environment
conda create -n tr39 python==3.9
conda activate tr39
Step 2: set up basic pytorch
conda install pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia
Step 3: download the toolbox code from https://github.com/carterbox/torch-radon, cd into the directory, and run the setup:
python setup.py install
Step 4: install other related packages (same as above)
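For reference, incomplete-view inputs like those studied in the paper can be simulated with torch-radon simply by changing the projection angles: sparse-view CT keeps a subset of equally spaced views, while limited-angle CT keeps a restricted angular range. The following is an illustrative sketch using the V1-style API (Radon, forward, filter_sinogram, backprojection); it is not the exact data pipeline of this repository:

import numpy as np
import torch
from torch_radon import Radon

image_size = 256
image = torch.randn(1, image_size, image_size, device="cuda")  # stand-in for a CT slice

def degraded_fbp(img, angles):
    # Project over the given angles and reconstruct with filtered back-projection.
    radon = Radon(image_size, angles, clip_to_circle=True)
    sino = radon.forward(img)
    return radon.backprojection(radon.filter_sinogram(sino))

# Sparse-view: e.g. only 72 equally spaced views over 180 degrees.
sparse_input = degraded_fbp(image, np.linspace(0, np.pi, 72, endpoint=False))

# Limited-angle: e.g. views restricted to a [0, 90) degree range.
limited_input = degraded_fbp(image, np.linspace(0, np.pi / 2, 180, endpoint=False))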
We use the DeepLesion dataset and the AAPM-Mayo 2016 dataset in our experiments. The DeepLesion dataset can be downloaded from here, and the AAPM dataset can be downloaded from the Clinical Innovation Center (or the box link).
After downloading these two datasets, please arrange the DeepLesion dataset as follows:
__path/to/your/deeplesion/data
|__000001_01_01
| |__103.png
| |__104.png
| |__...
|
|__000001_02_01
| |__008.png
| |__009.png
| |__...
|
|__...
and arrange the AAPM dataset as follows:
__path/to/your/aapm/data
|__L067_FD_1_1.CT.0001.0001.2015.12.22.18.09.40.840353.358074219.npy
|__L067_FD_1_1.CT.0001.0002.2015.12.22.18.09.40.840353.358074243.npy
|__...
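For reference, DeepLesion slices are 16-bit PNGs whose pixel values are Hounsfield units offset by 32768 (the dataset's documented convention), while the AAPM files above are plain NumPy arrays. A minimal loading sketch (illustrative only, not this repo's dataset code; file names are taken from the trees above):

import numpy as np
from PIL import Image

def load_deeplesion_slice(png_path):
    # Subtract the 32768 offset to recover Hounsfield units.
    return np.array(Image.open(png_path)).astype(np.float32) - 32768.0

def load_aapm_slice(npy_path):
    return np.load(npy_path).astype(np.float32)

hu_slice = load_deeplesion_slice("path/to/your/deeplesion/data/000001_01_01/103.png")
aapm_slice = load_aapm_slice("path/to/your/aapm/data/L067_FD_1_1.CT.0001.0001.2015.12.22.18.09.40.840353.358074219.npy")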
Finally, replace the global variables (DEEPL_DIR and AAPM_DIR) in datasets/lowlevel_ct_dataset.py with your own path/to/your/deeplesion/data and path/to/your/aapm/data!
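For example (the paths below are placeholders to replace with your own):

# In datasets/lowlevel_ct_dataset.py
DEEPL_DIR = "path/to/your/deeplesion/data"
AAPM_DIR = "path/to/your/aapm/data"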
Once the environments and datasets are ready, you can check the basic forwarding process of ProCT in ./demo.ipynb. The checkpoint file is provided on the Releases page.
UPDATE. Since some users have trouble installing the torch-radon package, we provide pre-computed in-context pairs in ./samples as well as a simpler demo in ./demo_easy.ipynb, where the code does not require torch-radon at all!
Once the environments and datasets are ready, you can train/test ProCT using the scripts in train.sh and test.sh.
Big thanks to the authors of these great works for their insights and open-source code!
If you find our work and code helpful, please kindly cite our paper :)
@article{ma2023proct,
title={Prompted Contextual Transformer for Incomplete-View CT Reconstruction},
author={Ma, Chenglong and Li, Zilong and He, Junjun and Zhang, Junping and Zhang, Yi and Shan, Hongming},
journal={arXiv preprint arXiv:2312.07846},
year={2023}
}