# ADN: Artifact Disentanglement Network for Unsupervised Metal Artifact Reduction

By Haofu Liao (liaohaofu@gmail.com), Spring 2019.

If you use this code for your research, please cite our paper:
```bibtex
@inproceedings{adn2019_miccai,
  title={Artifact Disentanglement Network for Unsupervised Metal Artifact Reduction},
  author={Liao, Haofu and Lin, Wei-An and Yuan, Jianbo and Zhou, S. Kevin and Luo, Jiebo},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)},
  year={2019}
}

@article{adn2019_tmi,
  author={H. {Liao} and W. {Lin} and S. K. {Zhou} and J. {Luo}},
  journal={IEEE Transactions on Medical Imaging},
  title={ADN: Artifact Disentanglement Network for Unsupervised Metal Artifact Reduction},
  year={2019},
  doi={10.1109/TMI.2019.2933425}
}
```
## Prerequisites

This repository is tested under the following system settings:
## Install ADN

Most users can install ADN locally with the following steps:

```shell
git clone https://github.com/liaohaofu/adn.git
cd adn
pip install -r requirements.txt
```
For Docker users, we provide a pre-built Docker image as well as a Dockerfile. To use the pre-built image:

```shell
docker pull liaohaofu/adn
```

Alternatively, you may modify `docker/Dockerfile` as needed and then build a Docker image yourself:

```shell
cd docker/
docker build -t liaohaofu/adn .
```

To run the Docker image:

```shell
docker run -it --runtime=nvidia liaohaofu/adn
```
## Prepare datasets

Two publicly available datasets (DeepLesion and Spineweb) are supported. As a courtesy, we also support training/testing with natural images.
### DeepLesion

Download the DeepLesion dataset. You may use the `batch_download_zips.py` script provided by DeepLesion to batch download the `.zip` files at once. After extraction, all images will be under the folder `path_to_DeepLesion/Images_png/`, where `path_to_DeepLesion` is the folder in which you extracted the `.zip` files.

Create a symbolic link to DeepLesion. Make sure to change `path_to_DeepLesion/Images_png` to the actual path on your system before running the following command (alternatively, you may modify the dataset path in `config/dataset.yaml`):

```shell
ln -s path_to_DeepLesion/Images_png data/deep_lesion/raw
```

Then prepare the DeepLesion dataset for ADN:

```shell
python prepare_deep_lesion.py
```
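As background for the preparation step: DeepLesion stores its CT slices as unsigned 16-bit PNGs with the intensities offset by 32768, so preprocessing typically subtracts that offset to recover Hounsfield units (HU) and then applies a display window. A minimal sketch of this convention (the `png_to_hu` and `window` helpers are illustrative, not part of this repository):

```python
# DeepLesion pixel convention: stored value = HU + 32768 (unsigned 16-bit).
DEEP_LESION_OFFSET = 32768

def png_to_hu(pixel_value: int) -> int:
    """Convert a raw DeepLesion PNG pixel value to Hounsfield units."""
    return pixel_value - DEEP_LESION_OFFSET

def window(hu: float, center: float = 40.0, width: float = 400.0) -> float:
    """Clamp a HU value into a display window and scale it to [0, 1]."""
    low, high = center - width / 2, center + width / 2
    hu = min(max(hu, low), high)
    return (hu - low) / (high - low)
```

The window center/width defaults here are only an example; the actual values used for training are set by the preparation script and config files.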
### Spineweb

Download the Spineweb dataset (`spine-1.zip`, `spine-2.zip`, etc.). After extraction, all images will be under the folder `path_to_Spineweb/spine-*/`, where `path_to_Spineweb` is the folder in which you extracted the `spine-*.zip` files.

Create a symbolic link to Spineweb. Make sure to change `path_to_Spineweb/` to the actual path on your system before running the following commands (alternatively, you may modify the dataset path in `config/dataset.yaml`):

```shell
mkdir data/spineweb
ln -s path_to_Spineweb/ data/spineweb/raw
```

Then prepare the Spineweb dataset for ADN:

```shell
python prepare_spineweb.py
```
### Natural image dataset

Organize your dataset with the following folder structure:

```
your_dataset
├── test
│   ├── artifact      # all the testing images with artifact
│   └── no_artifact   # all the testing images without artifact
└── train
    ├── artifact      # all the training images with artifact
    └── no_artifact   # all the training images without artifact
```

Then create a symbolic link to your dataset:

```shell
ln -s path_to_your_dataset data/nature_image
```
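Before creating the symbolic link, it can help to sanity-check that your folders match the layout above. A minimal sketch (the `check_layout` helper is hypothetical, not part of this repository):

```python
from pathlib import Path

# The four sub-folders required by the dataset layout described above.
REQUIRED = [
    "train/artifact", "train/no_artifact",
    "test/artifact", "test/no_artifact",
]

def check_layout(root: str) -> list:
    """Return the required sub-folders that are missing under `root`."""
    base = Path(root)
    return [sub for sub in REQUIRED if not (base / sub).is_dir()]
```

Running `check_layout` on your dataset root and confirming it returns an empty list catches misnamed or missing folders before training starts.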
## Demo

The sample images for the demo can be found under `samples/`, and the outputs of the demo can be found under `results/`. To run the demo:

```shell
python demo.py deep_lesion
python demo.py spineweb
```
[Optional] By default, the demo code downloads the pretrained models from Google Drive automatically. If the download fails, you may download them from Google Drive manually and move them under `runs/`:

```shell
mv path_to_DeepLesion_model runs/deep_lesion/deep_lesion_49.pt
mv path_to_Spineweb_model runs/spineweb/spineweb_39.pt
```
## Train and test ADN

Configure the training and testing. We use a two-stage configuration for ADN: one file for the default settings and another for the run settings.

- The default settings can be found in `config/adn.yaml` and are not supposed to be changed. When users do not provide a value for a specific setting, the default from this file is used.
- The run settings can be found in `runs/adn.yaml`. This is where users provide specific settings for ADN's training and testing. Any setting provided in this file overrides the corresponding default during the experiments. By default, the settings for training and testing ADN with the DeepLesion, Spineweb, and natural image datasets are provided in `runs/adn.yaml`.
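The override behavior can be pictured as a recursive dictionary merge in which run settings take precedence over defaults. A minimal sketch (the `merge` helper and the example keys are illustrative, not the repository's actual config loader):

```python
def merge(defaults: dict, overrides: dict) -> dict:
    """Return `defaults` updated with `overrides`, recursing into nested dicts."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge(merged[key], value)  # merge nested sections
        else:
            merged[key] = value  # a run setting wins over the default
    return merged

# Example with hypothetical keys: only batch_size is overridden,
# the other defaults are kept.
defaults = {"lr": 1e-4, "model": {"name": "adn", "batch_size": 1}}
run_settings = {"model": {"batch_size": 2}}
config = merge(defaults, run_settings)
```

Recursing into nested dictionaries means a run file only needs to list the few settings it changes, rather than repeating a whole section of the defaults.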
Train ADN with the DeepLesion dataset, the Spineweb dataset, or a natural image dataset. The training results (model checkpoints, configs, losses, training visualizations, etc.) can be found under `runs/run_name/`, where `run_name` is `deep_lesion`, `spineweb`, or `nature_image`:

```shell
python train.py deep_lesion
python train.py spineweb
python train.py nature_image
```
Test ADN with the DeepLesion dataset, the Spineweb dataset, or a natural image dataset. The testing results can be found under `runs/run_name/`, where `run_name` is `deep_lesion`, `spineweb`, or `nature_image`:

```shell
python test.py deep_lesion
python test.py spineweb
python test.py nature_image
```
## Acknowledgments

The authors would like to thank Dr. Yanbo Zhang (yanbozhang007@gmail.com) and Dr. Hengyong Yu (hengyong_yu@uml.edu) for providing the artifact synthesis code used in this repository.