# Airway segmentation from chest CTs using deep Convolutional Neural Networks
Contact: Antonio Garcia-Uceda Juarez (antonio.garciauceda89@gmail.com)
This software provides functionality to segment airways from CT scans using deep CNN models, in particular the U-Net. The implementation of the segmentation method is described in:

- Garcia-Uceda, A., Selvan, R., Saghir, Z., Tiddens, H.A.W.M., de Bruijne, M. Automatic airway segmentation from computed tomography using robust and efficient 3-D convolutional neural networks. Scientific Reports 11, 16001 (2021).

If this software positively influences your project, please cite the above paper.
This software includes tools to i) prepare the CT data for use with DL models, ii) perform DL experiments for training and testing, and iii) process the output of DL models to obtain the binary airway segmentation. The tools are entirely implemented in Python, and both the PyTorch and Keras/TensorFlow libraries can be used to run DL experiments.
## Project Structure

```
├── LICENSE
├── Makefile                <- Makefile with commands like `make data` or `make train`
├── README.md               <- The top-level README for developers using this project
│
├── docs                    <- A default Sphinx project; see sphinx-doc.org for details
│
├── models                  <- Trained and serialized models, model predictions, or model summaries
│
├── requirements.txt        <- The requirements file for reproducing the analysis environment, e.g.
│                              generated with `pip freeze > requirements.txt`
│
├── scripts_launch          <- Scripts with pipelines and PBS scripts to run in clusters
│
├── setup.py                <- Makes project pip installable (`pip install -e .`) so src can be imported
├── src                     <- Source code for use in this project
│   │
│   ├── common              <- General files and utilities
│   ├── dataloaders         <- Modules to load data and batch generators
│   ├── imageoperators      <- Various image operations
│   ├── models              <- All modules to define networks, metrics and optimizers
│   ├── plotting            <- Various plotting modules
│   ├── postprocessing      <- Modules to postprocess the output of networks
│   ├── preprocessing       <- Modules to preprocess the images to feed to networks
│   │
│   ├── scripts_evalresults <- Scripts to evaluate results from models
│   ├── scripts_experiments <- Scripts to train and test models
│   ├── scripts_preparedata <- Scripts to prepare data to train models
│   └── scripts_util        <- Scripts for various utilities
│
├── tests                   <- Tests to validate the method implementation (to be run locally)
└── tox.ini                 <- tox file with settings for running tox; see tox.readthedocs.io
```
Project structure based on the [cookiecutter data science](https://drivendata.github.io/cookiecutter-data-science/) project template.
(It is recommended to install the required Python packages inside a virtualenv.)
## Prepare Data Directory

Before running the scripts, the user needs to prepare the data directory with the following structure:
```
├── Images                   <- Store CT scans (in DICOM or NIfTI format)
├── Airways                  <- Store reference airway segmentations
├── Lungs (optional)         <- Store lung segmentations
│                               (used in options i) mask ground-truth to ROI, and ii) crop images)
└── CoarseAirways (optional) <- Store segmentations of the trachea and main bronchi
                                (used in option to attach the trachea and main bronchi to predictions)
```
## Prepare Working Directory

The user needs to prepare the working directory in the desired location, as follows:
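A minimal sketch of this setup, assuming the scripts expect symlinks named `BaseData` (to the data directory) and `Code` (to this repository) inside the working directory; these names are assumptions, so adapt them to your setup:

```bash
mkdir <path_work_dir> && cd <path_work_dir>
ln -s <path_data_dir> BaseData    # link to the data directory prepared above (assumed name)
ln -s <path_this_repo> Code       # link to this repository (assumed name)
```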
## Run the Scripts

The user only needs to run the scripts in the directories "scripts_evalresults", "scripts_experiments", "scripts_launch", "scripts_preparedata" and "scripts_util". Each script performs a separate and well-defined operation, either to i) prepare data, ii) run experiments, or iii) evaluate results.
The scripts are called on the command line as follows:

```bash
python <path_script> <input_args> --<optional_args>
```
Optional arguments not indicated on the command line take the default values set in the source file with the default settings (under "src/common/").
(IMPORTANT) Set the variable PYTHONPATH to include the path of this code, as follows:
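For example, assuming the repository was cloned to `<path_this_repo>` and the modules are imported from its `src/` folder:

```bash
export PYTHONPATH=<path_this_repo>/src/
```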
## Prepare Data

1. From the data directory above, create the working data used for training / testing:
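A sketch of the call: the script "prepare_data.py" is referenced later in this README and lives in "scripts_preparedata", while the argument name below is an assumption:

```bash
python ./src/scripts_preparedata/prepare_data.py --datadir=<path_data_dir>
```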
Several preprocessing operations can be applied in this script, via its optional arguments, such as i) masking the ground-truth to a region of interest (the lungs), ii) cropping the images around the lungs, and iii) rescaling the images.
If using the option to crop images, compute the bounding boxes of the lung masks prior to running the script above:
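A sketch, assuming a helper script in "scripts_preparedata" computes the bounding boxes; the script and argument names are assumptions:

```bash
python ./src/scripts_preparedata/compute_boundingbox_images.py --datadir=<path_data_dir>
```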
## Train Models

1. Distribute the working data in training / validation / testing:
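A sketch, assuming a distribution script in "scripts_experiments"; the script and argument names are assumptions:

```bash
python ./src/scripts_experiments/distribute_data.py --basedir=<path_work_dir>
```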
2. Launch a training experiment:
OR restart a previous training experiment:
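A sketch of both calls, assuming a training script in "scripts_experiments"; the script name, arguments and restart flag are assumptions:

```bash
# launch a new training experiment
python ./src/scripts_experiments/train_model.py --basedir=<path_work_dir> --modelsdir=<path_output_models>

# restart a previous experiment from the stored models (flag name is an assumption)
python ./src/scripts_experiments/train_model.py --basedir=<path_work_dir> --modelsdir=<path_stored_models> --is_restart=True
```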
## Test Models

1. Compute probability maps from a trained model:
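A sketch, assuming a prediction script in "scripts_experiments" that takes the trained model and an output folder; the names are assumptions:

```bash
python ./src/scripts_experiments/predict_model.py <path_trained_model> <path_output_probmaps> --basedir=<path_work_dir>
```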
The output probability maps have the same format and dimensions as the working data used for testing, which typically differ from those of the original data (when using the preprocessing options of the script "prepare_data.py" above).
2. Compute the probability maps in the format and dimensions of the original data:
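A sketch, assuming a postprocessing script in "scripts_evalresults" that undoes the preprocessing applied to the working data; the names are assumptions:

```bash
python ./src/scripts_evalresults/postprocess_predictions.py <path_input_probmaps> <path_output_probmaps> --basedir=<path_work_dir>
```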
3. Compute the binary masks of the airways from the probability maps:
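A sketch, assuming a script in "scripts_evalresults" that thresholds the probability maps into a binary airway tree; the names are assumptions:

```bash
python ./src/scripts_evalresults/process_predicted_airway_tree.py <path_input_probmaps> <path_output_binmasks> --basedir=<path_work_dir>
```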
4. (If needed) Compute the largest connected component of the airway binary masks:

```bash
# a sketch reconstructing the truncated commands: the utility script and the
# operation name "firstconreg" are assumptions (run with "--help" to list the options)
python ./src/scripts_util/apply_operation_images.py <path_binmasks>/ <path_connected_binmasks>/ --type=firstconreg
rm -r <path_binmasks>    # then remove the original masks, if no longer needed
```
5. Compute airway centrelines from airway binary masks:
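A sketch, assuming the same utility script used in step 4 provides a thinning operation; the operation name is an assumption (run with "--help" to list the options):

```bash
python ./src/scripts_util/apply_operation_images.py <path_binmasks>/ <path_centrelines>/ --type=thinning
```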
6. Compute the desired metrics from the results:
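A sketch, assuming an evaluation script in "scripts_evalresults" that compares the predicted masks and centrelines with the reference segmentations; the names are assumptions:

```bash
python ./src/scripts_evalresults/compute_result_metrics.py <path_predicted_binmasks> <path_predicted_centrelines> --basedir=<path_work_dir>
```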
## Apply Operations to Images

The user can apply various operations to input images / masks, such as i) binarising masks, ii) masking images to a ROI, and iii) rescaling images, as follows:
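A sketch of the generic call, assuming the utility script in "scripts_util"; the script name and argument layout are assumptions:

```bash
python ./src/scripts_util/apply_operation_images.py <path_input_files> <path_output_files> --type=<operation>
```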
Some operations require extra input arguments. To list the available operations and their required arguments, run the script with "--help".
## Trained Models

We provide with this software a trained U-Net model, which we used for evaluation on the public EXACT'09 dataset. You can use this model to compute airway segmentations on your own CT data. To do this:

1. Prepare a folder with your own data, following the steps above in "Prepare Data Directory" (the "Airways" folder is not needed).
2. Prepare a working directory, following the steps above in "Prepare Working Directory", and copy there the folder "models" from this repo.
3. Run the script: `bash models/run_model_trained.sh <path_input_data> <path_output_results>` (the two arguments are indicative; check the script itself for its exact interface).
We also provide a trained model using Tf-Keras instead of PyTorch. To use this one:

1. Set `TYPE_DNNLIB_USED = 'Keras'` in the source file with the default settings (under "src/common/").
2. Repeat the steps above, but with the flag `--keras` instead of `--torch` in step 3.
## Docker Image

We also provide a docker image with which you can evaluate the trained model on your own CT data within a docker container. To do this:

1. Prepare a folder with your own data.
2. Pull our pre-built docker image: `sudo docker pull antonioguj/bronchinet:stable_torch`.
3. Run the script: `bash run_docker_models.sh <path_input_data> <path_output_results>` (the two arguments are indicative; check the script itself for its exact interface).