Public PyTorch implementation of our paper Scribble-based Domain Adaptation via Co-segmentation, accepted for presentation at MICCAI 2020.
If you find this code useful for your research, please cite the following paper:
@article{ScribDA2020Dorent,
  author={Dorent, Reuben and Joutard, Samuel and Shapey, Jonathan and Bisdas, Sotirios and Kitchen, Neil and Bradford, Robert and Saeed, Shakeel and Modat, Marc and Ourselin, S\'ebastien and Vercauteren, Tom},
  title={Scribble-based Domain Adaptation via Co-segmentation},
  journal={MICCAI},
  year={2020},
}
The code is implemented in Python 3.6 using the PyTorch library. Install all requirements using:
pip install -r requirements.txt
Then compile and install the CUDA implementation of the Permutohedral Attention Module:
cd ./Permutohedral_attention_module/PAM_cuda/
python3 setup.py build
python3 setup.py install
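Before building the extension, it may help to confirm that PyTorch is installed and a CUDA device is visible; this quick check is only a suggestion and is not part of the original instructions:
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"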
The dataset (images, annotations, scribbles) used for our experiments will be released soon.
train.py is the main script for training the model. Example usage:
python3 train.py \
-model_dir ./models/$ALPHA/$BETA/$GAMMA/$WARMUP/ \
-alpha $ALPHA \
-beta $BETA \
-gamma $GAMMA \
-warmup $WARMUP \
-path_source $SOURCE \
-path_target $TARGET \
-dataset_split_source $SPLIT_SOURCE \
-dataset_split_target $SPLIT_TARGET
Where the hyperparameters and paths are defined as follows:
$ALPHA: Image-specific spatial kernel for smoothing differences in coordinates. Typical value: 15.
$BETA: Image-specific spatial kernel for smoothing differences in intensities. Typical value: 0.05.
$GAMMA: Cross-domain spatial kernel for smoothing differences in feature values. Typical value: 0.1.
$WARMUP: Number of initialisation epochs without the cross-modality regularisation.
$SOURCE: Path to the source data.
$TARGET: Path to the target data.
$SPLIT_SOURCE: CSV file with source image IDs and their group ('inference' or 'training').
$SPLIT_TARGET: CSV file with target image IDs and their group ('inference' or 'training').
To use your own data, you only need to change the source and target paths, the dataset splits, and potentially the modality used.
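For convenience, a minimal launch script might look like the sketch below. The kernel values mirror the typical values listed above; the warm-up value, data paths, split filenames, and CSV column layout are illustrative assumptions, not values prescribed by this repository:
#!/bin/bash
# Hyperparameters (ALPHA/BETA/GAMMA follow the typical values above; WARMUP is a guess to tune).
ALPHA=15
BETA=0.05
GAMMA=0.1
WARMUP=10

# Data locations and dataset splits (assumed paths; adapt to your own setup).
SOURCE=/data/source
TARGET=/data/target
SPLIT_SOURCE=./splits/source.csv
SPLIT_TARGET=./splits/target.csv
# Each split CSV lists image IDs and their group, e.g. (column names assumed):
#   id,group
#   case_001,training
#   case_002,inference

python3 train.py \
  -model_dir ./models/$ALPHA/$BETA/$GAMMA/$WARMUP/ \
  -alpha $ALPHA \
  -beta $BETA \
  -gamma $GAMMA \
  -warmup $WARMUP \
  -path_source $SOURCE \
  -path_target $TARGET \
  -dataset_split_source $SPLIT_SOURCE \
  -dataset_split_target $SPLIT_TARGET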