This repository contains the TensorFlow code for our NeurIPS 2018 paper “Unsupervised Attention-guided Image-to-Image Translation”. This code is based on the TensorFlow implementation of CycleGAN provided by Harry Yang. You may need to train several times, as the quality of the results is sensitive to the initialization.
By leveraging attention, our architecture (shown in the figure below) maps only the relevant areas of the image, and in doing so further enhances the quality of image-to-image translation.
Our model architecture is depicted below; please refer to the paper for more details:
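At its core, the attention map acts as a per-pixel blending mask between the translated image and the original input: foreground pixels (attention near 1) come from the generator output, while background pixels (attention near 0) are copied from the source unchanged. A minimal sketch of this compositing step on flat pixel lists (the function name is ours, not from this codebase):

```python
def attention_guided_translate(source, translated, attention):
    # Per-pixel blend: attention weights the translated (foreground) pixel,
    # (1 - attention) keeps the original (background) pixel.
    return [a * t + (1.0 - a) * s
            for s, t, a in zip(source, translated, attention)]

# Toy example: three pixels with full, partial, and zero attention.
source = [0.0, 0.0, 0.0]
translated = [1.0, 1.0, 1.0]
print(attention_guided_translate(source, translated, [1.0, 0.5, 0.0]))
# [1.0, 0.5, 0.0]
```

In the full model the attention network is trained jointly with the generators, so the mask itself is learned rather than supplied.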
The figure below displays automatically learned attention maps on various translation datasets:
In each result figure below, the top row shows input images and the bottom row shows the mappings produced by our algorithm.
You can either download one of the default CycleGAN datasets or use your own dataset. For example, to download the horse2zebra dataset:
bash ./download_datasets.sh horse2zebra
Create the csv file used as input to the data loader. Edit the cyclegan_datasets.py
file. For example, if you have a horse2zebra_train dataset containing 1067 horse images and 1334 zebra images (both in JPG format), you can edit cyclegan_datasets.py
as follows:
DATASET_TO_SIZES = {
'horse2zebra_train': 1334
}
PATH_TO_CSV = {
'horse2zebra_train': './AGGAN/input/horse2zebra/horse2zebra_train.csv'
}
DATASET_TO_IMAGETYPE = {
'horse2zebra_train': '.jpg'
}
Then run create_cyclegan_dataset to generate the csv file:
python -m create_cyclegan_dataset --image_path_a='./input/horse2zebra/trainB' --image_path_b='./input/horse2zebra/trainA' --dataset_name="horse2zebra_train" --do_shuffle=0
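The exact csv layout is defined by the repository's create_cyclegan_dataset script; conceptually, it pairs image paths from the two domains into rows, cycling through the shorter list so every image of the larger domain is covered. A rough sketch under that assumption (function name and padding strategy are ours):

```python
import csv
import os
import random

def write_pair_csv(image_dir_a, image_dir_b, csv_path, do_shuffle=False):
    """Write one (path_a, path_b) row per line, cycling the shorter list
    so the row count equals the larger domain's image count."""
    paths_a = sorted(os.path.join(image_dir_a, f) for f in os.listdir(image_dir_a))
    paths_b = sorted(os.path.join(image_dir_b, f) for f in os.listdir(image_dir_b))
    n = max(len(paths_a), len(paths_b))
    rows = [(paths_a[i % len(paths_a)], paths_b[i % len(paths_b)])
            for i in range(n)]
    if do_shuffle:
        random.shuffle(rows)
    with open(csv_path, 'w', newline='') as f:
        csv.writer(f).writerows(rows)
```

Note that the row count written here is why DATASET_TO_SIZES above uses the larger of the two image counts (1334).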
Create the configuration file. The configuration file contains basic information for training/testing. An example configuration file can be found at configs/exp_01.json.
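The authoritative list of fields is whatever configs/exp_01.json in the repository defines; the sketch below is illustrative only, and every key name in it is our assumption:

```json
{
  "dataset_name": "horse2zebra_train",
  "base_lr": 0.0002,
  "max_step": 200,
  "pool_size": 50,
  "do_flipping": 1
}
```

Copy and adapt the shipped configs/exp_01.json rather than writing a config from scratch.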
Start training:
python main.py --to_train=1 --log_dir=./output/AGGAN/exp_01 --config_filename=./configs/exp_01.json
Check the intermediate results:
tensorboard --port=6006 --logdir=./output/AGGAN/exp_01/#timestamp#
To restore training from a previous checkpoint:
python main.py --to_train=2 --log_dir=./output/AGGAN/exp_01 --config_filename=./configs/exp_01.json --checkpoint_dir=./output/AGGAN/exp_01/#timestamp#
Create the testing dataset:
python -m create_cyclegan_dataset --image_path_a='./input/horse2zebra/testB' --image_path_b='./input/horse2zebra/testA' --dataset_name="horse2zebra_test" --do_shuffle=0
Run testing:
python main.py --to_train=0 --log_dir=./output/AGGAN/exp_01 --config_filename=./configs/exp_01_test.json --checkpoint_dir=./output/AGGAN/exp_01/#old_timestamp#