This is an implementation of the paper "Learning image-to-image translation using paired and unpaired training samples" (https://arxiv.org/pdf/1805.03189.pdf). The paper was accepted at ACCV 2018.
## Prerequisites

- Visdom and dominate
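Both packages are available on PyPI and can be installed with pip:

```bash
pip install visdom dominate
```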
## Training
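The exact training command is not listed here; the following is a sketch that mirrors the testing command below and the `--super_epoch`/`--niter` flags mentioned under Training Tips. The dataset path, experiment name, and epoch counts are placeholders, not values from the authors:

```bash
python train.py --dataroot ./datasets --model cycle_gan --dataset_mode unaligned --which_model_netG resnet_9blocks --which_direction AtoB --name mygan_70 --niter 100 --super_epoch 70
```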
## Testing
Run:

```bash
python test.py --dataroot ./datasets --model cycle_gan --dataset_mode unaligned --which_model_netG resnet_9blocks --which_direction AtoB --name mygan_70 --how_many 100
```
## Training Tips
If you have no unpaired data, set --super_epoch and --niter to the same value. The VGG loss is not yet included in the training script (it is commented out); we will update this soon. For any help, please contact us at soumya.tripathy@tuni.fi.
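For reference, below is a minimal sketch of a VGG-based perceptual loss. This is not the authors' implementation (theirs remains commented out in the training script); it is a common formulation, and the choice of VGG16 and the relu3_3 feature layer are assumptions:

```python
import torch.nn as nn
import torchvision.models as models

class VGGLoss(nn.Module):
    """L1 distance between VGG16 feature maps of generated and target images."""
    def __init__(self, num_layers=16):
        super(VGGLoss, self).__init__()
        # features[:16] keeps layers up to relu3_3 of VGG16 (an assumed choice).
        vgg = models.vgg16(pretrained=True).features[:num_layers]
        for p in vgg.parameters():
            p.requires_grad = False  # the loss network stays frozen
        self.vgg = vgg.eval()
        self.criterion = nn.L1Loss()

    def forward(self, fake, real):
        # Compare feature maps of generated and target images.
        return self.criterion(self.vgg(fake), self.vgg(real))
```

Usage would look like `loss_G = loss_G + lambda_vgg * vgg_loss(fake_B, real_B)`, where `lambda_vgg` is a hypothetical weighting hyperparameter.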
## Citation

If you use this implementation in your research, please cite:
```
@article{tripathy+kannala+rahtu,
  title={Learning image-to-image translation using paired and unpaired training samples},
  author={Tripathy, Soumya and Kannala, Juho and Rahtu, Esa},
  journal={arXiv preprint arXiv:1805.03189},
  year={2018}
}
```
## Related Work
1. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. "Generative Adversarial Networks", in NIPS 2014.
2. Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. "Image-to-Image Translation with Conditional Adversarial Networks", in CVPR 2017.
3. Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks", in ICCV 2017.
NOTE: The code borrows heavily from pix2pix.