Generative Adversarial Text-to-Image Synthesis
Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, Honglak Lee
This is the code for our ICML 2016 paper on text-to-image synthesis using conditional GANs. You can use it to train and sample from text-to-image models. The code is adapted from the excellent dcgan.torch.
You will need to install Torch, CuDNN, and the display package.
Modify the CONFIG file to point to your data and text encoder paths. Then, to train a text-to-image model on CUB, run ./scripts/train_cub.sh.
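The CONFIG step above can be sketched as follows. The variable names DATA_DIR and TXT_ENC are hypothetical placeholders, not necessarily the repo's actual keys — check the CONFIG file shipped with the code for the real ones, and edit it in place rather than overwriting it.

```shell
# Minimal sketch (hypothetical variable names; run in a scratch directory,
# not the repo root): CONFIG is a shell fragment that exports paths,
# which the train/demo scripts then source and read.
cat > CONFIG <<'EOF'
export DATA_DIR=/path/to/cub/data        # extracted dataset
export TXT_ENC=/path/to/text_encoder.t7  # pretrained text encoder
EOF
. ./CONFIG                               # source it into the current shell
echo "data: $DATA_DIR, encoder: $TXT_ENC"
```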
To generate flower samples conditioned on your own text, add descriptions to scripts/flowers_queries.txt and run ./scripts/demo_flowers.sh.
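As a concrete sketch of that flowers workflow (the two captions below are illustrative examples, not taken from the dataset):

```shell
# Append your own captions, one per line, to the query file,
# then re-run the demo script to sample images for them.
mkdir -p scripts
printf '%s\n' \
  "this flower has thin yellow petals and a lot of yellow anther in the center" \
  "a flower with long pink petals and raised orange stamen" \
  >> scripts/flowers_queries.txt
cat scripts/flowers_queries.txt
# then: ./scripts/demo_flowers.sh
```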
For birds (CUB), run ./scripts/demo_cub.sh; for COCO, run ./scripts/demo_coco.sh.
To train on COCO text, run ./scripts/train_coco_txt.sh.
If you find this useful, please cite our work as follows:
@inproceedings{reed2016generative,
title={Generative Adversarial Text-to-Image Synthesis},
author={Scott Reed and Zeynep Akata and Xinchen Yan and Lajanugen Logeswaran and Bernt Schiele and Honglak Lee},
booktitle={Proceedings of The 33rd International Conference on Machine Learning},
year={2016}
}