TensorFlow implementation of Image-to-Image Translation with Conditional Adversarial Networks (pix2pix), which learns a mapping from input images to output images.
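For reference, the objective from the paper combines a conditional adversarial loss with an L1 reconstruction loss, where x is the input image, y the target image, z noise, and λ weights the L1 term:

$$\mathcal{L}_{cGAN}(G,D) = \mathbb{E}_{x,y}[\log D(x,y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x,z)))]$$
$$G^* = \arg\min_G \max_D \mathcal{L}_{cGAN}(G,D) + \lambda\,\mathbb{E}_{x,y,z}[\|y - G(x,z)\|_1]$$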
Here are some results generated by the authors of the paper:
# Clone this repo
git clone git@github.com:yenchenlin/pix2pix-tensorflow.git
cd pix2pix-tensorflow

# Download the CMP Facades dataset
bash ./download_dataset.sh facades

# Train the model
python main.py --phase train

# Test the model
python main.py --phase test
Here are the results generated by this implementation:
Facades:
More results on other datasets coming soon!
Note: To avoid fast convergence of the D (discriminator) network, the G (generator) network is updated twice for each D network update. This differs from the original paper but follows DCGAN-tensorflow, on which this project is based.
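A minimal sketch of this update schedule on a toy graph (TF 1.x style, matching this codebase; the toy G, D, losses, and hyperparameters here are placeholders for illustration, not the repo's actual model):

import numpy as np
import tensorflow as tf  # assumes TF 1.x, as used by this repo

x = tf.placeholder(tf.float32, [None, 4])  # stand-in for input images
y = tf.placeholder(tf.float32, [None, 4])  # stand-in for target images

with tf.variable_scope("g"):
    fake = tf.layers.dense(x, 4)               # toy generator
with tf.variable_scope("d"):
    d_real = tf.layers.dense(y, 1, name="fc")  # toy discriminator
with tf.variable_scope("d", reuse=True):
    d_fake = tf.layers.dense(fake, 1, name="fc")

bce = tf.nn.sigmoid_cross_entropy_with_logits
d_loss = tf.reduce_mean(bce(logits=d_real, labels=tf.ones_like(d_real))) \
       + tf.reduce_mean(bce(logits=d_fake, labels=tf.zeros_like(d_fake)))
g_loss = tf.reduce_mean(bce(logits=d_fake, labels=tf.ones_like(d_fake)))

d_vars = [v for v in tf.trainable_variables() if v.name.startswith("d/")]
g_vars = [v for v in tf.trainable_variables() if v.name.startswith("g/")]
d_optim = tf.train.AdamOptimizer(2e-4, beta1=0.5).minimize(d_loss, var_list=d_vars)
g_optim = tf.train.AdamOptimizer(2e-4, beta1=0.5).minimize(g_loss, var_list=g_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(100):
        feed = {x: np.random.rand(8, 4), y: np.random.rand(8, 4)}
        sess.run(d_optim, feed_dict=feed)  # one D update ...
        sess.run(g_optim, feed_dict=feed)  # ... then two G updates,
        sess.run(g_optim, feed_dict=feed)  # so D doesn't converge too fast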
The code currently supports the CMP Facades dataset. Reproducing the results shown above takes 200 epochs of training; the exact wall-clock time depends on your hardware.
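If you want to set the epoch count explicitly, the training script likely accepts a flag for it (the flag name below follows DCGAN-tensorflow conventions and is an assumption; check the flags defined in main.py):

python main.py --phase train --epoch 200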
Test the model on the validation set of the CMP Facades dataset. It will generate synthesized images from the corresponding label maps under the ./test directory.
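A quick way to eyeball the outputs once testing finishes (a minimal sketch; it assumes the results are written as .png files directly under ./test, so adjust the glob pattern if the actual naming differs):

import glob
from PIL import Image

# File layout is an assumption; adapt the pattern to whatever
# the test phase actually writes under ./test.
paths = sorted(glob.glob("./test/*.png"))
print("found %d synthesized images" % len(paths))
for p in paths[:3]:
    Image.open(p).show()  # spot-check a few results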
The code borrows heavily from pix2pix and DCGAN-tensorflow. Thanks to their authors for the excellent work!
MIT