
Cross-domain Correspondence Learning for Exemplar-based Image Translation (CVPR 2020 oral, official Pytorch implementation)

Teaser

Project page | Paper | Video

Pan Zhang, Bo Zhang, Dong Chen, Lu Yuan, and Fang Wen.

Abstract

We present a general framework for exemplar-based image translation, which synthesizes a photo-realistic image from an input in a distinct domain (e.g., semantic segmentation mask, edge map, or pose keypoints), given an exemplar image. The output has a style (e.g., color, texture) consistent with the semantically corresponding objects in the exemplar. We propose to jointly learn the cross-domain correspondence and the image translation, where both tasks facilitate each other and thus can be learned with weak supervision. The images from distinct domains are first aligned to an intermediate domain where dense correspondence is established. Then, the network synthesizes images based on the appearance of semantically corresponding patches in the exemplar. We demonstrate the effectiveness of our approach on several image translation tasks. Our method significantly outperforms state-of-the-art methods in terms of image quality, with the image style faithful to the exemplar and semantically consistent. Moreover, we show the utility of our method for several applications.
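For intuition, the core of the pipeline can be read as a differentiable nearest-neighbour warp: the two inputs are first embedded into a shared feature domain, a dense correlation is computed between all pairs of spatial positions, and the exemplar is then warped to the input layout by a softmax-weighted average. The snippet below is only a minimal sketch of that idea, not the code in this repository; the encoders, tensor shapes, and temperature are illustrative assumptions.

import torch
import torch.nn.functional as F

def warp_exemplar(feat_input, feat_exemplar, exemplar_rgb, tau=0.01):
    # feat_input / feat_exemplar: (B, C, H, W) features of the input (e.g. a mask)
    # and of the exemplar, mapped into a shared domain by two learned encoders.
    # exemplar_rgb: the exemplar image resized to the feature resolution, (B, 3, H, W).
    B, C, H, W = feat_input.shape
    # Flatten spatial dims and L2-normalize channels so the dot product is a cosine similarity.
    fi = F.normalize(feat_input.view(B, C, H * W), dim=1)     # (B, C, HW)
    fe = F.normalize(feat_exemplar.view(B, C, H * W), dim=1)  # (B, C, HW)
    # Dense correlation between every input position and every exemplar position.
    corr = torch.bmm(fi.transpose(1, 2), fe)                  # (B, HW_in, HW_ex)
    attn = F.softmax(corr / tau, dim=-1)                      # soft correspondence
    # Warp exemplar colors to the input layout via the soft correspondence.
    ex = exemplar_rgb.view(B, 3, H * W)                       # (B, 3, HW_ex)
    warped = torch.bmm(ex, attn.transpose(1, 2))              # (B, 3, HW_in)
    return warped.view(B, 3, H, W)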

:sparkles: News

2022.12 We propose Paint by Example, which enables in-the-wild image editing according to an exemplar, built on Stable Diffusion. You can try our online demo.

2022.8 We propose PITI, a state-of-the-art image-to-image translation method based on a pretrained diffusion model.

2021.5 We propose CoCosNet v2, which produces even more stunning results on high-resolution images. Feel free to give it a try.

Demo

Installation

Clone the Synchronized-BatchNorm-PyTorch repository.

cd models/networks/
git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm .
cd ../../

Install dependencies:

pip install -r requirements.txt

Inference Using Pretrained Model

1) ADE20k (mask-to-image)

Download the pretrained model files from here and save them in checkpoints/ade20k. Then run:

python test.py --name ade20k --dataset_mode ade20k --dataroot ./imgs/ade20k --gpu_ids 0 --nThreads 0 --batchSize 6 --use_attention --maskmix --warp_mask_losstype direct --PONO --PONO_C

The results are saved in output/test/ade20k. If you don't want to use the mask of the exemplar image at test time, download the model files from here, save them in checkpoints/ade20k, and run:

python test.py --name ade20k --dataset_mode ade20k --dataroot ./imgs/ade20k --gpu_ids 0 --nThreads 0 --batchSize 6 --use_attention --maskmix --noise_for_mask --warp_mask_losstype direct --PONO --PONO_C --which_epoch 90
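The --PONO and --PONO_C flags refer to positional normalization (PONO), which normalizes a feature map across its channels at every spatial position (rather than across the batch or spatial dimensions); --PONO_C applies a related centering within the correspondence computation. A minimal sketch of the basic operation, independent of this codebase:

import torch

def pono(x, eps=1e-5):
    # x: (B, C, H, W). Normalize every spatial position across its C channels
    # and return the per-position moments so they can be re-injected later.
    mean = x.mean(dim=1, keepdim=True)               # (B, 1, H, W)
    std = (x.var(dim=1, keepdim=True) + eps).sqrt()  # (B, 1, H, W)
    return (x - mean) / std, mean, std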

2) Celebahq (mask-to-face)

Download the pretrained model files from here, save them in checkpoints/celebahq, then run:

python test.py --name celebahq --dataset_mode celebahq --dataroot ./imgs/celebahq --gpu_ids 0 --nThreads 0 --batchSize 4 --use_attention --maskmix --warp_mask_losstype direct --PONO --PONO_C --warp_bilinear --adaptor_kernel 4

The results will be saved in output/test/celebahq.

3) Celebahq (edge-to-face)

Download the pretrained model files from here, save them in checkpoints/celebahqedge, then run:

python test.py --name celebahqedge --dataset_mode celebahqedge --dataroot ./imgs/celebahqedge --gpu_ids 0 --nThreads 0 --batchSize 4 --use_attention --maskmix --PONO --PONO_C --warp_bilinear --adaptor_kernel 4

The results will be stored in output/test/celebahqedge.

4) DeepFashion (pose-to-image)

Download the pretrained model files from here, save them in checkpoints/deepfashion, then run:

python test.py --name deepfashion --dataset_mode deepfashion --dataroot ./imgs/DeepFashion --gpu_ids 0 --nThreads 0 --batchSize 4 --use_attention --PONO --PONO_C --warp_bilinear --no_flip --warp_patch --video_like --adaptor_kernel 4

The results are saved in output/test/deepfashion.

Training

Pretrained VGG model: download it from here and move it to models/. This model is used to compute the training loss.
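The repository loads that checkpoint for its VGG feature-matching (perceptual) losses. Purely as an illustration of what such a loss looks like, here is a generic sketch built on torchvision's ImageNet-pretrained VGG19 rather than the checkpoint above; the layer slices and weights follow the common SPADE-style recipe and are not necessarily this repository's exact configuration.

import torch
import torch.nn as nn
from torchvision import models

class VGGPerceptualLoss(nn.Module):
    # Generic VGG19 feature-matching loss (illustrative, not the repo's exact loss).
    def __init__(self):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
        for p in vgg.parameters():
            p.requires_grad = False
        # Slice the network after successive ReLU blocks; indices follow the usual recipe.
        self.slices = nn.ModuleList([vgg[:2], vgg[2:7], vgg[7:12], vgg[12:21], vgg[21:30]])
        self.weights = [1.0 / 32, 1.0 / 16, 1.0 / 8, 1.0 / 4, 1.0]
        self.criterion = nn.L1Loss()

    def forward(self, fake, real):
        loss, x, y = 0.0, fake, real
        for w, block in zip(self.weights, self.slices):
            x, y = block(x), block(y)
            loss = loss + w * self.criterion(x, y.detach())
        return loss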

1) ADE20k (mask-to-image)

2) Celebahq (mask-to-face)

3) Celebahq (edge-to-face)

4) DeepFashion (pose-to-image)
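The per-dataset training commands are not reproduced here. Purely as an unverified illustration, and assuming train.py accepts the same dataset and model flags as the inference commands above, an ADE20k run might look roughly like the line below; check train.py and the options files for the actual training-specific flags.

python train.py --name ade20k --dataset_mode ade20k --dataroot /path/to/ADE20K --gpu_ids 0 --batchSize 6 --use_attention --maskmix --warp_mask_losstype direct --PONO --PONO_C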

Citation

If you use this code for your research, please cite our papers.

@inproceedings{zhang2020cross,
  title={Cross-domain Correspondence Learning for Exemplar-based Image Translation},
  author={Zhang, Pan and Zhang, Bo and Chen, Dong and Yuan, Lu and Wen, Fang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5143--5153},
  year={2020}
}

Also, feel free to cite our CoCosNet v2:

@InProceedings{Zhou_2021_CVPR,
  author={Zhou, Xingran and Zhang, Bo and Zhang, Ting and Zhang, Pan and Bao, Jianmin and Chen, Dong and Zhang, Zhongfei and Wen, Fang},
  title={CoCosNet v2: Full-Resolution Correspondence Learning for Image Translation},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={11465--11475},
  year={2021}
}

Acknowledgments

This code borrows heavily from SPADE. We also thank Jiayuan Mao for his Synchronized Batch Normalization code.