
[ECCV 2022 Oral] Official PyTorch implementation of CCPL and SCTNet
Apache License 2.0

CCPL: Contrastive Coherence Preserving Loss for Versatile Style Transfer (ECCV 2022 Oral)

Paper | Video Demo | Web Demo | Supp File

@inproceedings{wu2022ccpl,
  title={CCPL: Contrastive Coherence Preserving Loss for Versatile Style Transfer},
  author={Wu, Zijie and Zhu, Zhen and Du, Junping and Bai, Xiang},
  booktitle={European Conference on Computer Vision},
  pages={189--206},
  year={2022},
  organization={Springer}
}

(animated figure)

Requirements

This code has been tested on Ubuntu 14.04 and 16.04. The whole project works with the environment listed in requirements.txt; to install the dependencies, simply run:

pip install -r requirements.txt
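
To quickly confirm the installation, a minimal check such as the following can be run (it only assumes torch and torchvision are importable; the pre-trained models further down were produced with PyTorch 1.9.1 and torchvision 0.10.1):

```python
# Minimal environment check: print library versions and CUDA availability.
import torch
import torchvision

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
```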

Inspirations for CCPL

(animated figure)

Details of CCPL

(animated figure)
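
For readers who prefer code, below is a rough, simplified sketch of the idea behind CCPL, written against generic (B, C, H, W) encoder features: difference vectors between neighbouring feature locations of the generated image are pulled toward the corresponding differences of the content image with an InfoNCE-style loss. The sampling scheme, temperature, and the absence of a projection head are illustrative assumptions, not the repository's implementation; see the paper and the code for the real details.

```python
# Simplified, illustrative sketch of a contrastive coherence preserving loss.
# NOT the repository's implementation: sampling, temperature and the missing
# projection head are assumptions made for brevity.
import torch
import torch.nn.functional as F

def ccpl_sketch(feat_gen, feat_cont, num_pairs=64, tau=0.07):
    """feat_gen, feat_cont: (B, C, H, W) features of the generated and content images."""
    B, C, H, W = feat_gen.shape
    device = feat_gen.device
    # Random anchor positions (away from the border) and one of their 8 neighbours.
    ys = torch.randint(1, H - 1, (num_pairs,), device=device)
    xs = torch.randint(1, W - 1, (num_pairs,), device=device)
    offsets = torch.tensor([[-1, -1], [-1, 0], [-1, 1], [0, -1],
                            [0, 1], [1, -1], [1, 0], [1, 1]], device=device)
    nb = offsets[torch.randint(0, 8, (num_pairs,), device=device)]
    yn, xn = ys + nb[:, 0], xs + nb[:, 1]

    loss = feat_gen.new_zeros(())
    for b in range(B):
        fg, fc = feat_gen[b], feat_cont[b]           # (C, H, W)
        d_gen = fg[:, ys, xs] - fg[:, yn, xn]        # (C, num_pairs) neighbour differences
        d_con = fc[:, ys, xs] - fc[:, yn, xn]
        d_gen = F.normalize(d_gen.t(), dim=1)        # (num_pairs, C)
        d_con = F.normalize(d_con.t(), dim=1)
        # InfoNCE: the content difference at the same location is the positive,
        # differences at the other sampled locations are negatives.
        logits = d_gen @ d_con.t() / tau             # (num_pairs, num_pairs)
        labels = torch.arange(num_pairs, device=device)
        loss = loss + F.cross_entropy(logits, labels)
    return loss / B
```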

Artistic Style Transfer

Photo-realistic Style Transfer

Super-resolution PST

Short-term Temporal Consistency

Long-term Temporal Consistency

Image-to-image Translation

(animated figure)

Preparations

Download vgg_normalized.pth and put it under models/. Download the COCO2014 dataset (content dataset) and the WikiArt dataset (style dataset).
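
As an optional sanity check after downloading (assuming vgg_normalized.pth stores a plain state_dict, as in the AdaIN-style codebases this project builds on), you can verify that the file loads:

```python
# Optional check that the downloaded VGG weights are readable.
import torch

state_dict = torch.load("models/vgg_normalized.pth", map_location="cpu")
print(f"Loaded {len(state_dict)} parameter tensors from vgg_normalized.pth")
```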

Train

To train a model, use a command like:

python train.py --content_dir <content_dir> --style_dir <style_dir> --log_dir <where to place logs> --save_dir <where to place the trained model> --training_mode <artistic or photo-realistic> --gpu <specify a gpu>

or:

sh scripts/train.sh

Test

To test a model, use commands like:

python test.py --content input/content/lenna.jpg --style input/style/in2.jpg --decoder <decoder_dir> --SCT <SCT_dir> --testing_mode <artistic or photo-realistic>
python test_video_frame.py --content_dir <video frames dir> --style_path input/style/in2.jpg --decoder <decoder_dir> --SCT <SCT_dir> --testing_mode <artistic or photo-realistic> 

or:

sh scripts/test.sh
sh scripts/test_video_frame.sh

Note that test_video_frame.py takes a directory of video frames (not a video file) as its content input.
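
If your source material is a video file rather than frames, a small helper like the sketch below can split it into frames for --content_dir. It is hypothetical (not part of this repo), assumes opencv-python is installed, and the paths are placeholders:

```python
# Hypothetical helper: split a video into numbered frames so the resulting
# directory can be passed to test_video_frame.py via --content_dir.
import os
import cv2  # assumes opencv-python is installed

def video_to_frames(video_path, frame_dir):
    os.makedirs(frame_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(frame_dir, f"{idx:05d}.png"), frame)
        idx += 1
    cap.release()
    print(f"Wrote {idx} frames to {frame_dir}")

video_to_frames("input/videos/example.mp4", "input/video_frames/example")  # placeholder paths
```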

For more details and parameters, please refer to the --help option.

Pre-trained Models

To use the pre-trained models, please download them from the pre-trained models link and specify their paths in the commands above (these models were trained under pytorch-1.9.1 and torchvision-0.10.1).
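
If a different PyTorch version raises deserialization errors, a quick check like the sketch below can narrow things down; the file names are placeholders, not the repository's actual checkpoint names:

```python
# Optional check that the downloaded checkpoints deserialize with the installed
# PyTorch. File names below are placeholders.
import torch

for path in ["models/decoder.pth", "models/sct.pth"]:
    state_dict = torch.load(path, map_location="cpu")
    print(f"{path}: {len(state_dict)} tensors")
```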

Acknowledgments

The code is based on the AdaIN and CUT projects. We sincerely thank the authors for their great work.