
Scenimefy: Learning to Craft Anime Scene via Semi-Supervised Image-to-Image Translation

Yuxin Jiang*, Liming Jiang*, Shuai Yang, Chen Change Loy
MMLab@NTU affiliated with S-Lab, Nanyang Technological University
In ICCV 2023.
:page_with_curl:[**Paper**](https://arxiv.org/abs/2308.12968) **|** :globe_with_meridians:[**Project Page**](https://yuxinn-j.github.io/projects/Scenimefy.html) **|** :open_file_folder:[**Anime Scene Dataset**](#open_file_folder-anime-scene-dataset) **|** 🤗[**Demo**](https://huggingface.co/spaces/YuxinJ/Scenimefy)

Updates

:wrench: Installation

  1. Clone this repo:
    git clone https://github.com/Yuxinn-J/Scenimefy.git
    cd Scenimefy
  2. Install dependent packages. After installing Anaconda, create a new Conda environment:
    conda env create -f Semi_translation/environment.yml
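  3. Activate the new environment before running anything else. The environment name is set by the name field in Semi_translation/environment.yml; Scenimefy below is an assumption, so substitute whatever that file specifies.
    # "Scenimefy" is an assumed environment name; check environment.yml for the actual one.
    conda env list
    conda activate Scenimefy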

:zap: Quick Inference

Two options are available: a Python script or the Gradio demo.

Python script

    • Download the pre-trained model Shinkai_net_G.pth:
      wget https://github.com/Yuxinn-J/Scenimefy/releases/download/v0.1.0/Shinkai_net_G.pth -P Semi_translation/pretrained_models/shinkai-test/
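    • Run inference. The sketch below is an assumption: it follows the CUT-style test interface that the training command later in this README also uses, and ./datasets/Sample is a placeholder folder for your input photos. Check Semi_translation/test.py for the authoritative script name and flags.
      cd Semi_translation
      # Assumed CUT-style test invocation; --name and --checkpoints_dir must point at the
      # folder where Shinkai_net_G.pth was downloaded above, and --epoch Shinkai selects
      # the Shinkai_net_G.pth checkpoint by its filename prefix.
      python test.py --dataroot ./datasets/Sample --name shinkai-test --CUT_mode CUT \
        --model cut --phase test --epoch Shinkai --preprocess none \
        --checkpoints_dir ./pretrained_models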

Gradio demo
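The hosted demo runs on Hugging Face Spaces (see the Demo link above). To try it locally, one option is to clone the Space itself, which bundles its own Gradio app; the entry-point and requirements file names below follow the usual Spaces layout and are assumptions, not something this repo documents.

    # Clone the Hugging Face Space (it is a git repository) and launch its Gradio app.
    # app.py and requirements.txt are the conventional Spaces entry points (assumed here).
    git clone https://huggingface.co/spaces/YuxinJ/Scenimefy Scenimefy-demo
    cd Scenimefy-demo
    pip install -r requirements.txt
    python app.py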

:train: Quick I2I Train

Dataset Preparation
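The folder layout expected by the training command below is not spelled out here; the sketch that follows is an assumption based on the usual CUT-style trainA/trainB convention and on the --dataroot and --paired_dataroot paths used in that command, so verify it against the dataloaders in Semi_translation before training.

    # Assumed layout (CUT-style trainA/trainB folders), relative to Semi_translation/.
    mkdir -p Semi_translation/datasets/unpaired_s2a/{trainA,trainB}   # unpaired real photos / anime scenes
    mkdir -p Semi_translation/datasets/pair_s2a/{trainA,trainB}       # pseudo-paired data for the supervised branch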

Training

Refer to the ./Semi_translation/script/train.sh file, or use the following command:

  python train.py --name exp_shinkai --CUT_mode CUT --model semi_cut \
    --dataroot ./datasets/unpaired_s2a --paired_dataroot ./datasets/pair_s2a \
    --checkpoints_dir ./pretrained_models \
    --dce_idt --lambda_VGG -1 --lambda_NCE_s 0.05 \
    --use_curriculum --gpu_ids 0
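Equivalently, you can launch the bundled script. Whether it expects to be run from the repository root or from Semi_translation is not stated here, so the working directory below is an assumption; check the relative paths inside the script first.

    # Assumed to be launched from Semi_translation, since the dataset and
    # checkpoint paths in the command above are relative to that directory.
    cd Semi_translation
    bash script/train.sh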

:checkered_flag: Start From Scratch

StyleGAN Finetuning [TODO]

:open_file_folder: Anime Scene Dataset

This is a high-quality anime scene dataset comprising 5,958 images.

In compliance with copyright regulations, we cannot directly release the anime images. However, you can conveniently prepare the dataset by following the instructions here.

:love_you_gesture: Citation

If you find this work useful for your research, please consider citing our paper:

@inproceedings{jiang2023scenimefy,
  title={Scenimefy: Learning to Craft Anime Scene via Semi-Supervised Image-to-Image Translation},
  author={Jiang, Yuxin and Jiang, Liming and Yang, Shuai and Loy, Chen Change},
  booktitle={ICCV},
  year={2023}
}

:hugs: Acknowledgments

Our code is mainly built on Cartoon-StyleGAN and Hneg_SRC. We also thank Facebook for Mask2Former.

:newspaper_roll: License

Distributed under the S-Lab License. See LICENSE.md for more information.