A TensorFlow implementation of AnimeGAN for fast photo animation!
The paper can be accessed here or on the website.
Some suggestions:
News:
AnimeGANv2 has been released and can be accessed here.
The improvements in AnimeGANv2 mainly include the following four points:
1. It solves the problem of high-frequency artifacts in the generated images.
2. It is easier to train and directly achieves the results shown in the paper.
3. It further reduces the number of parameters in the generator network.
4. It uses new, high-quality style data, drawn from BD (Blu-ray) movies as much as possible.
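Point 1 concerns high-frequency artifacts. As a generic illustration only (not necessarily the mechanism AnimeGANv2 uses), a total variation penalty is one common way to measure and discourage high-frequency noise in generated images; the `tv_loss` helper below is a hypothetical NumPy sketch of that idea:

```python
import numpy as np

def tv_loss(img: np.ndarray) -> float:
    """Anisotropic total variation of an H x W (x C) image.

    Large values indicate strong high-frequency content; adding a term
    like this to a generator loss penalizes noisy, high-frequency
    artifacts. Illustrative only, not code from this repo.
    """
    dh = np.abs(np.diff(img, axis=0)).sum()  # vertical neighbor differences
    dw = np.abs(np.diff(img, axis=1)).sum()  # horizontal neighbor differences
    return float(dh + dw)

# A flat image has zero total variation; a noisy one does not.
flat = np.full((8, 8), 0.5)
noisy = flat + 0.25 * np.random.default_rng(0).standard_normal((8, 8))
print(tv_loss(flat), tv_loss(noisy) > 0.0)
```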
Usage examples:

```bash
# Inference on a folder of photos
python test.py --checkpoint_dir checkpoint/generator_Hayao_weight --test_dir dataset/test/real --style_name H
# Convert a video to anime
python video2anime.py --video video/input/お花見.mp4 --checkpoint_dir ./checkpoint/generator_Hayao_weight
# Edge smoothing of the style data (preprocessing)
python edge_smooth.py --dataset Hayao --img_size 256
# Training
python train.py --dataset Hayao --epoch 101 --init_epoch 5
# Extract the generator weights from a full training checkpoint
python get_generator_ckpt.py --checkpoint_dir ../checkpoint/AnimeGAN_Hayao_lsgan_300_300_1_1_10 --style_name Hayao
```
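If you run inference for several styles, a small wrapper can assemble the `test.py` command line shown above for each style weight. The `inference_command` helper below is hypothetical (not part of the repo) and simply mirrors the example invocation:

```python
def inference_command(style: str, test_dir: str = "dataset/test/real") -> str:
    """Build the test.py invocation for a given style name.

    Mirrors the usage example above; assumes checkpoints follow the
    checkpoint/generator_<style>_weight naming convention.
    """
    return ("python test.py"
            f" --checkpoint_dir checkpoint/generator_{style}_weight"
            f" --test_dir {test_dir}"
            f" --style_name {style}")

print(inference_command("Hayao"))
```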
:blush: Pictures from the paper "AnimeGAN: a novel lightweight GAN for photo animation"
:heart_eyes: Photo to Hayao Style
This repo is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, and scientific publications. Permission is granted to use AnimeGAN provided that you agree to my license terms. For commercial use, please contact us via email to obtain an authorization letter.
Xin Chen, Gang Liu, Jie Chen
This code is based on CartoonGAN-Tensorflow and Anime-Sketch-Coloring-with-Swish-Gated-Residual-UNet. Thanks to the contributors of these projects.