A morph transfer UGATIT for image translation.
This is a PyTorch implementation of UGATIT, from the paper "U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation".
Additionally, I modify the model by adding two components: an MLP module that learns a latent space, and an identity-preserving loss. These two additions allow UGATIT to achieve progressive domain transfer for image translation. I call this method Morph UGATIT.
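As a rough, illustrative sketch (not the exact modules in this repo; the names, dimensions, and loss formulation below are my assumptions), the two additions can be pictured like this:

import torch
import torch.nn as nn

# Hypothetical sketch only: an MLP that maps a noise code into a latent vector
# used to condition the generator, plus a simple L1 identity-preserving term.
class MappingMLP(nn.Module):
    def __init__(self, in_dim=64, latent_dim=256, n_layers=4):
        super().__init__()
        layers, dim = [], in_dim
        for _ in range(n_layers):
            layers += [nn.Linear(dim, latent_dim), nn.ReLU(inplace=True)]
            dim = latent_dim
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)  # latent code fed to the generator

def identity_preserving_loss(generator, real_b):
    # Passing a target-domain image through the generator should leave it unchanged.
    return torch.mean(torch.abs(generator(real_b) - real_b))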
My work has two aspects:
I train the model on two datasets: "adult2child" and "selfie2anime".
pip install Cmake
pip install Boost
pip install dlib
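If the dlib build succeeds, a quick optional sanity check:
python -c "import dlib; print(dlib.__version__)"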
There are many models in this repo, but you only need two of them and the corresponding Python script files.
The "selfie2anime" dataset comes from official UGATIT repo.
Set configurations. The configuration files can be found in the "configs" dir. You only need to focus on "cfgs_ugatit.py" and "cfgs_s_ugatit_plus.py". Please change:
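(The fields below are only an illustration of the kind of settings you typically edit, such as data and checkpoint locations; the actual field names may differ, so check the files themselves.)

# illustrative example inside configs/cfgs_ugatit.py -- field names are assumptions
dataset_dir = '/path/to/dataset'    # root directory of the training images
saved_dir = './checkpoints'         # where checkpoints and logs are written
batch_size = 1                      # adjust to fit your GPU memory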
Start to train.
cd tool
python train_ugatit.py # ugatit
python train_s_ugatit_plus.py # morph ugatit
You can also use TensorBoard to check the loss curves and some visualizations.
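For example (the log directory depends on your configuration, so the path below is a placeholder):
tensorboard --logdir ${log dir}$ --port 6006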
Since dlib is necessary, you should download the dlib model weights here. Change "alignment_loc" in "tool/demo_xxxx.py" ("xxxx" stands for "ugatit" or "morph_ugatit") to your dlib model weight path. Then put a test image into a directory.
cd tool
python demo_ugatit.py --type ugatit --resume ${ckpt path}$ --input ${image dir}$ --saved-dir ${result location}$ --align
python demo_morph_ugatit.py --resume ${ckpt path}$ --input ${image dir}$ --saved-dir ${result location}$ --align
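For reference, the --align option relies on dlib face detection and landmarks. A minimal sketch of this kind of alignment (not the exact code in the demo scripts; the 68-point predictor file is an assumption) looks like this:

import dlib

alignment_loc = '/path/to/shape_predictor_68_face_landmarks.dat'  # your dlib model weight path
detector = dlib.get_frontal_face_detector()   # HOG-based face detector
predictor = dlib.shape_predictor(alignment_loc)

img = dlib.load_rgb_image('test.jpg')
faces = detector(img, 1)                      # upsample once to catch smaller faces
if faces:
    landmarks = predictor(img, faces[0])
    aligned = dlib.get_face_chip(img, landmarks, size=256)  # cropped and aligned face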
Note:
Here I provide my pretrained model weights:
for "adult2child" dataset
for "selfie2anime" dataset
More results can be seen here.