PyTorch implementation of the thesis project entitled "Photo-to-Emoji Transformation with TraVeLGAN and Perceptual Loss" (or, in Chinese, "基於TraVeLGAN與Perceptual Loss實現照片轉換表情符號之應用").
Training steps:
1. Download all of the files and folders in this repo and prepare the dataset. This project uses the CelebA dataset and a Bitmoji dataset; to generate the Bitmoji images, run `python create_emojis.py` and set the number of Bitmoji images with the `num_emojis` variable.
2. Put the CelebA training set inside the `dataset/CelebA/trainA/` folder, and the CelebA test set inside `dataset/CelebA/test`.
3. Put all of the Bitmoji images inside the `dataset/Bitmoji` folder.
4. Set up the config file inside `configs/cifar.json`. In general, you can set the number of epochs, `n_save_steps`, and `batch_size`. I use `batch_size=32` for faster convergence (see the config sketch after these steps).
5. Run the program using the command:
   `python train.py --log log_photo2emoji --project_name photo2emoji`
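As a reference for step 4, here is a minimal sketch of adjusting the training settings before running `train.py`. The key names (`epochs`, `n_save_steps`, `batch_size`) are assumed from the step above and may not match the actual contents of `configs/cifar.json`, so check the file first.

```python
import json

# Hypothetical sketch: the key names below are assumed from the steps above
# and may differ from what configs/cifar.json actually uses.
config_path = "configs/cifar.json"

with open(config_path) as f:
    cfg = json.load(f)

cfg["epochs"] = 500        # the released pretrained model was trained for 500 epochs
cfg["batch_size"] = 32     # larger batches converged faster in my experiments
cfg["n_save_steps"] = 100  # example value: how often checkpoints are written

with open(config_path, "w") as f:
    json.dump(cfg, f, indent=4)
```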
Testing steps:
1. Change the `saved_model` key in `config.json` to `./log_photo2emoji/model_500.pt`, or to the checkpoint of whichever iteration you want to use.
2. Run the program using the command:
   `python testAtoB.py --project_name photo2emoji --log log_photo2emoji`

NB: You can download the pretrained model from the OneDrive Link and place it in the `log_photo2emoji` folder (a quick way to sanity-check the downloaded checkpoint is sketched below).
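A minimal sketch for checking that the downloaded checkpoint loads. The internal structure of `model_500.pt` (a plain state dict vs. a dict of several state dicts) is an assumption here, so inspect the printed keys and compare them with what `train.py` saves.

```python
import torch

# Hypothetical sanity check: just confirm the checkpoint file deserializes.
# The structure of model_500.pt is an assumption; inspect the keys to see
# what train.py actually stores (e.g. generator/discriminator/siamese weights).
ckpt = torch.load("log_photo2emoji/model_500.pt", map_location="cpu")

if isinstance(ckpt, dict):
    print("checkpoint keys:", list(ckpt.keys())[:10])
else:
    print("checkpoint type:", type(ckpt))
```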
The following shows the basic folder structure.
├── configs                        # config.json folder
├── dataset
│   ├── CelebA                     # Domain A (not included in this repo)
│   │   ├── trainA
│   │   └── trainA_pair            # edge-promoting results of CelebA to be saved here
│   │
│   ├── Bitmoji                    # Domain B (not included in this repo)
│   │   ├── trainB
│   │   └── trainB_pair            # edge-promoting results of Bitmoji to be saved here
│   │
│   ├── bitmoji_api_info.md
│   ├── create_emojis.py
│   └── create_emojis_parallel.py
│
├── networks
│   └── default.py                 # the Generator, Discriminator, and Siamese networks
│
├── photo2emoji                    # will be created by the --project_name photo2emoji command
├── log_photo2emoji
│   └── model_500.pt               # download this file (link in the Pretrained section)
│
├── samples                        # result samples folder
├── edge_promoting.py
├── losses.py                      # loss functions code
├── testAtoB.py                    # test code
├── train.py
├── trainer.py
└── utils.py
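The `trainA_pair` and `trainB_pair` folders above hold the edge-promoting results produced by `edge_promoting.py`. As a rough illustration only, the sketch below shows the CartoonGAN-style edge-smoothing step that this naming suggests; the actual `edge_promoting.py` may use different kernel sizes, image resolution, and output layout.

```python
import os
import cv2
import numpy as np

def edge_promote(src_dir, dst_dir, size=256):
    """Illustration of CartoonGAN-style edge smoothing (an assumption based on the
    folder names above); the repo's edge_promoting.py may differ in its details."""
    os.makedirs(dst_dir, exist_ok=True)
    kernel = np.ones((5, 5), np.uint8)
    for i, name in enumerate(sorted(os.listdir(src_dir))):
        img = cv2.imread(os.path.join(src_dir, name))
        if img is None:
            continue
        img = cv2.resize(img, (size, size))
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)          # detect edges
        mask = cv2.dilate(edges, kernel) != 0      # widen them into regions
        blurred = cv2.GaussianBlur(img, (5, 5), 0)
        smoothed = img.copy()
        smoothed[mask] = blurred[mask]             # blur only the edge regions
        # Save the original and its edge-smoothed version side by side as a pair.
        pair = np.concatenate([img, smoothed], axis=1)
        cv2.imwrite(os.path.join(dst_dir, f"{i}.png"), pair)

# Example usage (paths are assumptions based on the tree above):
# edge_promote("dataset/Bitmoji/trainB", "dataset/Bitmoji/trainB_pair")
```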
You can download the pretrained model (after 500 epochs) of this implementation from the OneDrive Link.
This implementation code is inspired by