PyTorch code for the arXiv paper "Semi-parametric Makeup Transfer via Semantic-aware Correspondence"
Expected layout of the training data:
opt.dataroot=MT-Dataset
├── images
│ ├── makeup
│ └── non-makeup
├── parsing
│ ├── makeup
│ └── non-makeup
├── makeup.txt
├── non-makeup.txt
Expected layout of the test data:
opt.dataroot
├── images
│ ├── makeup
│ └── non-makeup
├── parsing
│ ├── makeup
│ └── non-makeup
├── makeup_test.txt
├── non-makeup_test.txt
Expected layout of the demo data:
opt.dataroot
├── images
│ ├── makeup
│ └── non-makeup
├── makeup_test.txt
├── non-makeup_test.txt
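The makeup.txt / non-makeup.txt files (and their *_test.txt counterparts) are plain-text lists of image names. If you need to regenerate them for your own data, a minimal sketch along these lines should work; the assumption that each line holds one image filename under the corresponding images/ subfolder is ours, so check the dataset loader in this repo for the exact format it expects.

```python
import os

def write_image_list(image_dir, list_path):
    # Assumption: the list file simply enumerates the image filenames found
    # under images/makeup or images/non-makeup, one per line.
    names = sorted(
        f for f in os.listdir(image_dir)
        if f.lower().endswith((".jpg", ".jpeg", ".png"))
    )
    with open(list_path, "w") as fh:
        fh.write("\n".join(names) + "\n")

dataroot = "MT-Dataset"  # corresponds to opt.dataroot
write_image_list(os.path.join(dataroot, "images", "makeup"),
                 os.path.join(dataroot, "makeup.txt"))
write_image_list(os.path.join(dataroot, "images", "non-makeup"),
                 os.path.join(dataroot, "non-makeup.txt"))
```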
Facial masks for an arbitrary image will be obtained from the face parsing model (we borrow the model from https://github.com/zllrunning/face-parsing.PyTorch).
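A minimal sketch of obtaining a parsing map with that model, following the usage in that repository's test script; the checkpoint filename '79999_iter.pth' and the 19-class label set are assumptions taken from that repo, so verify them against its README.

```python
# Assumes face-parsing.PyTorch is on PYTHONPATH and its pretrained
# checkpoint has been downloaded.
import torch
from PIL import Image
from torchvision import transforms
from model import BiSeNet  # defined in face-parsing.PyTorch

net = BiSeNet(n_classes=19).cuda().eval()
net.load_state_dict(torch.load("79999_iter.pth"))

to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406),
                         (0.229, 0.224, 0.225)),
])

img = Image.open("example.jpg").convert("RGB").resize((512, 512), Image.BILINEAR)
with torch.no_grad():
    out = net(to_tensor(img).unsqueeze(0).cuda())[0]
# Per-pixel label map (0 = background, other indices = facial regions).
parsing = out.squeeze(0).argmax(0).cpu().numpy()
```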
python train.py --phase train
Check the file 'options/demo_options.py' and change the corresponding configs if needed
Create folder '/checkpoints/makeup_transfer/'
Download the pre-trained model from Google Drive and put it into '/checkpoints/makeup_transfer/'
python demo.py --demo_mode normal
Notice:
Available demo modes: 'normal', 'interpolate', 'removal', 'multiple_refs', 'partly'
For part-specific makeup transfer (opt.demo_mode='partly'), make sure there are at least 3 reference images.
For interpolation between multiple references, make sure there are at least 4 reference images.
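To illustrate the idea behind interpolation between multiple references, the sketch below blends per-reference transfer results with convex weights. This is a conceptual illustration only; the actual demo may interpolate makeup representations inside the network rather than blending output images, and the function name is ours.

```python
import torch

def interpolate_results(results, weights):
    # Convex combination of transfer results obtained with different
    # reference images. `results` is a list of (C, H, W) tensors and
    # `weights` a list of non-negative numbers (normalized below).
    stacked = torch.stack(results, dim=0)              # (N, C, H, W)
    w = torch.tensor(weights, dtype=stacked.dtype)
    w = w / w.sum()                                    # make the weights sum to 1
    return (w.view(-1, 1, 1, 1) * stacked).sum(dim=0)  # (C, H, W)

# Example: blend four results with equal weights.
# blended = interpolate_results([res0, res1, res2, res3], [0.25, 0.25, 0.25, 0.25])
```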
python demo_general.py --beyond_mt
Transfer different parts from different references
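As a rough illustration of composing different parts from different references, the sketch below pastes region-specific results onto a base image using the source parsing map. The label indices follow the common 19-class face-parsing scheme and, like the function and variable names, are assumptions for illustration rather than the repo's demo code.

```python
import torch

# Hypothetical label groups; verify the indices against the parsing model used.
REGIONS = {"lip": [12, 13], "skin": [1, 10, 14], "eye": [2, 3, 4, 5]}

def compose_partial(results, parsing, assignment, base):
    # results:    dict name -> transferred image tensor (C, H, W)
    # parsing:    (H, W) integer label map of the source face (torch tensor)
    # assignment: dict region name -> result name, e.g. {"lip": "ref0"}
    # base:       (C, H, W) tensor used wherever no region is assigned
    out = base.clone()
    for region, ref_name in assignment.items():
        mask = torch.zeros_like(parsing, dtype=torch.bool)
        for lbl in REGIONS[region]:
            mask |= parsing == lbl
        out[:, mask] = results[ref_name][:, mask]
    return out

# Example: lips from ref0's result, eyes from ref1's result, rest from the base.
# out = compose_partial({"ref0": r0, "ref1": r1}, parsing,
#                       {"lip": "ref0", "eye": "ref1"}, base)
```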