[NeurIPS 2024] SHMT: Self-supervised Hierarchical Makeup Transfer via Latent Diffusion Models.
Work done during internship of Zhaoyang Sun at DAMO Academy, Alibaba Group.
This is a preliminary code release and is not guaranteed to be completely correct; some errors may remain. I have been busy with a job search recently; once it is over, I will re-upload the organized code and the pre-trained models.
A suitable conda environment named ldm can be created and activated with:
conda env create -f environment.yaml
conda activate ldm
Download a pretrained autoencoding model from LDM; VQ-f4 is the one used in our experiments.
Data preparation. Prepare your data according to the following directory structure.
/MakeupData/train/
├── images # original images
│ ├── 00001.jpg
│ ├── 00002.jpg
│ ├── ...
├── segs # face parsing
│ ├── 00001.jpg
│ ├── 00002.jpg
│ ├── ...
├── 3d # 3D face images
│ ├── 00001_3d.jpg
│ ├── 00002_3d.jpg
│ ├── ...
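Training silently assumes that every image has a matching face-parsing map and 3D image. The helper below is a small sketch (not part of the repo) that checks this pairing before training, assuming the exact layout and the "_3d" filename suffix shown above:

```python
# Sketch: verify the /MakeupData/train/ layout described above.
# Assumes .jpg files and the "<stem>_3d.jpg" naming for the 3d folder.
import os

def check_train_layout(root):
    """Return the image stems that are missing a segs or 3d counterpart."""
    stems = [os.path.splitext(f)[0]
             for f in os.listdir(os.path.join(root, "images"))]
    missing = []
    for stem in stems:
        seg = os.path.join(root, "segs", stem + ".jpg")
        depth = os.path.join(root, "3d", stem + "_3d.jpg")
        if not (os.path.exists(seg) and os.path.exists(depth)):
            missing.append(stem)
    return missing
```

An empty return value means the dataset is consistent; any stems it reports should be fixed before launching training.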
Change the pre-trained model path and the data path in the configuration files (located in './configs/latent-diffusion/') to your own.
Execution of training scripts. Note that the trailing comma in --gpus 0, is intentional: PyTorch Lightning parses it as a list of device indices.
CUDA_VISIBLE_DEVICES=0 python main.py --base configs/latent-diffusion/shmt_h0.yaml -t --gpus 0,
CUDA_VISIBLE_DEVICES=0 python main.py --base configs/latent-diffusion/shmt_h4.yaml -t --gpus 0,
/MakeupData/test/
├── images # original images
│ ├── non_makeup
│ │ ├── 00001.jpg
│ │ ├── 00002.jpg
│ │ ├── ...
│ ├── makeup
│ │ ├── 00001.jpg
│ │ ├── 00002.jpg
│ │ ├── ...
├── segs # face parsing
│ ├── non_makeup
│ │ ├── 00001.jpg
│ │ ├── 00002.jpg
│ │ ├── ...
│ ├── makeup
│ │ ├── 00001.jpg
│ │ ├── 00002.jpg
│ │ ├── ...
├── 3d # 3D images (needed only for non_makeup)
│ ├── non_makeup
│ │ ├── 00001.jpg
│ │ ├── 00002.jpg
│ │ ├── ...
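At test time each non-makeup source can be combined with each makeup reference. The snippet below is an illustrative sketch (the helper name is my own, not from the repo) that enumerates all such (source, reference) pairs from the layout above:

```python
# Sketch: list every (non_makeup, makeup) test pair from /MakeupData/test/.
# Assumes the images/non_makeup and images/makeup folders shown above.
import itertools
import os

def list_test_pairs(root):
    """Return all (source, reference) filename pairs for makeup transfer."""
    src = sorted(os.listdir(os.path.join(root, "images", "non_makeup")))
    ref = sorted(os.listdir(os.path.join(root, "images", "makeup")))
    return list(itertools.product(src, ref))
```

With N source images and M references this yields N*M candidate transfers, which is how full pairwise evaluation grids are usually built.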
Execution of inference scripts.
CUDA_VISIBLE_DEVICES=0 python makeup_inference_h0.py \
--outdir your_output_dir \
--config configs/latent-diffusion/shmt_h0.yaml \
--ckpt your_ckpt_path \
--source_image_path your_non_makeup_images_path \
--source_seg_path your_non_makeup_segs_path \
--source_depth_path your_non_makeup_3d_path \
--ref_image_path your_makeup_images_path \
--ref_seg_path your_makeup_segs_path \
--seed 321 \
--ddim_steps 50
CUDA_VISIBLE_DEVICES=0 python makeup_inference_h4.py \
--outdir your_output_dir \
--config configs/latent-diffusion/shmt_h4.yaml \
--ckpt your_ckpt_path \
--source_image_path your_non_makeup_images_path \
--source_seg_path your_non_makeup_segs_path \
--source_depth_path your_non_makeup_3d_path \
--ref_image_path your_makeup_images_path \
--ref_seg_path your_makeup_segs_path \
--seed 321 \
--ddim_steps 50
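Diffusion sampling results vary with the random seed, so it can be useful to sweep a few seeds per input. The loop below is a dry-run sketch: it prints one h0 inference command per seed using only the flags documented above (remove the echo to actually execute them; the paths remain the same placeholders).

```shell
# Dry-run sketch: print one h0 inference command per seed.
# Drop "echo" to run for real. Paths are the placeholder values from above.
for seed in 111 222 321; do
  echo CUDA_VISIBLE_DEVICES=0 python makeup_inference_h0.py \
    --outdir "your_output_dir/seed_${seed}" \
    --config configs/latent-diffusion/shmt_h0.yaml \
    --ckpt your_ckpt_path \
    --source_image_path your_non_makeup_images_path \
    --source_seg_path your_non_makeup_segs_path \
    --source_depth_path your_non_makeup_3d_path \
    --ref_image_path your_makeup_images_path \
    --ref_seg_path your_makeup_segs_path \
    --seed "${seed}" \
    --ddim_steps 50
done
```

Writing each run to its own seed_* subdirectory keeps the outputs from different seeds separate for side-by-side comparison.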
If this work is helpful for your research, please consider citing the following BibTeX entry.
@article{sun2024shmt,
  title={SHMT: Self-supervised Hierarchical Makeup Transfer via Latent Diffusion Models},
  author={Sun, Zhaoyang and Xiong, Shengwu and Chen, Yaxiong and Du, Fei and Chen, Weihua and Wang, Fang and Rong, Yi},
  journal={Advances in Neural Information Processing Systems},
  year={2024}
}
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.