The official PyTorch implementation for the following paper:
Face2Diffusion for Fast and Editable Face Personalization,
Kaede Shiohara, Toshihiko Yamasaki,
CVPR 2024
2024/03/28 🔥 Released a demo on Google Colab.
2024/03/15 🔥 Released the full inference pipeline.
2024/03/11 🔥 Released this repository and the demo code.
Build a Docker image:
```bash
docker build -t face2diffusion dockerfiles/
```
Run the container with the following command (replace /path/to/this/repository with the actual path to your clone):
```bash
docker run -it --gpus all --shm-size 512G \
    -v /path/to/this/repository:/workspace \
    face2diffusion bash
```
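Once inside the container, a quick sanity check such as the sketch below (PyTorch is assumed to be available, since this is the official PyTorch implementation) confirms that the GPUs passed via `--gpus all` are visible:

```python
# Minimal GPU visibility check to run inside the container.
import torch

print("CUDA available:", torch.cuda.is_available())
print("Visible GPUs:", torch.cuda.device_count())
```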
Install the required packages:
```bash
bash install.sh
```
We provide checkpoints for the mapping network and the MSID encoder. Download them and place them in checkpoints/.
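A small sanity check like the sketch below verifies the checkpoints are in place; the file names match the flags used by inference_f2d.py later in this README:

```python
# Verify that both checkpoints were downloaded and placed correctly.
from pathlib import Path

for name in ("mapping.pt", "msid.pt"):
    path = Path("checkpoints") / name
    if not path.is_file():
        raise FileNotFoundError(f"Missing checkpoint: {path}")
print("All checkpoints found.")
```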
Generate images from a facial image and a text prompt:
```bash
python3 inference_f2d.py \
    --w_map checkpoints/mapping.pt \
    --w_msid checkpoints/msid.pt \
    -i input/0.jpg \
    -p 'f l eating bread in front of the Eiffel Tower' \
    -o output.png \
    -n 8
```
- `--w_map`: mapping network checkpoint
- `--w_msid`: MSID encoder checkpoint
- `-i`: input identity image
- `-p`: input prompt
- `-o`: output file name
- `-n`: number of images to generate
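To personalize several identities or prompts in one run, the CLI can be driven from a small script. The sketch below is a hypothetical wrapper (the input file names are placeholders); it uses only the flags documented above:

```python
# Hypothetical batch driver for inference_f2d.py; adjust paths as needed.
import subprocess

identities = ["input/0.jpg"]  # placeholder identity images
prompts = ["f l eating bread in front of the Eiffel Tower"]

for i, image in enumerate(identities):
    for j, prompt in enumerate(prompts):
        subprocess.run(
            [
                "python3", "inference_f2d.py",
                "--w_map", "checkpoints/mapping.pt",
                "--w_msid", "checkpoints/msid.pt",
                "-i", image,
                "-p", prompt,
                "-o", f"output_{i}_{j}.png",
                "-n", "8",
            ],
            check=True,
        )
```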
Note: the identifier S* must be written as "f l" in the prompt.
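Prompts can also be written with a symbolic S* placeholder and expanded programmatically. The helper below is a hypothetical convenience, not part of the repository:

```python
# Hypothetical helper: expand the symbolic S* placeholder to the identifier "f l".
IDENTIFIER = "f l"

def make_prompt(template: str) -> str:
    """Replace the S* placeholder with the identifier expected by the model."""
    return template.replace("S*", IDENTIFIER)

print(make_prompt("S* eating bread in front of the Eiffel Tower"))
# -> f l eating bread in front of the Eiffel Tower
```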
We borrow some code from InsightFace, ICT, and Pic2Word.
If you find our work useful for your research, please consider citing our paper:
```bibtex
@inproceedings{shiohara2024face2diffusion,
  title={Face2Diffusion for Fast and Editable Face Personalization},
  author={Shiohara, Kaede and Yamasaki, Toshihiko},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
}
```