
[CVPR 2024] Face2Diffusion for Fast and Editable Face Personalization https://arxiv.org/abs/2403.05094

Face2Diffusion (CVPR2024)


Overview

The official PyTorch implementation of the following paper:

Face2Diffusion for Fast and Editable Face Personalization,
Kaede Shiohara, Toshihiko Yamasaki,
CVPR 2024

Changelog

2024/03/28 πŸ”₯ Released a demo code on Google Colab.
2024/03/15 πŸ”₯ Released the full inference pipeline.
2024/03/11 πŸ”₯ Released this repository and demo code.

Demo on Google Colab

**face2diffusion_demo** (Colab notebook)

Recommended Development Environment

Setup

1. Build Docker Image

Build the Docker image:

docker build -t face2diffusion dockerfiles/

Run a container from the image (replace /path/to/this/repository with the actual path to this repository):

docker run -it --gpus all --shm-size 512G \
-v /path/to/this/repository:/workspace \
face2diffusion bash

Install the required packages inside the container:

bash install.sh
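After install.sh completes, a quick sanity check confirms that PyTorch is importable and a CUDA device is visible. This is a minimal sketch, not part of the repository; it assumes only the standard library unless torch is present:

```python
import importlib.util

# Report whether PyTorch is importable and whether a CUDA device is visible.
if importlib.util.find_spec("torch") is not None:
    import torch
    print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
else:
    print("torch not installed - rerun install.sh inside the container")
```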

2. Download checkpoints

We provide checkpoints for the mapping network and the MSID encoder. Download them and place them in checkpoints/.
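The expected layout can be verified with a short script. This is a convenience sketch, not part of the repository; the filenames mapping.pt and msid.pt follow the flags used in the demo command, so adjust them if your downloaded files are named differently:

```python
from pathlib import Path

# Expected checkpoint files (names match the --w_map / --w_msid flags
# used in the demo command; adjust if your filenames differ).
EXPECTED = ["mapping.pt", "msid.pt"]

ckpt_dir = Path("checkpoints")
ckpt_dir.mkdir(exist_ok=True)

missing = [name for name in EXPECTED if not (ckpt_dir / name).is_file()]
if missing:
    print("Missing checkpoints:", ", ".join(missing))
else:
    print("All checkpoints in place.")
```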

Demo

Generate images from a facial image and a text prompt:

python3 inference_f2d.py \
--w_map checkpoints/mapping.pt \
--w_msid checkpoints/msid.pt \
-i input/0.jpg \
-p 'f l eating bread in front of the Eiffel Tower' \
-o output.png \
-n 8

Flags: -i input identity image, -p text prompt, -o output file name, -n number of images to generate.

Note: The identifier S* must be written as "f l" in the prompt.
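To avoid typos in the identifier, prompts can be built programmatically. The helper below is a hypothetical convenience, not part of the repository:

```python
# "f l" stands in for the learned identity token S* (see the note above).
IDENTIFIER = "f l"

def build_prompt(template: str) -> str:
    """Insert the identity identifier into a prompt template."""
    return template.format(id=IDENTIFIER)

print(build_prompt("{id} eating bread in front of the Eiffel Tower"))
# prints: f l eating bread in front of the Eiffel Tower
```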

Acknowledgements

We borrow some code from InsightFace, ICT, and Pic2Word.

Citation

If you find our work useful for your research, please consider citing our paper:

@inproceedings{shiohara2024face2diffusion,
  title={Face2Diffusion for Fast and Editable Face Personalization},
  author={Shiohara, Kaede and Yamasaki, Toshihiko},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
}