Maomao Li, Ge Yuan, [Cairong Wang](), [Zhian Liu](), Yong Zhang, Yongwei Nie, Jue Wang, Dong Xu
Our code requires Python 3.10+, PyTorch 2.0+, CUDA 12+, etc.
conda create -n e4s2023 python=3.10
conda activate e4s2023
pip install -r requirements.txt
export PYTHONPATH=$PWD
All the weights (including our E4S weights and other third-party weights) can be downloaded from here.
Please put all of them into ./pretrained, organized like this:
pretrained
├── codeformer/
├── E4S/
├── face_blender/
├── faceseg/
├── faceVid2Vid/
├── GPEN/
├── inpainting/
├── pixel2style2pixel/
├── pose/
├── SwinIR/
└── zhian/
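Before launching the demo, it can help to confirm the weights were unpacked into the layout above. The folder names below come from the tree; the checker script itself is an illustrative helper, not part of the official repo:

```python
from pathlib import Path

# Expected sub-directories under ./pretrained (taken from the tree above).
EXPECTED = [
    "codeformer", "E4S", "face_blender", "faceseg", "faceVid2Vid",
    "GPEN", "inpainting", "pixel2style2pixel", "pose", "SwinIR", "zhian",
]

def missing_weights(root="pretrained"):
    """Return the expected weight sub-directories that are absent."""
    base = Path(root)
    return [name for name in EXPECTED if not (base / name).is_dir()]

if __name__ == "__main__":
    missing = missing_weights()
    if missing:
        print("Missing weight folders:", ", ".join(missing))
    else:
        print("All pretrained folders found.")
```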
Run the face swapping gradio web-ui demo locally:
git clone https://github.com/e4s2023/E4S2023.git
cd E4S2023
python gradio_swap.py
We follow STIT and AllInOneDeFliker to make the video face swapping results more stable, as detailed in our [paper](). This repo contains only the PTI tuning step of STIT; we found that PTI tuning alone is sufficient for StyleGAN to generate stable video frames. The AllInOneDeFliker code has not been incorporated into this repo yet, so you may visit their GitHub page for further post-processing of the PTI-tuned frames.
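The PTI tuning step can be sketched as follows: each frame is first inverted to a latent "pivot", then the generator weights are fine-tuned so that the frozen pivots reconstruct their frames. This is a minimal illustration assuming a generic StyleGAN-like `generator(latent) -> image` interface; the names (`generator`, `pivots`, `frames`) are illustrative and do not match this repo's actual API.

```python
import torch
import torch.nn.functional as F

def pti_tune(generator, pivots, frames, steps=100, lr=3e-4):
    """Fine-tune generator weights around fixed latent pivots so that
    G(w_i) reconstructs frame_i; the pivots themselves stay frozen."""
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(steps):
        total = 0.0
        for w, frame in zip(pivots, frames):
            recon = generator(w)
            # Reconstruction loss; the real PTI objective also adds an
            # LPIPS perceptual term and a locality regularizer.
            total = total + F.mse_loss(recon, frame)
        opt.zero_grad()
        total.backward()
        opt.step()
    return generator
```

Because only the generator weights move while the pivots stay fixed, nearby frames keep consistent latents, which is what stabilizes the generated video.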
Click https://e4s2023.github.io/ to see our video face swapping results.
@misc{liE4S,
  Author = {Maomao Li and Ge Yuan and Cairong Wang and Zhian Liu and Yong Zhang and Yongwei Nie and Jue Wang and Dong Xu},
  Title = {E4S: Fine-Grained Face Swapping via Regional GAN Inversion},
  Year = {2023},
  Eprint = {arXiv:xxxx},
}
This repository borrows heavily from E4S (CVPR 2023) and STIT. Thanks to the authors for sharing their code and models.
This is not an official Tencent product. The copyrights of the demo images and audio belong to community users. Feel free to contact us if you would like them removed.