
Do Inpainting Yourself: Generative Facial Inpainting Guided by Exemplars (EXE-GAN)

Official PyTorch implementation of EXE-GAN. [Homepage] [paper] [demo_youtube] [demo_bilibili]

We present EXE-GAN, a novel exemplar-guided facial inpainting framework using generative adversarial networks. Our approach can not only preserve the quality of the input facial image but also complete the image with exemplar-like facial attributes.

Performance

Notice

Our paper was first released on Sun, 13 Feb 2022. We are grateful for the community's recognition of and attention to our project. We also recognize that several excellent papers have been published since ours, and we encourage you to check out their projects as well.

Requirements

```bash
cd EXE-GAN  # enter the project directory
pip install -r requirements.txt
```
What we have released

Training

Testing

Notice

- `mask_root`: root folder containing the irregular mask images
- `mask_file_root`: folder containing the mask file-name list files (e.g., test_2.txt)
- `mask_type`: one of `["center", "test_2.txt", "test_3.txt", "test_4.txt", "test_5.txt", "test_6.txt", "all"]` (see the sketch after this list for how these options can be combined)
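
For reference, here is a minimal sketch of how a `mask_type` option could be resolved into a list of mask paths using `mask_root` and `mask_file_root`. The helper `load_mask_list` is hypothetical and not part of the released code; the handling of `"center"` and `"all"` is an assumption based on the option names.

```python
import os

def load_mask_list(mask_root, mask_file_root, mask_type):
    """Hypothetical helper: resolve a mask_type option into mask image paths."""
    if mask_type == "center":
        # A fixed center mask needs no files; it can be generated on the fly.
        return []
    # Assumption: "all" means every ratio-specific list file test_2.txt .. test_6.txt.
    list_files = [f"test_{i}.txt" for i in range(2, 7)] if mask_type == "all" else [mask_type]
    mask_paths = []
    for list_file in list_files:
        with open(os.path.join(mask_file_root, list_file)) as f:
            mask_paths += [os.path.join(mask_root, line.strip()) for line in f if line.strip()]
    return mask_paths
```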

Exemplar-guided facial image recovery

Notice

(Use our FFHQ_60k pre-trained model EXE_GAN_model.pt, or a *.pt checkpoint you trained yourself.)

```bash
python guided_recovery.py \
    --psp_checkpoint_path ./pre-train/psp_ffhq_encode.pt \
    --ckpt ./checkpoint/EXE_GAN_model.pt \
    --masked_dir ./imgs/exe_guided_recovery/mask \
    --gt_dir ./imgs/exe_guided_recovery/target \
    --exemplar_dir ./imgs/exe_guided_recovery/exemplar \
    --sample_times 10 \
    --eval_dir ./recover_out
```

- `masked_dir`: folder containing the input masks
- `gt_dir`: folder containing the ground-truth images to be edited
- `exemplar_dir`: folder containing the exemplar images that guide the editing
- `eval_dir`: output folder (a folder-consistency sketch follows this list)
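
The mask, ground-truth, and exemplar folders are expected to correspond image-by-image (an assumption based on the folder layout above). A small sanity check, illustrative only and not part of the repository, can catch size mismatches before running `guided_recovery.py`:

```python
import os

def count_images(folder, exts=(".png", ".jpg", ".jpeg")):
    """Count image files in a folder (illustrative helper, not part of the repo)."""
    return len([f for f in os.listdir(folder) if f.lower().endswith(exts)])

dirs = {
    "masked_dir": "./imgs/exe_guided_recovery/mask",
    "gt_dir": "./imgs/exe_guided_recovery/target",
    "exemplar_dir": "./imgs/exe_guided_recovery/exemplar",
}
counts = {name: count_images(path) for name, path in dirs.items()}
print(counts)
if len(set(counts.values())) != 1:
    print("Warning: the three folders contain different numbers of images.")
```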
Results (images omitted): each example shows Ground truth, Mask, Exemplar, and the Inpainted output, followed by diversity samples 1-4.

Exemplar-guided style mixing

Notice

(Use our FFHQ_60k pre-trained model EXE_GAN_model.pt, or a *.pt checkpoint you trained yourself.)

```bash
python exemplar_style_mixing.py \
    --psp_checkpoint_path ./pre-train/psp_ffhq_encode.pt \
    --ckpt ./checkpoint/EXE_GAN_model.pt \
    --masked_dir ./imgs/exe_guided_recovery/mask \
    --gt_dir ./imgs/exe_guided_recovery/target \
    --exemplar_dir ./imgs/exe_guided_recovery/exemplar \
    --sample_times 2 \
    --eval_dir mixing_out
```

- `masked_dir`: folder containing the input masks
- `gt_dir`: folder containing the ground-truth images to be edited
- `exemplar_dir`: folder containing the exemplar images that guide the editing (a schematic style-mixing sketch follows this list)
- `eval_dir`: output folder
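
Conceptually, style mixing swaps per-layer latent styles between two exemplars at a crossover point, as in standard StyleGAN-style mixing. The sketch below is schematic (made-up layer count and latent size) and does not reproduce the repository's actual implementation:

```python
import torch

def mix_styles(w_exemplar1, w_exemplar2, crossover_layer):
    """Schematic style mixing: layers before the crossover come from exemplar 1,
    layers from the crossover onward come from exemplar 2.
    w_* have shape [num_layers, latent_dim]."""
    w_mixed = w_exemplar1.clone()
    w_mixed[crossover_layer:] = w_exemplar2[crossover_layer:]
    return w_mixed

# Example with 14 style layers and 512-dim latents (typical StyleGAN2 settings at 256x256).
w1, w2 = torch.randn(14, 512), torch.randn(14, 512)
w_mix = mix_styles(w1, w2, crossover_layer=7)
```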
Results (images omitted): each example shows Ground truth, Mask, Exemplar 1, Exemplar 2, and the mixed inpainting outputs.

Editing masks yourself

(gen_mask: demo of the mask editing tool omitted)

We also provide a mask editing tool, which you can use to generate your own masks for editing.

```bash
python mask_gui.py
```
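
If you prefer not to use the GUI, a binary mask of the same resolution as the input image can also be created programmatically. The snippet below is a sketch; it assumes white pixels mark the region to inpaint, so verify the convention against the example masks in `./imgs/exe_guided_recovery/mask`:

```python
import numpy as np
from PIL import Image

# Sketch only: a 256x256 mask with a rectangular hole in the center.
# Assumption: white (255) = region to inpaint, black (0) = region to keep.
mask = np.zeros((256, 256), dtype=np.uint8)
mask[64:192, 64:192] = 255
Image.fromarray(mask).save("custom_mask.png")
```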

Bibtex

Acknowledgements

Model details and custom CUDA kernel code come from the official StyleGAN2 repository: https://github.com/NVlabs/stylegan2

Code for Learned Perceptual Image Patch Similarity (LPIPS) comes from https://github.com/richzhang/PerceptualSimilarity
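
For reference, that repository's metric is also available as the `lpips` pip package; a minimal usage sketch (inputs are RGB tensors scaled to [-1, 1]):

```python
import lpips
import torch

loss_fn = lpips.LPIPS(net='alex')          # or net='vgg'
img0 = torch.rand(1, 3, 256, 256) * 2 - 1  # RGB batch in [-1, 1]
img1 = torch.rand(1, 3, 256, 256) * 2 - 1
distance = loss_fn(img0, img1)             # lower = more perceptually similar
```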

To match FID scores more closely to the official TensorFlow implementation, we use the FID Inception V3 implementation from https://github.com/mseitzer/pytorch-fid
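
For reference, pytorch-fid can be run directly from the command line on two image folders (the paths below are placeholders):

```bash
pip install pytorch-fid
python -m pytorch_fid path/to/real_images path/to/generated_images
```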