
StyleGAN-based Semantic Face Editing

This implementation is based on InterfaceGAN (CVPR 2020). Instead of the StyleGANv1 generator used in the original InterfaceGAN code, the codebase has been adapted to StyleGANv2, so the editing pipeline works with StyleGANv2 checkpoints and benefits from its improved generator.

Getting started

Download the model checkpoints from here and place them in ./checkpoints.

The checkpoints include e4e_ffhq_encode.pt (e4e encoder for FFHQ), shape_predictor_68_face_landmarks.dat (dlib landmark model for face alignment), and stylegan2-ffhq-config-f.pt (StyleGANv2 generator).

Requirements

I have tested on:

Steps for preparing an attribute vector

Train an attribute classifier

Data preparation

python preparing.py --attr=[ATTRIBUTE] --n_samples=20000
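
For reference, here is a minimal sketch of what a preparation step like this might do: sample latent codes and images so they can be labelled by the attribute classifier. It assumes the rosinality stylegan2-pytorch Generator API and the checkpoint above; the labelling part is omitted and the checkpoint key is an assumption, so this is not necessarily exactly what preparing.py does.

```python
# Hedged sketch: sample w codes and images with the StyleGANv2 generator for
# later attribute labelling. Assumes rosinality's stylegan2-pytorch API.
import torch
from model import Generator  # from rosinality/stylegan2-pytorch

device = 'cuda'
g = Generator(1024, 512, 8).to(device).eval()
ckpt = torch.load('checkpoints/stylegan2-ffhq-config-f.pt')
g.load_state_dict(ckpt['g_ema'], strict=False)  # 'g_ema' key is an assumption

mean_w = g.mean_latent(4096)
w_codes, images = [], []
with torch.no_grad():
    for _ in range(20000 // 8):
        z = torch.randn(8, 512, device=device)
        img, w = g([z], return_latents=True, truncation=0.7, truncation_latent=mean_w)
        w_codes.append(w[:, 0].cpu())   # one 512-d w vector per sample
        images.append(img.cpu())
```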

Solve attribute vector

python solve.py --attr=[ATTRIBUTE] --code=w

Replace placeholders such as [ATTRIBUTE] with your attribute names.
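
Following InterfaceGAN, the attribute vector is the unit normal of a linear hyperplane that separates latent codes with and without the attribute. Below is a hedged sketch of that solving step using scikit-learn; the intermediate file names are illustrative, not the repo's actual ones.

```python
# InterfaceGAN-style boundary solving: fit a linear SVM on labelled w codes and
# keep the unit normal of the separating hyperplane as the attribute direction.
import numpy as np
from sklearn import svm

w_codes = np.load('w_codes.npy')   # (N, 512) latent codes, hypothetical file
labels = np.load('labels.npy')     # (N,) binary attribute labels from the classifier

clf = svm.LinearSVC(C=1.0, max_iter=10000)
clf.fit(w_codes, labels)

direction = clf.coef_.reshape(1, -1).astype(np.float32)
direction /= np.linalg.norm(direction)   # unit attribute vector
np.save('boundary_w.npy', direction)
```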

Testing on generated images

Single-attribute manipulation only.

python manipulation.py --attr=[ATTRIBUTE]

Results will be saved to ./outputs.
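
Conceptually, this single-attribute edit is a linear move along the solved direction, w_edit = w + alpha * n. A hedged sketch under the same rosinality generator assumption (the boundary file name is illustrative):

```python
# Sketch of single-attribute manipulation: walk a sampled w code along the
# solved unit direction and decode each step with the generator.
import numpy as np
import torch
from model import Generator  # from rosinality/stylegan2-pytorch

g = Generator(1024, 512, 8).cuda().eval()
g.load_state_dict(torch.load('checkpoints/stylegan2-ffhq-config-f.pt')['g_ema'], strict=False)
n = torch.from_numpy(np.load('boundary_w.npy')).float().cuda()  # (1, 512) attribute vector

with torch.no_grad():
    z = torch.randn(1, 512, device='cuda')
    _, w = g([z], return_latents=True)                     # (1, n_latent, 512)
    for alpha in (-3, -1, 0, 1, 3):
        img, _ = g([w + alpha * n], input_is_latent=True, randomize_noise=False)
```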

Inference

Provide a facial image for semantic editing, and make sure the checkpoints above are in place.

python inference.py --input=[IMAGE_PATH] --attr=[ATTRIBUTE] --alpha=3 --conditions=[ATTRIBUTES] (optional)

Results will be saved to ./outputs.
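
For real images, inference presumably follows the usual e4e-based pipeline: align the face with the dlib landmark model, invert it to a W+ code with e4e, shift the code along the attribute direction, and decode with StyleGANv2. When --conditions is given, InterfaceGAN's conditional manipulation projects the primary direction to be orthogonal to the condition directions, so those attributes stay roughly fixed during the edit. A rough outline with hypothetical helper names (not this repo's exact API):

```python
# Illustrative outline of real-image editing; align_face, e4e and generator are
# placeholders for the repo's own components, not its actual function names.
import numpy as np
import torch

def edit_real_image(image_path, boundary_path, alpha, align_face, e4e, generator):
    aligned = align_face(image_path)                        # dlib 68-landmark alignment/crop
    w_plus = e4e(aligned)                                   # (1, 18, 512) W+ code from e4e
    n = torch.from_numpy(np.load(boundary_path)).float().to(w_plus.device)
    w_edit = w_plus + alpha * n                             # same linear edit as above
    edited, _ = generator([w_edit], input_is_latent=True, randomize_noise=False)
    return edited
```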

Acknowledgements

The StyleGANv2 code is borrowed from the PyTorch implementation by @rosinality. The e4e projection is largely adapted from encoder4editing.