This implementation is based on InterfaceGAN (CVPR 2020). Instead of the StyleGANv1 generator used by the original InterfaceGAN, the codebase has been adapted to StyleGANv2, staying compatible with the method while benefiting from StyleGANv2's improvements.
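InterfaceGAN's core observation is that a binary facial attribute is roughly linearly separable in StyleGAN's latent space, so the unit normal of a separating hyperplane can serve as an editing direction n, and editing reduces to w' = w + alpha * n. The paper fits a linear SVM; the numpy-only sketch below uses a logistic-regression stand-in on synthetic data just to illustrate the idea (the dimensions, data, and variable names here are illustrative, not this repo's code):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_samples = 64, 1000          # real StyleGAN2 w codes are 512-dimensional

# Synthetic stand-in for (latent code, classifier label) pairs such as those
# produced by the preparation step; the "attribute" here lives along axis 0.
w = rng.standard_normal((n_samples, dim))
y = (w[:, 0] > 0).astype(float)

# Plain gradient descent on the logistic loss finds a separating hyperplane
# (InterfaceGAN itself uses a linear SVM for this step).
theta = np.zeros(dim)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-w @ theta))
    theta -= 0.1 * (w.T @ (p - y)) / n_samples

n = theta / np.linalg.norm(theta)  # unit attribute vector
w_edit = w[0] + 3.0 * n            # edit the first code with strength alpha = 3
```

The recovered direction concentrates on the axis that actually encodes the attribute, which is exactly the property the attribute vectors solved later in this README rely on.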
Download the model checkpoints from here and place them in ./checkpoints. The checkpoints include e4e_ffhq_encode.pt, shape_predictor_68_face_landmarks.dat, and stylegan2-ffhq-config-f.pt.
I have tested on:
Place the pretrained attribute classifiers in ./checkpoints/classifiers and name them [ATTRIBUTE].pth.
python preparing.py --attr=[ATTRIBUTE] --n_samples=20000
python solve.py --attr=[ATTRIBUTE] --code=w
The solved attribute vectors will be saved to ./checkpoints/attribute_vectors. Adjust placeholders such as [ATTRIBUTE] according to your specific attribute names.
Currently, only single-attribute manipulation is supported.
python manipulation.py --attr=[ATTRIBUTE]
Results will be saved to ./outputs.
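Under the hood, manipulation amounts to moving a latent code along the solved attribute vector; sweeping the strength alpha yields a strip of progressively edited codes. A minimal sketch (the function name is illustrative, not this repo's API):

```python
import numpy as np

def sweep(w, n, alphas):
    """Return one edited latent code per editing strength in `alphas`."""
    n = n / np.linalg.norm(n)                  # attribute vectors are unit-normalized
    return np.stack([w + a * n for a in alphas])

# Example: edit a 512-dim code along the first coordinate axis.
codes = sweep(np.zeros(512), np.eye(512)[0], alphas=(-3.0, 0.0, 3.0))
```

Negative alphas push the attribute in the opposite direction, and alpha=0 leaves the code unchanged.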
Provide a facial image for semantic editing and make sure the checkpoints above are prepared.
python inference.py --input=[IMAGE_PATH] --attr=[ATTRIBUTE] --alpha=3 --conditions=[ATTRIBUTES] (optional)
Results will be saved to ./outputs.
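The --conditions flag corresponds to InterfaceGAN's conditional manipulation: the primary attribute vector is projected to be orthogonal to the condition attribute vectors, so that editing one attribute disturbs the conditioned ones less. A numpy sketch of that projection, assuming each attribute vector is a 1-D array (names are illustrative):

```python
import numpy as np

def condition_direction(primary, conditions):
    """Project `primary` orthogonal to every vector in `conditions`."""
    n = primary / np.linalg.norm(primary)
    basis = []
    for c in conditions:
        c = c / np.linalg.norm(c)
        for b in basis:                 # Gram-Schmidt against earlier conditions
            c = c - (c @ b) * b
        if np.linalg.norm(c) > 1e-8:    # skip conditions already spanned
            c = c / np.linalg.norm(c)
            basis.append(c)
            n = n - (n @ c) * c         # strip the conditioned component
    return n / np.linalg.norm(n)
```

For example, conditioning a direction [1, 1, 0] on [0, 1, 0] removes the second component and returns [1, 0, 0], i.e. an edit that no longer moves along the conditioned attribute.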
The StyleGANv2 code is borrowed from the PyTorch implementation by @rosinality. The e4e projection code is heavily based on encoder4editing.