HiFaceGAN: Face Renovation via Collaborative Suppression and Replenishment
Lingbo Yang, Chang Liu, Pan Wang, Shanshe Wang, Peiran Ren, Siwei Ma, Wen Gao
Download FFHQ, resize the images to 512x512, and hold out ids [65000, 70000) for testing. We only use the first 10000 images for training, which takes 2~3 days on a P100 GPU; training with the full FFHQ is possible, but could take weeks.
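For reference, here is a minimal sketch of the resize-and-split step, assuming the raw FFHQ images sit in a flat folder and are named by index (e.g. 00000.png to 69999.png); the folder paths below are placeholders, not part of this repo:

```python
# Hypothetical resize-and-split helper; paths and naming are assumptions about your local FFHQ copy.
import os
from PIL import Image

SRC = 'ffhq_raw'                 # original FFHQ images, e.g. 00000.png ... 69999.png
TRAIN_DIR = 'ffhq512/train'
TEST_DIR = 'ffhq512/test'
os.makedirs(TRAIN_DIR, exist_ok=True)
os.makedirs(TEST_DIR, exist_ok=True)

for name in sorted(os.listdir(SRC)):
    idx = int(os.path.splitext(name)[0])
    if idx < 10000:                  # first 10000 images for training
        dst = TRAIN_DIR
    elif 65000 <= idx < 70000:       # ids [65000, 70000) for testing
        dst = TEST_DIR
    else:
        continue
    img = Image.open(os.path.join(SRC, name)).convert('RGB')
    img.resize((512, 512), Image.LANCZOS).save(os.path.join(dst, name))
```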
After that, run degrade.py to generate paired images for training. You need to specify the degradation type and the input root in the script first.
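The actual degradation types are defined in degrade.py. Purely as an illustration of what a synthetic degradation mixture can look like (not the script's exact pipeline), one might combine blur, downsampling, noise and JPEG compression:

```python
# Illustrative synthetic degradation for paired training data; parameters are made up.
import io
import numpy as np
from PIL import Image, ImageFilter

def degrade(hq: Image.Image) -> Image.Image:
    lq = hq.filter(ImageFilter.GaussianBlur(radius=3))                  # blur
    lq = lq.resize((hq.width // 4, hq.height // 4), Image.BICUBIC)      # 4x downsample
    arr = np.asarray(lq).astype(np.float32)
    arr += np.random.normal(0, 5, arr.shape)                            # additive Gaussian noise
    lq = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    buf = io.BytesIO()
    lq.save(buf, format='JPEG', quality=30)                             # JPEG compression
    lq = Image.open(buf).convert('RGB')
    return lq.resize((hq.width, hq.height), Image.BICUBIC)              # back to 512x512

hq = Image.open('ffhq512/train/00000.png').convert('RGB')
degrade(hq).save('lq_00000.png')
```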
The configuration is stored in options/config_hifacegan.py. The options should be self-explanatory, but feel free to open an issue anytime.
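As a rough sketch only of the kind of fields you will be editing (the real option names live in options/config_hifacegan.py and may differ; everything below except netG=lipspade and ngf=48, which are mentioned in the checkpoint notes further down, is a placeholder):

```python
# Illustrative excerpt; the real options/config_hifacegan.py defines its own names.
name = 'hifacegan_ffhq'      # experiment / checkpoint folder name
dataroot = './ffhq512'       # root of the paired LQ/HQ data produced by degrade.py
netG = 'lipspade'            # generator architecture (required for the released checkpoints)
ngf = 48                     # base channel width; reduce to save GPU memory
batchSize = 4                # reduce (down to 1) on 12 GB cards
gpu_ids = [0]
```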
python train.py # A fool-proof training script
python test.py # Test on synthetic dataset
python test_nogt.py # Test on real-world images
python two_source_test.py # Visualization of Fig 5
Download the pretrained checkpoints, unzip them, and put them under ./checkpoints. Then change the names in the configuration file accordingly.
BaiduNetDisk: Extraction code:cxp0
Note: the released checkpoints are trained on the synthetic degradations produced by degrade.py, so don't expect them to handle real-world LQ face images. You can try to fine-tune them with additionally collected samples, though.
There are two face_renov checkpoints trained under different degradation mixtures. Unfortunately I have forgotten which one was used for our paper, so just try both and select the better one. This could also give you a hint about how our model behaves under a different degradation setting :)
To load the checkpoints, set netG=lipspade and ngf=48 inside the configuration file. In case of loading failure, don't hesitate to submit an issue or email me.
Please find the following scripts in the metrics_package folder:
main.py: GPU-based PSNR, SSIM, MS-SSIM and FID.
face_dist.py: CPU-based face embedding distance (FED) and landmark localization error (LLE).
PerceptualSimilarity\main.py: GPU-based LPIPS.
niqe\niqe.py: NIQE, CPU-based, no reference.
Note:
Make sure the result folder paths end with /; the results will be displayed on screen and saved in a txt file.
If main.py is too heavy for you, reduce bs=250 at line 79.
The face_dist.py script runs with 8 parallel subprocesses, which could cause errors in certain environments. In that case, just disable the multiprocessing and replace it with a for loop (this would take 2~3 hours for 5k images; you may want to wrap the loop in tqdm to reduce your anxiety); a minimal loop sketch is given below.
Please refer to benchmark.md for benchmark experimental settings and performance comparison.
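If you need the single-process fallback for face_dist.py mentioned above, a minimal sketch looks like this (process_one and image_pairs are stand-ins for whatever the script actually names its worker function and input list):

```python
# Hypothetical single-process replacement for the 8-subprocess pool in face_dist.py.
from tqdm import tqdm

def process_one(pair):            # placeholder: compute FED/LLE for one (GT, result) pair
    gt_path, result_path = pair
    return 0.0

image_pairs = [('gt/%05d.png' % i, 'results/%05d.png' % i) for i in range(5000)]

# Original (roughly): results = Pool(8).map(process_one, image_pairs)
results = []
for pair in tqdm(image_pairs):    # single process; expect ~2-3 hours for 5k real images
    results.append(process_one(pair))
```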
Memory Cost: The default model is designed to fit in a P100 card with 16 GB memory. For a Titan X or 1080 Ti card with 12 GB memory, you can reduce ngf to 48, or further set batchSize=1, without a significant performance drop.
Inference Speed: Currently the inference script is single-threaded and runs at 5 fps. To further increase the inference speed, possible options are using a multi-threaded dataloader, batch inference, and combining normalization and convolution operations.
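As a rough sketch of the multi-worker loading plus batch inference idea (this is not the repo's actual test pipeline; the dataset and model below are placeholders):

```python
# Illustrative multi-worker loading + batch inference, not the repo's test.py.
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholders standing in for the repo's LQ test dataset and trained generator.
test_dataset = TensorDataset(torch.randn(16, 3, 512, 512))
model = torch.nn.Conv2d(3, 3, 3, padding=1).cuda()

loader = DataLoader(test_dataset,
                    batch_size=4,       # batch inference instead of one image at a time
                    num_workers=4,      # multi-process loading overlaps disk I/O with GPU work
                    pin_memory=True)

model.eval()
with torch.no_grad():
    for (lq,) in loader:
        sr = model(lq.cuda(non_blocking=True))
        # ...save each restored image in `sr` to disk
```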
Copyright © 2020, Alibaba Group. All rights reserved. This code is intended for academic and educational use only; any commercial usage without authorization is strictly prohibited.
Please kindly cite our paper when using this project for your research.
@article{Yang2020HiFaceGANFR,
title={HiFaceGAN: Face Renovation via Collaborative Suppression and Replenishment},
author={Lingbo Yang and C. Liu and P. Wang and Shanshe Wang and P. Ren and Siwei Ma and W. Gao},
journal={Proceedings of the 28th ACM International Conference on Multimedia},
year={2020}
}
The replenishment module borrows the implementation of SPADE.