Paper (arXiv) | Supplementary Material | [Project Page]()
We trained and tested the code with the dependencies listed in `requirements.txt`; install them with:

```bash
pip install -r requirements.txt
```
Download our pretrained models and test datasets. We also provide our FSR results from the original paper.
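For reference, the paths assumed by the test commands below look roughly like this (inferred from the commands themselves, not an official layout):

```
./pretrain_models/wfen/wfen_best.pth                   # pretrained weights
/path/to/datasets/test_datasets/CelebA1000/LR_x8_up/   # CelebA LR inputs (x8)
/path/to/datasets/test_datasets/Helen50/LR_x8_up/      # Helen LR inputs (x8)
```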
```bash
# On the CelebA test set
python test.py --gpus 1 --model wfen --name wfen \
--load_size 128 --dataset_name single --dataroot /path/to/datasets/test_datasets/CelebA1000/LR_x8_up/ \
--pretrain_model_path ./pretrain_models/wfen/wfen_best.pth \
--save_as_dir results_celeba/wfen
```

```bash
# On the Helen test set
python test.py --gpus 1 --model wfen --name wfen \
--load_size 128 --dataset_name single --dataroot /path/to/datasets/test_datasets/Helen50/LR_x8_up/ \
--pretrain_model_path ./pretrain_models/wfen/wfen_best.pth \
--save_as_dir results_helen/wfen
```
We provide evaluation code in the script `test.sh` for calculating PSNR/SSIM/LPIPS/VIF/Params/FLOPs scores.
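For illustration, here is a minimal sketch of how the PSNR/SSIM/LPIPS scores might be computed over a results folder. This is not the repo's `test.sh`; the `scikit-image` (0.19+), `lpips`, and `imageio` dependencies, and the results/ground-truth paths, are assumptions:

```python
# Illustrative metric sketch -- not the repo's test.sh.
# Assumed deps: scikit-image>=0.19, lpips, torch, imageio, numpy.
import glob, os
import imageio.v2 as imageio
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

loss_fn = lpips.LPIPS(net='alex')  # LPIPS with an AlexNet backbone

def to_tensor(img):
    # HWC uint8 in [0, 255] -> NCHW float in [-1, 1], as LPIPS expects
    return (torch.from_numpy(img).permute(2, 0, 1).float() / 127.5 - 1.0).unsqueeze(0)

psnrs, ssims, lpipss = [], [], []
gt_dir = '/path/to/datasets/test_datasets/CelebA1000/HR'  # assumed GT directory
for sr_path in sorted(glob.glob('results_celeba/wfen/*.png')):  # assumed layout
    sr = imageio.imread(sr_path)
    gt = imageio.imread(os.path.join(gt_dir, os.path.basename(sr_path)))
    psnrs.append(peak_signal_noise_ratio(gt, sr, data_range=255))
    ssims.append(structural_similarity(gt, sr, channel_axis=-1, data_range=255))
    with torch.no_grad():
        lpipss.append(loss_fn(to_tensor(sr), to_tensor(gt)).item())

print(f'PSNR {np.mean(psnrs):.2f}  SSIM {np.mean(ssims):.4f}  LPIPS {np.mean(lpipss):.4f}')
```

Params and FLOPs are properties of the model itself, so they would be measured with a profiler on the network rather than from the output images.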
The commands used to train the released models are provided in the script `train.sh`. Here are some training tips:

- `--dataroot`: set this to the path where your training images are stored.
- `--name`: use a distinct name for each experiment. TensorBoard records with the same name are moved to `check_points/log_archive`, and the weight directory only keeps the weight history of the latest experiment with that name.
- `--batch_size`: the training batch size (32 in the command below).
- `--gpus`: the number of GPUs used for training. The script will use GPUs with more available memory first; to pin specific GPU indices, uncomment the `export CUDA_VISIBLE_DEVICES=` line (a sketch of this memory-based selection follows this list).
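For illustration, a minimal sketch of how such memory-based GPU selection could work, using `nvidia-smi`. This is a hypothetical example, not the repo's actual selection code:

```python
# Hypothetical sketch of picking the GPUs with the most free memory
# (not the repo's actual selection logic).
import subprocess

def pick_gpus(n):
    """Return indices of the n GPUs with the most free memory, best first."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.free", "--format=csv,noheader,nounits"],
        text=True)
    free = [int(line) for line in out.strip().splitlines()]  # MiB free per GPU
    return sorted(range(len(free)), key=lambda i: free[i], reverse=True)[:n]

if __name__ == "__main__":
    print(",".join(map(str, pick_gpus(2))))  # e.g. "1,0" for CUDA_VISIBLE_DEVICES
```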
```bash
# Train code
CUDA_VISIBLE_DEVICES=0,1 python train.py --gpus 2 --name wfen --model wfen \
--Gnorm "bn" --lr 0.0002 --beta1 0.9 --scale_factor 8 --load_size 128 \
--dataroot /path/to/datasets/CelebA --dataset_name celeba --batch_size 32 --total_epochs 150 \
--visual_freq 100 --print_freq 10 --save_latest_freq 500
```
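To monitor training, TensorBoard can be pointed at the log directory (assuming logs land under `check_points/`, as the archiving note above suggests):

```bash
tensorboard --logdir ./check_points
```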
This code is built on Face-SPARNet. We thank the authors for sharing their code.
If you have any questions, please email lewj2408@gmail.com or cswjli@bupt.edu.cn.