🔥 For more results, visit our project page 🔥
⭐ If you find this project helpful, please help star this repo. Thanks! 🤗
## Installation

Required packages are listed in `requirements.txt`.
```bash
# Clone this repository. Don't forget to add --recursive!
git clone --recursive https://github.com/jnjaby/KEEP
cd KEEP

# Create and activate a conda environment
conda create -n keep python=3.8 -y
conda activate keep

# Install dependencies
pip3 install -r requirements.txt
python basicsr/setup.py develop
conda install -c conda-forge dlib   # only for face detection or cropping with dlib
conda install -c conda-forge ffmpeg
```
[Optional] If you forgot to clone the repo with `--recursive`, you can update the submodules afterwards by running:
```bash
git submodule init
git submodule update
```
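After installation, a quick import check can confirm the environment is set up. This is only a minimal sanity sketch, assuming `torch` is pulled in by `requirements.txt` and `basicsr` is the local package installed by `setup.py` above:

```bash
# Minimal sanity check (assumes torch comes from requirements.txt and basicsr from the local setup.py)
python -c "import torch, basicsr; print('torch', torch.__version__)"
```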
## Quick Inference
### Download Pre-trained Models
All pretrained models are automatically downloaded during the first inference run.
Alternatively, you can download them manually from [Releases V0.1.0](https://github.com/jnjaby/KEEP/releases/tag/v0.1.0) and place them in the `weights` folder.
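If you prefer fetching the weights from the command line, a sketch with `wget` could look like the following; the asset name `<model_name>.pth` is a placeholder, so check the release page for the actual filenames:

```bash
# Hypothetical sketch: replace <model_name>.pth with an actual asset listed on the release page
mkdir -p weights
wget -P weights https://github.com/jnjaby/KEEP/releases/download/v0.1.0/<model_name>.pth
```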
### Prepare Testing Data
We provide both synthetic (VFHQ) and real (collected) examples in the `assets/examples` folder. If you would like to test your own face videos, place them in the same folder.
You can also download the full synthetic and real test data from [[Google Drive](https://drive.google.com/drive/folders/16yqGKQnjCzrdVK_SQSzFhULEfhSxMUH_?usp=sharing)].
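If your own clips come in a different container or codec, `ffmpeg` (installed during setup) can re-encode them to a standard MP4 before you drop them into `assets/examples`; the paths below are illustrative:

```bash
# Illustrative: re-encode an arbitrary input video to H.264 MP4 for testing
ffmpeg -i my_input.mov -c:v libx264 -pix_fmt yuv420p assets/examples/my_video.mp4
```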
### Inference
**[Note]** If you want to compare against KEEP in your paper, please make sure the face alignment is consistent and run the following command with `--has_aligned` to indicate that the faces are already cropped and aligned. The results will be saved in the `results` folder.
🧑🏻 Video Face Restoration for synthetic data (cropped and aligned face)
```bash
python inference_keep.py -i=./assets/examples/synthetic_1.mp4 -o=results/ --has_aligned --save_video -s=1
```
🎬 Video Face Restoration for real data (in the wild)
```bash
python inference_keep.py -i=./assets/examples/real_1.mp4 -o=results/ --draw_box --save_video -s=1 --bg_upsampler=realesrgan
```
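To restore several in-the-wild clips in one go, the same command can be wrapped in a simple shell loop. This is just a sketch reusing the flags shown above; adjust the input folder to point at your own data:

```bash
# Sketch: run KEEP on every .mp4 in a folder of in-the-wild videos
for f in ./assets/examples/*.mp4; do
    python inference_keep.py -i="$f" -o=results/ --draw_box --save_video -s=1 --bg_upsampler=realesrgan
done
```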
## Citation
If you find our repo useful for your research, please consider citing our paper:
```bibtex
@InProceedings{feng2024keep,
title = {Kalman-Inspired FEaturE Propagation for Video Face Super-Resolution},
author = {Feng, Ruicheng and Li, Chongyi and Loy, Chen Change},
booktitle = {European Conference on Computer Vision (ECCV)},
year = {2024}
}
```
## License and Acknowledgement

This project is open-sourced under the NTU S-Lab License 1.0. Redistribution and use should follow this license. The code framework is mainly modified from CodeFormer; please refer to the original repo for more details on usage and documentation.
## Contact

If you have any questions, please feel free to contact us at `ruicheng002@ntu.edu.sg`.