This repository speeds up the Super-Resolution (SR) inference process for Anime videos by fully harnessing the potential of your GPU. It is built upon the foundations of Real-CUGAN (https://github.com/bilibili/ailab/blob/main/Real-CUGAN/README_EN.md) and Real-ESRGAN (https://github.com/xinntao/Real-ESRGAN).
I've implemented the SR process using TensorRT, together with a custom frame-division algorithm designed to accelerate it. The algorithm includes a video redundancy-jump mechanism, akin to inter-prediction in video compression, and a momentum mechanism.
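The redundancy-jump idea can be sketched as follows. This is a minimal illustration, not the repository's actual implementation, and all names here are hypothetical: if a frame differs little from the previous one, reuse the previous SR output instead of running the network, and a momentum term (an exponential moving average of recent frame differences) lets the skip decision adapt to how much motion the scene currently has.

```python
import numpy as np

class RedundancyJumper:
    """Toy frame-skipping controller with a momentum-smoothed difference estimate."""

    def __init__(self, threshold=2.0, momentum=0.9):
        self.threshold = threshold    # mean-abs-diff below this counts as "redundant"
        self.momentum = momentum      # EMA weight for recent frame differences
        self.ema_diff = 0.0
        self.prev_frame = None
        self.prev_output = None

    def process(self, frame, sr_fn):
        """Return an SR frame, reusing the previous result when the input barely changed."""
        if self.prev_frame is not None:
            diff = float(np.mean(np.abs(frame.astype(np.float32)
                                        - self.prev_frame.astype(np.float32))))
            # Momentum: smooth the difference signal over recent frames
            self.ema_diff = self.momentum * self.ema_diff + (1 - self.momentum) * diff
            if diff < self.threshold and self.ema_diff < self.threshold:
                # Redundant frame: "jump" over the network and reuse the last output
                self.prev_frame = frame
                return self.prev_output
        self.prev_frame = frame
        self.prev_output = sr_fn(frame)  # only run the expensive SR network here
        return self.prev_output
```

Static dialogue scenes in anime produce long runs of nearly identical frames, so under this scheme the SR network only runs when the content actually changes.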
Additionally, I've employed FFmpeg to decode the video at a reduced frame rate (FPS), enabling faster processing with an almost imperceptible drop in quality. To further optimize performance, I've used both multiprocessing and multithreading to fully utilize all available computational resources.
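Reduced-FPS decoding can be done with FFmpeg's `fps` filter. The sketch below only builds the command line; the paths and the target rate are placeholders, not the repository's actual settings:

```python
def build_decode_cmd(src, dst, target_fps=24):
    """Build an ffmpeg command that re-times the video to a reduced frame rate."""
    return [
        "ffmpeg", "-i", src,
        "-vf", f"fps={target_fps}",  # drop frames down to the target rate
        dst,
    ]
```

The resulting list can be passed to `subprocess.run`; dropping, say, a 30 FPS source to 24 FPS means the SR network processes 20% fewer frames.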
For a more detailed understanding of the implementation and algorithms used, I invite you to refer to this presentation slide: https://docs.google.com/presentation/d/1Gxux9MdWxwpnT4nDZln8Ip_MeqalrkBesX34FVupm2A/edit#slide=id.p.
On my desktop 3060 Ti, it can process 480P Anime video input in real time (Real-CUGAN). In other words, as soon as you finish watching one Anime video, the Super-Resolution (SR) version of the next one is already processed and ready to watch with a single click.
Currently, this repository supports Real-CUGAN (official) and a shallow Real-ESRGAN (the 6-block Anime-image RRDB-Net provided by Real-ESRGAN).
My ultimate goal is to directly utilize the decoding information in the video codec, as in this paper (https://arxiv.org/abs/1603.08968), which is why the name starts with "FAST". Although this repository can already process in real time, it will be continuously maintained and developed.
If you like this repository, please give it a star. Feel free to report any problems to me.
Before:
After 2X scaling:
Skip steps 3 and 4 if you don't want TensorRT, but it greatly increases speed and saves a lot of GPU memory.
Install CUDA. The following is how I installed it:
gedit ~/.bashrc
# Add the following two lines at the end of the opened file (the path may differ, please double-check)
export PATH=/usr/local/cuda-12.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
# Save the file and execute the following in the terminal
source ~/.bashrc
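As an aside, the `${PATH:+:${PATH}}` suffix in the export lines appends `:$PATH` only when `PATH` is already set and non-empty, which avoids leaving a stray trailing colon. A quick illustration:

```shell
unset DEMO
echo "unset: [/usr/local/cuda-12.0/bin${DEMO:+:${DEMO}}]"
# prints: unset: [/usr/local/cuda-12.0/bin]
DEMO=/usr/bin
echo "set:   [/usr/local/cuda-12.0/bin${DEMO:+:${DEMO}}]"
# prints: set:   [/usr/local/cuda-12.0/bin:/usr/bin]
```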
Install cuDNN. The following is how I installed it (making the CUDA include directory writable so the cuDNN headers can be copied in):
cd /usr/local/cuda
# Note: directories need the execute bit, so use 777 here; chmod 666 would break the directory
sudo chmod 777 include
Install TensorRT
gedit ~/.bashrc
# Add the following at the end of the opened file (the path may differ, please double-check)
export LD_LIBRARY_PATH=/home/YOUR_USERNAME/TensorRT-8.6.1.6/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
# Save the file and execute the following in the terminal
source ~/.bashrc
Install torch2trt (don't just `pip install torch2trt`; build it from source as its README describes)
Install the basic Python libraries
pip install -r requirements.txt
Skip steps 3 and 4 if you don't want TensorRT, but it greatly increases speed and saves a lot of GPU memory.
Install CUDA (e.g. https://developer.nvidia.com/cuda-downloads?)
Install cuDNN (copying the bin\, include\, and lib\ folders into the CUDA directory is enough for now)
Install TensorRT (don't install it directly via pip)
Install torch2trt (Don't directly use pip install torch2trt)
Install the basic Python libraries
pip install -r requirements.txt
Adjust config.py to set up your configuration. Usually, editing the Frequently Edited Setting part is enough; please follow the instructions there.
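The frequently edited options typically cover paths and model choice. The names below are illustrative only, not the repository's real variables; always follow the comments in config.py itself:

```python
# Illustrative config fragment -- the real option names live in config.py
input_path = "input_videos/"    # folder with the source Anime videos
output_path = "output_videos/"  # where the SR results are written
model_name = "Real-CUGAN"       # or the shallow 6-block Real-ESRGAN RRDB-Net
scale = 2                       # upscaling factor
use_tensorrt = True             # requires steps 3 and 4 above
```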
Run
python main.py