
Implementation of CVPR'22 paper "Unifying Motion Deblurring and Frame Interpolation with Events"

EVDI - Unifying Motion Deblurring and Frame Interpolation with Events (Academic Use Only)

Paper | Supp | Video

Slow shutter speed and long exposure time of frame-based cameras often cause visual blur and loss of inter-frame information, degrading the overall quality of captured videos. To address this, we present a unified framework of event-based motion deblurring and frame interpolation for blurry video enhancement, where the extremely low latency of events is leveraged to alleviate motion blur and facilitate intermediate frame prediction. Specifically, the mapping relation between blurry frames and sharp latent images is first predicted by a learnable double integral network, and a fusion network is then proposed to refine the coarse results by utilizing the information from consecutive blurry inputs and the concurrent events. By exploring the mutual constraints among blurry frames, latent images, and event streams, we further propose a self-supervised learning framework to enable network training with real-world blurry videos and events.
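For intuition, the learnable double integral builds on the classical event-based double integral (EDI) relation between a blurry frame and its sharp latent images. A hedged sketch of that relation follows; the symbols B, L, c, T, and E(t) are illustrative notation chosen here, not necessarily the paper's:

```latex
% Latent intensity at time t, expressed from a reference latent image
% L(f), the event contrast threshold c, and the integrated events E(t):
L(t) = L(f)\,\exp\!\big(c\,E(t)\big)
% A blurry frame B averages the latent intensities over the exposure T:
B = \frac{1}{T}\int_{f-T/2}^{f+T/2} L(t)\,dt
  = L(f)\cdot\frac{1}{T}\int_{f-T/2}^{f+T/2} \exp\!\big(c\,E(t)\big)\,dt
```

Dividing B by the double integral of the events then recovers the sharp latent image L(f); the learnable network replaces the fixed physical model with a trained mapping.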

Demo

10X (middle) and 100X (right) frame-rate results from one EVDI model.

[News]: Our work on self-supervised deblurring performance generalization has been accepted by ICCV 2023 šŸŽ‰. Feel free to check out and star GEM if it interests you! šŸ˜†

Environment setup

You can create a new Anaconda environment as follows.

conda create -n evdi python=3.7
conda activate evdi

Clone this repository.

git clone git@github.com:XiangZ-0/EVDI.git

Install the required dependencies.

cd EVDI
pip install -r requirements.txt
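After installation, a quick sanity check can confirm the interpreter matches the environment created above. This is a minimal sketch using only the standard library; it does not verify the individual packages from requirements.txt:

```python
import sys

def check_python(min_version=(3, 7)):
    """Return True if the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= min_version

# The conda environment above was created with python=3.7.
print(check_python())
```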

Download model and data

Pretrained models and some example data can be downloaded via Google Drive.
In our paper, we conduct experiments on three types of data.

Quick start

Initialization

Test

Train

If you want to train your own model, please prepare the blurry images and events in the following directory structure (example data is provided in './Database/Raw/' for reference):

<project root>
  |-- Database
  |     |-- Raw
  |     |     |-- Events.txt
  |     |     |-- Exposure_start.txt
  |     |     |-- Exposure_end.txt
  |     |     |-- Blur
  |     |     |     |-- 000000.png
  |     |     |     |-- 000001.png
  |     |     |     |-- ...

After arranging the raw data into the above structure, please pack them into training pairs by running

python Prepare_data.py --input_path=./Database/Raw/ --save_path=./Database/train/ --color_flag=0

Please set --color_flag=1 if you want to use color images. Finally, modify the parameters in 'Train.py' as needed and run

python Train.py

Main Parameters:

Citation

If you find our work useful in your research, please cite:

@inproceedings{zhang2022unifying,
  title={Unifying Motion Deblurring and Frame Interpolation with Events},
  author={Zhang, Xiang and Yu, Lei},
  year={2022},
  booktitle={CVPR},
}