
PyTorch implementation of the ICCV'23 paper "Generalizing Event-based Motion Deblurring in Real-World Scenarios"

GEM - Generalizing Event-Based Motion Deblurring in Real-World Scenarios

Paper | Supp | Video

Event-based motion deblurring has shown promising results by exploiting low-latency events. However, current approaches are limited in their practical usage, as they assume the same spatial resolution of inputs and specific blurriness distributions. This work addresses these limitations and aims to generalize the performance of event-based deblurring in real-world scenarios. We propose a scale-aware network that allows flexible input spatial scales and enables learning from different temporal scales of motion blur. A two-stage self-supervised learning scheme is then developed to fit real-world data distribution. By utilizing the relativity of blurriness, our approach efficiently ensures the restored brightness and structure of latent images and further generalizes deblurring performance to handle varying spatial and temporal scales of motion blur in a self-distillation manner. Our method is extensively evaluated, demonstrating remarkable performance, and we also introduce a real-world dataset consisting of multi-scale blurry frames and events to facilitate research in event-based deblurring.

Environment setup

You can create a new Anaconda environment as follows.

conda create -n gem python=3.7
conda activate gem

Clone this repository.

git clone git@github.com:XiangZ-0/GEM.git

Install the required dependencies and compile Deformable Convolution V2.

cd GEM
pip install -r requirements.txt
cd codes/model/DCN_v2/
sh make.sh
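
If the compilation finishes without errors, a quick forward pass through a deformable convolution layer can confirm the extension works. The snippet below is only a minimal sanity check, assuming the build exposes the standard DCNv2 interface (a `DCN` layer importable from `dcn_v2`); adjust the import to match the layout under codes/model/DCN_v2/ if it differs.

```python
# Minimal sanity check for the compiled DCNv2 extension (requires a CUDA GPU).
# Assumption: the build exposes the standard DCNv2 `DCN` layer; run this from
# codes/model/DCN_v2/ or adjust the import path to your layout.
import torch
from dcn_v2 import DCN

x = torch.randn(2, 64, 128, 128).cuda()
dcn = DCN(64, 64, kernel_size=(3, 3), stride=1, padding=1, deformable_groups=2).cuda()
out = dcn(x)
print(out.shape)  # expected: torch.Size([2, 64, 128, 128])
```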

Download model and data

Pretrained models and datasets can be downloaded via OneDrive.
In our paper, we conduct experiments on three types of data, including our real-world dataset MS-RBD, which consists of multi-scale blurry frames and events.

[Figures: MS-RBD capture system, overview of MS-RBD, and examples of MS-RBD]

Easy start

Initialization

Test

For testing on your own datasets, we recommend packing your data in the MS-RBD format and then modifying the following parameters in configs/msrbd_test.yaml according to your needs.

- load_dir:        # change it to your path to load checkpoints
- root_path:       # change it to your dataset directory
- save_path:       # change it to your result directory
- scale_factor:    # change it according to the spatial resolution ratio of images over events in your dataset (e.g., 2 if your blurry frames have twice the spatial resolution of your events)

Then it is good to go.
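
For reference, these fields can also be filled in programmatically before launching a test. The snippet below is only an illustrative sketch (not part of the repository); it assumes the four fields above are top-level keys in configs/msrbd_test.yaml, and all paths are placeholders.

```python
# Illustrative sketch: fill in the fields of configs/msrbd_test.yaml with PyYAML.
# Assumptions: the four fields are top-level keys; the paths below are placeholders.
import yaml

cfg_path = "configs/msrbd_test.yaml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg["load_dir"] = "./checkpoints/"          # where your pretrained checkpoints live
cfg["root_path"] = "./datasets/my_msrbd/"   # your dataset packed in the MS-RBD format
cfg["save_path"] = "./results/"             # where deblurred results will be written
cfg["scale_factor"] = 2                     # frames have 2x the spatial resolution of events

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f)
```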

Train

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{zhang2023generalizing,
  title={Generalizing Event-Based Motion Deblurring in Real-World Scenarios},
  author={Zhang, Xiang and Yu, Lei and Yang, Wen and Liu, Jianzhuang and Xia, Gui-Song},
  year={2023},
  booktitle={ICCV},
}

Acknowledgement

This code is built on the PyTorch Lightning template, LIIF, and Deformable Convolution V2.