# Motion Deblur by Learning Residual from Events (TMM 2024)

Official PyTorch implementation of "Motion Deblur by Learning Residual from Events" (TMM 2024).

If you like our project, please give us a star ⭐ on GitHub.
**Authors:** Kang Chen and [Lei Yu](http://eis.whu.edu.cn/index/szdwDetail?rsh=00030713&newskind_id=20160320222026165YIdDsQIbgNtoE)βœ‰οΈ from Wuhan University, Wuhan, China. [![IEEE](https://img.shields.io/badge/IEEE-Xplore-blue.svg?logo=IEEE)](https://doi.org/10.1109/TMM.2024.3355630) [![License](https://img.shields.io/badge/License-MIT-yellow)](https://github.com/chenkang455/TRMD) [![GitHub repo stars](https://img.shields.io/github/stars/chenkang455/TRMD?style=flat&logo=github&logoColor=whitesmoke&label=Stars)](https://github.com/chenkang455/TRMD/stargazers)

## πŸ“• Abstract

We propose a Two-stage Residual-based Motion Deblurring (TRMD) framework for an event camera, which converts a blurry image into a sequence of sharp images, leveraging the abundant motion features encoded in events. In the first stage, a residual estimation network is trained to estimate the residual sequence, which measures the intensity difference between the intermediate frame and other frames sampled during the exposure. In the subsequent stage, the previously estimated residuals are combined with the blurry image to reconstruct the deblurred sequence based on the physical model of motion blur.
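
The physical model referenced above can be made concrete with a minimal NumPy sketch. This is our illustration of the idea, not the repository's code; `reconstruct_sequence` and all variable names are ours. The blurry image is modeled as the average of the latent sharp frames sampled during the exposure, so once the residuals relative to the intermediate frame are estimated, the whole sharp sequence follows in closed form:

```python
import numpy as np

# Physical model of motion blur: the blurry image B is the average of N
# sharp latent frames L_i sampled during the exposure:
#     B = (1/N) * sum_i L_i
# Stage one of TRMD estimates residuals r_i = L_i - L_mid (the intensity
# difference between each frame and the intermediate frame); stage two
# recovers the sharp sequence from B and the residuals.

def reconstruct_sequence(blurry, residuals):
    """Recover sharp frames from a blurry image and estimated residuals.

    Since B = mean_i(L_mid + r_i) = L_mid + mean_i(r_i), the intermediate
    frame is L_mid = B - mean_i(r_i), and each sharp frame is L_i = L_mid + r_i.
    """
    l_mid = blurry - residuals.mean(axis=0)
    return l_mid + residuals  # broadcasts to the full sharp sequence

# Toy check: synthesize a blur from known sharp frames, then invert it.
rng = np.random.default_rng(0)
sharp = rng.random((7, 4, 4))          # N = 7 latent frames
blurry = sharp.mean(axis=0)            # physical blur model
residuals = sharp - sharp[3]           # ground-truth residuals (mid index 3)
recovered = reconstruct_sequence(blurry, residuals)
print(np.allclose(recovered, sharp))   # True
```

In the real pipeline the residuals come from the learned residual estimation network rather than from ground truth; the reconstruction step itself is this parameter-free arithmetic.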

## πŸ‘€ Visual Comparisons

### GoPro dataset

*(figure: gopro_table)*

### REBlur dataset

*(figure: reblur_table)*

## 🌏 Setup environment

```shell
git clone https://github.com/chenkang455/TRMD
cd TRMD
pip install -r requirements.txt
```

## πŸ•Ά Download datasets

You can download our trained models, the synthesized GOPRO dataset, and the real event dataset REBlur (from EFNet) from Baidu Netdisk with the password `eluc`.

Unzip GOPRO.zip, then place the downloaded models and datasets (paths defined in config.yaml) according to the following directory structure:

```
β”œβ”€β”€ Data
β”‚   β”œβ”€β”€ GOPRO
β”‚   β”‚   β”œβ”€β”€ train
β”‚   β”‚   └── test
β”‚   β”œβ”€β”€ REBlur
β”‚   β”‚   β”œβ”€β”€ train
β”‚   β”‚   β”œβ”€β”€ test
β”‚   β”‚   β”œβ”€β”€ addition
β”‚   β”‚   └── README.md
β”œβ”€β”€ Pretrained_Model
β”‚   β”œβ”€β”€ RE_Net.pth
β”‚   └── RE_Net_rgb.pth
β”œβ”€β”€ config.yaml
β”œβ”€β”€ ...
```

## 🍭 Configs

Change the data path and other parameters (if needed) in config.yaml.
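
For orientation, such a config might look roughly like the sketch below. The key names are hypothetical placeholders chosen to match the directory structure above, not the repository's actual schema; consult the shipped config.yaml for the real parameter names.

```yaml
# Hypothetical sketch -- key names are illustrative only.
data_path: ./Data/GOPRO                        # or ./Data/REBlur
model_path: ./Pretrained_Model/RE_Net.pth      # RE_Net_rgb.pth for RGB
```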

## πŸŒ… Test with our pre-trained models

## πŸ“Š Training

## πŸ“ž Contact

Should you have any questions, please feel free to contact mrchenkang@whu.edu.cn or ly.wd@whu.edu.cn.

## 🀝 Citation

If you find our work useful in your research, please cite:

```bibtex
@article{chen2024motion,
  title={Motion Deblur by Learning Residual from Events},
  author={Chen, Kang and Yu, Lei},
  journal={IEEE Transactions on Multimedia},
  year={2024},
  publisher={IEEE}
}
```

## πŸ™‡β€ Acknowledgment

Our event representation (SCER) code and REBlur dataset are derived from EFNet. Some of the code for metric testing and module construction is from E-CIR. We appreciate the effort of the contributors to these repositories.