# Event-to-video reconstruction with a SPADE module
This repository contains the code for the paper: SPADE-E2VID: Spatially-Adaptive Denormalization for Event-Based Video Reconstruction.
SPADE_E2VID uses a ConvLSTM and SPADE layers to reconstruct event-based videos. Compared with E2VID, our model has better reconstruction quality in early frames and better contrast across all reconstructions. We provide the code for training and testing.
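The SPADE (spatially-adaptive denormalization) layer modulates normalized decoder features with per-pixel scale and shift maps computed from a conditioning image. Below is a minimal PyTorch sketch of such a block; the channel sizes and the choice of conditioning input (the previously reconstructed frame) are illustrative assumptions, not the repository's exact implementation.

```python
# Minimal SPADE block sketch (PyTorch). Channel sizes and the conditioning
# input are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SPADE(nn.Module):
    def __init__(self, feat_channels, cond_channels, hidden=64):
        super().__init__()
        # Parameter-free normalization of the incoming feature map.
        self.norm = nn.BatchNorm2d(feat_channels, affine=False)
        # Shared conv that embeds the conditioning map (e.g. the previous frame).
        self.shared = nn.Sequential(
            nn.Conv2d(cond_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Per-pixel scale (gamma) and shift (beta) predicted from the conditioning map.
        self.gamma = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)

    def forward(self, x, cond):
        # Resize the conditioning map to the feature map's spatial size.
        cond = F.interpolate(cond, size=x.shape[-2:], mode='nearest')
        actv = self.shared(cond)
        # Spatially-adaptive denormalization: modulate the normalized features.
        return self.norm(x) * (1 + self.gamma(actv)) + self.beta(actv)


if __name__ == "__main__":
    feats = torch.randn(1, 32, 45, 60)        # decoder features
    prev_frame = torch.randn(1, 1, 180, 240)  # conditioning image
    out = SPADE(32, 1)(feats, prev_frame)
    print(out.shape)  # torch.Size([1, 32, 45, 60])
```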
A comparison between SPADE_E2VID (our model) and E2VID.
Non-polarity event-based video reconstruction (Chinese Calendar).
Non-polarity event-based video reconstruction (the Shanghai Jiaotong Gate).
# Prerequisites
Install PyTorch 1.3.0 (or higher), TorchVision, Kornia, OpenCV, tqdm, pathlib, pandas, scikit-image (skimage), NumPy, and pytorch-msssim.
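To quickly check that these prerequisites are available in your environment, a small script like the one below can be used (a sketch, not part of the repository; module names follow the list above, and `pathlib` is omitted because it ships with Python):

```python
# Sanity check: verify the listed prerequisites are importable and print versions.
import importlib

packages = ["torch", "torchvision", "kornia", "cv2", "tqdm",
            "pandas", "skimage", "numpy", "pytorch_msssim"]

for name in packages:
    try:
        mod = importlib.import_module(name)
        print(f"{name:16s} {getattr(mod, '__version__', 'unknown version')}")
    except ImportError:
        print(f"{name:16s} MISSING")
```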
Clone this repository:
```bash
git clone https://github.com/RodrigoGantier/SPADE_E2VID.git
```
* Download the evaluation datasets and weights. Your directory tree should be as follows:<br>
├── SPADE_E2VID<br>
│ ├── cedric_firenet<br>
│ ├── dvs_datasets<br>
│ │ ├── bound_1<br>
│ │ ├── bound_2<br>
│ │ ├── bound_3<br>
│ │ ├── boxes_6dof<br>
│ │ ├── calibration<br>
│ │ ├── dynamic_6dof<br>
│ │ ├── office_zigzag<br>
│ │ ├── poster_6dof<br>
│ │ └── slider_depth<br>
│ ├── models<br>
│ │ ├── E2VID.pth.tar<br>
│ │ ├── E2VID_*.pth<br>
│ │ ├── E2VID_lightweight.pth.tar<br>
│ │ ├── firenet_1000.pth.tar<br>
│ │ ├── SPADE_E2VID.pth<br>
│ │ ├── SPADE_E2VID_2.pth<br>
│ │ └── SPADE_E2VID_ABS.pth<br>
│ ├── my_org_model<br>
│ ├── evs<br>
│ ├── org_e2vid<br>
│ ├── res<br>
│ └── spynet<br>
# Code
To run data evaluation with all models, use the following command:
```bash
python benchmark.py --root_dir /path/to/data/SPADE_E2VID
```
To run data evaluation with a single dataset and SPADE_E2VID (you can choose from 0 to 5):
```bash
python test.py --root_dir /path/to/data/SPADE_E2VID --data_n 0
```
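To evaluate SPADE_E2VID on all six datasets in one pass, a small wrapper such as the following can loop over the `--data_n` indices (a sketch; it simply shells out to `test.py` with the same arguments shown above):

```python
# Run test.py for every dataset index 0-5 in sequence.
import subprocess

ROOT = "/path/to/data/SPADE_E2VID"  # adjust to your local path

for data_n in range(6):
    subprocess.run(
        ["python", "test.py", "--root_dir", ROOT, "--data_n", str(data_n)],
        check=True,
    )
```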
To train SPADE_E2VID you can run:
```bash
python train_e2v.py --root_dir /path/to/data/e2v_public --bs 2
```
Tested on Ubuntu 18.04.4 LTS.
# DVS datasets
If you want to download the datasets one by one, the individual links are below:
calibration dataset
boxes_6dof dataset
slider_depth dataset
poster_6dof dataset
office_zigzag dataset
dynamic_6dof dataset
bund_1 dataset
bund_2 dataset
bund_3 dataset
The pretrained model weights can also be downloaded individually:
SPADE_E2VID
SPADE_E2VID_ABS
E2VID_
E2VID_lightweight
E2VID
FireNet
The training dataset can be downloaded from this link; it contains just 30 samples from the original 1000 samples.
# Citation
```bibtex
@article{cadena2021spade,
  title={SPADE-E2VID: Spatially-Adaptive Denormalization for Event-Based Video Reconstruction},
  author={Cadena, Pablo Rodrigo Gantier and Qian, Yeqiang and Wang, Chunxiang and Yang, Ming},
  journal={IEEE Transactions on Image Processing},
  volume={30},
  pages={2488--2500},
  year={2021},
  publisher={IEEE}
}
```