MEMC-Net (Motion Estimation and Motion Compensation Driven Neural Network for Video Interpolation and Enhancement)

Project | Paper

Wenbo Bao, Wei-Sheng Lai, Xiaoyun Zhang, Zhiyong Gao, and Ming-Hsuan Yang

Table of Contents

  1. Introduction
  2. Citation
  3. Requirements and Dependencies
  4. Installation
  5. Testing Pre-trained Video Frame Interpolation Models
  6. Testing Pre-trained Video Enhancement Models
  7. Downloading Results
  8. HD Dataset Results

We propose the Motion Estimation and Motion Compensation (MEMC) Driven Neural Network for video frame interpolation as well as several other video enhancement tasks. A novel adaptive warping layer is proposed to integrate both optical flow and interpolation kernels to synthesize target frame pixels. Our method benefits from the model-driven ME and MC architecture while avoiding conventional hand-crafted design by training on a large amount of video data. Extensive quantitative and qualitative evaluations demonstrate that the proposed method performs favorably against state-of-the-art video frame interpolation and enhancement algorithms on a wide range of datasets.
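
At its core, the adaptive warping layer samples pixels along the optical flow and blends each sampled neighborhood with a learned, per-pixel interpolation kernel. Below is a simplified sketch of that idea, not the repository's fused CUDA implementation: the 3x3 kernel size and softmax normalization are assumptions made for readability (the paper's layer differs), and it is written against a recent PyTorch rather than the 0.2 build required below.

```python
import torch
import torch.nn.functional as F

def adaptive_warp(frame, flow, kernels, k=3):
    """Blend flow-displaced pixels with per-pixel interpolation kernels.

    frame:   (B, C, H, W) source frame
    flow:    (B, 2, H, W) optical flow in pixels (x, y)
    kernels: (B, k*k, H, W) per-pixel kernel logits
    """
    B, C, H, W = frame.shape
    # Build a sampling grid displaced by the flow; grid_sample wants [-1, 1].
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).to(frame)                 # (2, H, W)
    coords = base.unsqueeze(0) + flow                             # (B, 2, H, W)
    gx = 2.0 * coords[:, 0] / (W - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (H - 1) - 1.0
    warped = F.grid_sample(frame, torch.stack((gx, gy), dim=-1),
                           align_corners=True)                    # (B, C, H, W)
    # Weight each warped pixel's k x k neighborhood by its learned kernel.
    patches = F.unfold(warped, k, padding=k // 2).view(B, C, k * k, H, W)
    weights = F.softmax(kernels, dim=1).unsqueeze(1)              # (B, 1, k*k, H, W)
    return (patches * weights).sum(dim=2)                         # (B, C, H, W)

# Example: zero flow reduces the layer to kernel-weighted local filtering.
out = adaptive_warp(torch.rand(1, 3, 64, 64),
                    torch.zeros(1, 2, 64, 64),
                    torch.rand(1, 9, 64, 64))
```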

Citation

If you find the code and datasets useful in your research, please cite:

@article{MEMC-Net,
     title={MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Interpolation and Enhancement},
     author={Bao, Wenbo and Lai, Wei-Sheng and Zhang, Xiaoyun and Gao, Zhiyong and Yang, Ming-Hsuan},
     journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
     doi={10.1109/TPAMI.2019.2941941},
     year={2018}
}

Requirements and Dependencies

The code requires PyTorch 0.2 and a CUDA-capable NVIDIA GPU (the demos select a device via CUDA_VISIBLE_DEVICES); the custom layers in my_package are compiled with ./install.bash as described below.

Installation

Download the repository:

$ git clone https://github.com/baowenbo/MEMC-Net.git

Before building the PyTorch extensions, make sure you have PyTorch 0.2:

$ python -c "import torch; print(torch.__version__)"
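
The custom layers are CUDA ops and the demos below pin a GPU through CUDA_VISIBLE_DEVICES, so it is also worth confirming that a device is visible before building. A quick check:

```python
import torch

# The extensions and demos require PyTorch 0.2 and a visible CUDA device.
assert torch.__version__.startswith("0.2"), torch.__version__
assert torch.cuda.is_available(), "no CUDA device visible"
print("PyTorch", torch.__version__, "| GPUs:", torch.cuda.device_count())
```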

Build our PyTorch extensions:

$ cd MEMC-Net
$ cd my_package 
$ ./install.bash

Testing Pre-trained Video Frame Interpolation Models

Make the model weights and Middlebury dataset directories:

$ cd MEMC-Net
$ mkdir model_weights
$ mkdir MiddleBurySet

Download the pretrained models,

$ cd model_weights
$ wget http://vllab1.ucmerced.edu/~wenbobao/MEMC-Net/MEMC-Net_best.pth 
$ wget http://vllab1.ucmerced.edu/~wenbobao/MEMC-Net/MEMC-Net_s_best.pth
$ wget http://vllab1.ucmerced.edu/~wenbobao/MEMC-Net/MEMC-Net_star_best.pth

and the Middlebury dataset:

$ cd ../MiddleBurySet
$ wget http://vision.middlebury.edu/flow/data/comp/zip/other-color-allframes.zip
$ unzip other-color-allframes.zip
$ wget http://vision.middlebury.edu/flow/data/comp/zip/other-gt-interp.zip
$ unzip other-gt-interp.zip
$ cd ..

Now we are ready to run the demo:

$ CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury.py

Or, if you would like to try the MEMC-Net_s or MEMC-Net* (denoted MEMC-Net_star) models:

$ CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury.py  --netName MEMC_Net_s --pretrained MEMC-Net_s_best.pth
$ CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury.py  --netName MEMC_Net_star --pretrained MEMC-Net_star_best.pth

The interpolated results are under MiddleBurySet/other-result-author/[random number]/, where the random number distinguishes different runs.
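
For a quick quantitative check against the Middlebury ground truth, a minimal PSNR script along these lines works. It is not part of this repository: the ground-truth name frame10i11.png comes from the other-gt-interp package, and you must substitute your run's [random number] directory and the demo's actual output file name.

```python
import numpy as np
from PIL import Image

def psnr(a, b):
    # Peak signal-to-noise ratio for 8-bit images.
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

gt = np.array(Image.open("MiddleBurySet/other-gt-interp/Beanbags/frame10i11.png"))
ours = np.array(Image.open("MiddleBurySet/other-result-author/[random number]/Beanbags/frame10i11.png"))
print("PSNR: %.2f dB" % psnr(gt, ours))
```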

Testing Pre-trained Video Enhancement Models

Make the model weights and Vimeo test set directories:

$ cd MEMC-Net
$ mkdir model_weights
$ mkdir vimeo_sr_test vimeo_dn_test vimeo_db_test

Download the pretrained models,

$ cd model_weights
$ wget http://vllab1.ucmerced.edu/~wenbobao/MEMC-Net/MEMC-Net_SR.pth 
$ wget http://vllab1.ucmerced.edu/~wenbobao/MEMC-Net/MEMC-Net_DN.pth
$ wget http://vllab1.ucmerced.edu/~wenbobao/MEMC-Net/MEMC-Net_DB.pth

and the Vimeo test sets:

$ cd ../vimeo_sr_test
$ wget http://vllab1.ucmerced.edu/~wenbobao/MEMC-Net/vimeo_sr_test.zip
$ unzip vimeo_sr_test.zip 
$ cd ../vimeo_dn_test
$ wget http://vllab1.ucmerced.edu/~wenbobao/MEMC-Net/vimeo_dn_test.zip
$ unzip vimeo_dn_test.zip 
$ cd ../vimeo_db_test
$ wget http://vllab1.ucmerced.edu/~wenbobao/MEMC-Net/vimeo_db_test.zip
$ unzip vimeo_db_test.zip 
$ cd ..

Now we are ready to run the demos:

$ CUDA_VISIBLE_DEVICES=0 python demo_Vimeo_VE.py --netName MEMC_Net_VE --pretrained MEMC-Net_SR.pth --datasetPath ./vimeo_sr_test --datasetName Vimeo_90K_sr --task sr --task_param 4.0
$ CUDA_VISIBLE_DEVICES=0 python demo_Vimeo_VE.py --netName MEMC_Net_VE --pretrained MEMC-Net_DN.pth --datasetPath ./vimeo_dn_test --datasetName Vimeo_90K_dn --task denoise
$ CUDA_VISIBLE_DEVICES=0 python demo_Vimeo_VE.py --netName MEMC_Net_VE --pretrained MEMC-Net_DB.pth --datasetPath ./vimeo_db_test --datasetName Vimeo_90K_db --task deblock

The enhanced results are under vimeo_[sr, dn, db]_test/target_ours/[random number]/, where the random number distinguishes different runs.
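
Because every run writes into a fresh [random number] directory, a small helper can locate the most recent one. This is an illustrative sketch, not part of the repo; adjust the root to the task you ran:

```python
import os

def latest_run(result_root):
    # Pick the most recently modified run directory under result_root.
    runs = [os.path.join(result_root, d) for d in os.listdir(result_root)]
    return max((d for d in runs if os.path.isdir(d)), key=os.path.getmtime)

print(latest_run("vimeo_sr_test/target_ours"))
```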

Downloading Results

Our MEMC-Net model achieves state-of-the-art performance on the UCF101, Vimeo90K, and Middlebury ([eval](http://vision.middlebury.edu/flow/eval/results/results-n1.php) and other) datasets. Download our interpolated results with:

$ wget http://vllab1.ucmerced.edu/~wenbobao/MEMC-Net/UCF101_MEMC-Net_star.zip
$ wget http://vllab1.ucmerced.edu/~wenbobao/MEMC-Net/Vimeo90K_interp_MEMC-Net_star.zip
$ wget http://vllab1.ucmerced.edu/~wenbobao/MEMC-Net/Middlebury_eval_MEMC-Net_star.zip
$ wget http://vllab1.ucmerced.edu/~wenbobao/MEMC-Net/Middlebury_other_MEMC-Net_star.zip

HD Dataset Results

For the HD dataset, the original ground-truth videos can be obtained through this link, and the results of the MEMC-Net* model can be obtained through this link.

Contact

Wenbo Bao; Wei-Sheng (Jason) Lai

License

See the MIT License.