BokehMe: When Neural Rendering Meets Classical Rendering (CVPR 2022 Oral)
Apache License 2.0

Juewen Peng1, Zhiguo Cao1, Xianrui Luo1, Hao Lu1, Ke Xian1*, Jianming Zhang2

1Huazhong University of Science and Technology, 2Adobe Research

Project | Paper | Supp | Poster | Video | Data

This repository is the official PyTorch implementation of the CVPR 2022 paper "BokehMe: When Neural Rendering Meets Classical Rendering".

NOTE: There is a citation mistake in the conference version of the paper. In Section 4.1, the disparity maps of the EBB400 dataset are predicted by MiDaS [1], not DPT [2].

[1] Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer
[2] Vision Transformers for Dense Prediction

Installation

```
git clone https://github.com/JuewenPeng/BokehMe.git
cd BokehMe
pip install -r requirements.txt
```

Usage

```
python demo.py --image_path 'inputs/21.jpg' --disp_path 'inputs/21.png' --save_dir 'outputs' --K 60 --disp_focus 90/255 --gamma 4 --highlight
```

See demo.py for more details.
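The `--K` and `--disp_focus` arguments set the blur strength and the refocused disparity plane. In the paper's formulation, the renderer derives a signed defocus map from the input disparity; here is a minimal NumPy sketch of that relationship (function and variable names are illustrative, not the repo's API):

```python
import numpy as np

def defocus_map(disp, K, disp_focus):
    # Signed per-pixel defocus: zero at the refocused disparity plane,
    # growing in magnitude with distance from it (sign distinguishes
    # foreground from background). Illustrative sketch, not demo.py's code.
    return K * (disp - disp_focus)

# Normalized disparity ramp in [0, 1], using the demo's parameters.
disp = np.linspace(0.0, 1.0, 256)
defocus = defocus_map(disp, K=60, disp_focus=90 / 255)
```

Pixels whose disparity equals `disp_focus` get zero defocus and stay sharp; larger `K` scales the blur everywhere else.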

BLB Dataset

The BLB dataset is synthesized with Blender 2.93. It contains 10 scenes, each consisting of an all-in-focus image, a disparity map, a stack of bokeh images rendered with 5 blur amounts and 10 refocused disparities, and a parameter file. We additionally provide 15 corrupted disparity maps per scene (generated by Gaussian blur, dilation, and erosion). The BLB dataset can be downloaded from Google Drive or Baidu Netdisk.
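The corrupted disparity maps mimic typical errors of monocular depth predictors: over-smoothed edges (blur) and boundaries bleeding outward or inward (dilation/erosion). A minimal NumPy sketch of the three corruption types; the kernel sizes and sigma below are illustrative, not the values used to build the dataset:

```python
import numpy as np

def gaussian_blur(disp, sigma):
    # Separable Gaussian blur via two 1-D convolutions (illustrative).
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, disp)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, out)

def _window_reduce(disp, size, reduce_fn):
    # Per-pixel reduce_fn over a size x size neighborhood (edge-padded).
    p = size // 2
    pad = np.pad(disp, p, mode='edge')
    h, w = disp.shape
    windows = np.lib.stride_tricks.sliding_window_view(pad, (size, size))
    return reduce_fn(windows.reshape(h, w, -1), axis=-1)

def dilate(disp, size=3):
    # Grayscale dilation: local maximum (expands near disparities).
    return _window_reduce(disp, size, np.max)

def erode(disp, size=3):
    # Grayscale erosion: local minimum (shrinks near disparities).
    return _window_reduce(disp, size, np.min)
```

Applying each operator to a clean disparity map yields corruption variants for evaluating robustness, as done for the 15 corrupted maps per scene.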

Instructions:

Citation

If you find our work useful in your research, please cite our paper.

@inproceedings{Peng2022BokehMe,
  title = {BokehMe: When Neural Rendering Meets Classical Rendering},
  author = {Peng, Juewen and Cao, Zhiguo and Luo, Xianrui and Lu, Hao and Xian, Ke and Zhang, Jianming},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2022}
}