Juewen Peng1, Zhiguo Cao1, Xianrui Luo1, Hao Lu1, Ke Xian1*, Jianming Zhang2
1Huazhong University of Science and Technology, 2Adobe Research
This repository is the official PyTorch implementation of the CVPR 2022 paper "BokehMe: When Neural Rendering Meets Classical Rendering".
NOTE: There is a citation mistake in the conference version of the paper. In Section 4.1, the disparity maps of the EBB400 dataset are predicted by MiDaS [1], not DPT [2].
[1] Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer
[2] Vision Transformers for Dense Prediction
```bash
git clone https://github.com/JuewenPeng/BokehMe.git
cd BokehMe
pip install -r requirements.txt
```
```bash
python demo.py --image_path 'inputs/21.jpg' --disp_path 'inputs/21.png' --save_dir 'outputs' --K 60 --disp_focus 90/255 --gamma 4 --highlight
```

- `image_path`: path of the input all-in-focus image
- `disp_path`: path of the input disparity map (predicted by DPT in this example)
- `save_dir`: directory to save the results
- `K`: blur parameter
- `disp_focus`: refocused disparity (range from 0 to 1)
- `gamma`: gamma value (range from 1 to 5)
- `highlight`: enhance RGB values of highlights before rendering for stunning bokeh balls

See `demo.py` for more details.
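For intuition, the blur parameter and refocused disparity together define the signed defocus map `K * (disp - disp_focus)` mentioned in the dataset notes below. Here is a minimal NumPy sketch, not code from this repository; `signed_defocus` is a hypothetical helper:

```python
import numpy as np

def signed_defocus(disp: np.ndarray, K: float = 60.0, disp_focus: float = 90 / 255) -> np.ndarray:
    """Signed defocus map: zero at the focal plane; the sign tells front from back."""
    return K * (disp - disp_focus)

# A synthetic disparity ramp refocused at disp_focus = 90/255:
disp = np.linspace(0, 1, 5, dtype=np.float32)
print(signed_defocus(disp))  # negative behind the focal plane, positive in front of it
```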
The BLB dataset is synthesized with Blender 2.93. It contains 10 scenes, each consisting of an all-in-focus image, a disparity map, a stack of bokeh images with 5 blur amounts and 10 refocused disparities, and a parameter file. We additionally provide 15 corrupted disparity maps (produced by Gaussian blur, dilation, and erosion) for each scene. Our BLB dataset can be downloaded from Google Drive or Baidu Netdisk.
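For reference, corruptions of these three types can be generated with OpenCV along the following lines. This is only an illustrative sketch; the kernel sizes are made up and are not the ones used to build the dataset.

```python
import cv2
import numpy as np

disp = np.random.rand(256, 256).astype(np.float32)  # stand-in for a real disparity map

kernel = np.ones((9, 9), np.uint8)           # illustrative structuring element
blurred = cv2.GaussianBlur(disp, (9, 9), 0)  # Gaussian blur
dilated = cv2.dilate(disp, kernel)           # dilation
eroded = cv2.erode(disp, kernel)             # erosion
```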
Instructions:
- Load an image by `image = cv2.imread(IMAGE_PATH, -1)[..., :3].astype(np.float32) ** (1/2.2)`. The loaded images are in BGR order, so you can convert them to RGB by `image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)` if necessary.
- Load a depth map by `depth = cv2.imread(DEPTH_PATH, -1)[..., 0].astype(np.float32)`. You can convert it to a disparity map by `disp = 1 / depth`. Note that it is unnecessary to normalize the disparity maps since we have pre-processed them to ensure that the signed defocus maps calculated by `K * (disp - disp_focus)` are in line with the experimental settings of the paper. A consolidated loading sketch follows below.
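Putting the steps above together, a loading routine might look like the following sketch; the file paths are placeholders rather than the actual dataset layout:

```python
import cv2
import numpy as np

IMAGE_PATH = 'path/to/image'  # placeholder
DEPTH_PATH = 'path/to/depth'  # placeholder

# All-in-focus image: drop any alpha channel and apply the 1/2.2 gamma.
image = cv2.imread(IMAGE_PATH, -1)[..., :3].astype(np.float32) ** (1 / 2.2)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # optional BGR -> RGB conversion

# Depth map: take a single channel and invert it to obtain disparity.
depth = cv2.imread(DEPTH_PATH, -1)[..., 0].astype(np.float32)
disp = 1 / depth
```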
If you find our work useful in your research, please cite our paper.

```bibtex
@inproceedings{Peng2022BokehMe,
  title     = {BokehMe: When Neural Rendering Meets Classical Rendering},
  author    = {Peng, Juewen and Cao, Zhiguo and Luo, Xianrui and Lu, Hao and Xian, Ke and Zhang, Jianming},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022}
}
```