We propose the first photo-realistic dataset of synthetic adherent raindrops with pixel-level masks for training raindrop removal models.
Picture: Visual comparison of raindrop removal in real rainy scenes. Our method removes most of the raindrops despite their large variety in size and shape.
We use C++ to generate the raindrop dataset.
Picture: Samples of our synthetic raindrop images. Top: the ground-truth clear image from the Cityscapes dataset. Middle: the synthetic raindrop image produced by our refraction model. Bottom: the ground-truth binary mask of the raindrops.
Please follow these steps to generate the synthetic dataset.
Prepare the data. Download the Cityscapes dataset from its website; only the RGB images are needed.
Generate the images with raindrops.
cd data_generation/makeRain/
# Install the libraries in 3rdparty/
# Specify the path to the Cityscapes dataset at L19 of main.cpp
# Specify the save path of your dataset at L41 of main.cpp
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j8
# Then run the executable in build/
Generate the edge maps of the input images. Similar to step 2: cd into data_generation/rainEdge
and run the same commands.
Picture: Refraction model. The light ray colored in green does not go through any raindrops. The light ray colored in yellow goes through a raindrop and is refracted twice.
The training and test scripts can be found in removal/.
For instance, the training phase involves the following stages:
ardcnn
icnn
combine
combine_fine
The test phase is similar to the training phase.
If you find this repo useful in your work, please cite our paper:
@inproceedings{hao2019learning,
title={Learning from synthetic photorealistic raindrop for single image raindrop removal},
author={Hao, Zhixiang and You, Shaodi and Li, Yu and Li, Kunming and Lu, Feng},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops},
pages={0--0},
year={2019}
}