UWGAN takes a color image and its depth map as input and synthesizes realistic underwater images based on an underwater optical imaging model, learning the model parameters through generative adversarial training. You can find more details in the paper.
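The synthesis step above follows the common simplified underwater image-formation model, in which scene radiance is attenuated with distance and mixed with background (veiling) light. The sketch below is an illustrative NumPy implementation of that model; the attenuation coefficients `eta` and background light `B` are placeholder values chosen for demonstration, whereas in UWGAN these parameters are learned adversarially.

```python
import numpy as np

def synthesize_underwater(rgb, depth, eta=(0.40, 0.12, 0.08), B=(0.05, 0.35, 0.30)):
    """Apply a simplified underwater image-formation model.

    rgb:   HxWx3 float array in [0, 1] (the in-air image J)
    depth: HxW float array, scene distance in metres
    eta:   per-channel attenuation coefficients (R, G, B) -- illustrative values
    B:     per-channel background (veiling) light -- illustrative values
    """
    # Per-channel transmission map: light decays exponentially with distance,
    # and red attenuates fastest, which produces the blue-green underwater cast.
    t = np.exp(-np.asarray(eta) * depth[..., None])
    # Attenuated direct signal plus back-scattered background light.
    return rgb * t + np.asarray(B) * (1.0 - t)
```

At `depth == 0` the model returns the in-air image unchanged; as depth grows, every pixel converges to the background light `B`.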
Synthetic underwater-style images produced by UWGAN. (a) in-air sample images; (b)-(d) synthetic underwater-style sample images for different water types.
Proposed U-Net architecture for underwater image restoration and enhancement. The effects of different loss functions in the U-Net are compared, and the most suitable loss function for underwater image restoration is suggested based on this comparison; you can find more details about the loss functions in the paper.
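Restoration networks like this U-Net are commonly trained with a weighted combination of a pixel-wise loss and a structural-similarity term. The sketch below is a generic illustration of such a combined loss, not necessarily the exact combination adopted in the paper: the weight `alpha` is a hypothetical value, and `ssim_global` is a single-window simplification of the usual sliding-window SSIM.

```python
import numpy as np

def l1_loss(pred, target):
    # Mean absolute error: robust pixel-wise fidelity term.
    return np.mean(np.abs(pred - target))

def ssim_global(pred, target, c1=0.01**2, c2=0.03**2):
    # Global (single-window) SSIM for images scaled to [0, 1].
    mu_x, mu_y = pred.mean(), target.mean()
    var_x, var_y = pred.var(), target.var()
    cov = ((pred - mu_x) * (target - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))

def combined_loss(pred, target, alpha=0.5):
    # Weighted sum of L1 and SSIM dissimilarity; alpha is illustrative.
    return alpha * l1_loss(pred, target) + (1 - alpha) * (1 - ssim_global(pred, target))
```

The loss is zero when prediction and target are identical, and strictly positive otherwise, since both terms are non-negative.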
Download data:
In-air RGBD data: NYU Depth Dataset V1, NYU Depth Dataset V2
Underwater images: [Baidu Cloud Link] [Google Drive]
UIEB Dataset for verification: [github link]
The NYU datasets we used to train UWGAN: [Baidu Cloud Link] [Google Drive]
Fake water images generated by UWGAN: [Google Drive]
Pretrained model: [Google Drive]
Data directory structure in UWGAN
```
.
├── ...
├── data
│   ├── air_images
│   │   └── *.png
│   ├── air_depth
│   │   └── *.mat
│   └── water_images
│       └── *.jpg
└── ...
```
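A minimal sketch of gathering the training files from this layout; `collect_training_files` is a hypothetical helper written for illustration, not a function from the repository.

```python
from pathlib import Path

def collect_training_files(root="data"):
    """Collect training inputs following the UWGAN data layout:
    in-air PNG images, MAT depth maps, and real underwater JPG images."""
    root = Path(root)
    images = sorted((root / "air_images").glob("*.png"))
    depths = sorted((root / "air_depth").glob("*.mat"))
    waters = sorted((root / "water_images").glob("*.jpg"))
    return images, depths, waters
```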
Run `python uwgan_main.py` to train UWGAN; you can adjust learning parameters in `uwgan_main.py`.
Run `python train.py` to train the restoration network; you can adjust learning parameters and change loss functions in `train.py`.
Run `python test.py` after training has completed.
The effects of different loss functions in the restoration network (U-Net).
If you find this work useful for your research, please cite this article in your publications.
```
@misc{wang2019uwgan,
      title={UWGAN: Underwater GAN for Real-world Underwater Color Restoration and Dehazing},
      author={Nan Wang and Yabin Zhou and Fenglei Han and Haitao Zhu and Yaojing Zheng},
      year={2019},
      eprint={1912.10269},
      archivePrefix={arXiv},
      primaryClass={eess.IV}
}
```