
UDC-SIT: A Real-World Dataset for Under-Display Cameras

This repository contains the dataset and benchmark DNN models from the following paper. The datasheet for the dataset is available in this repository as a PDF file.

Kyusu Ahn, Byeonghyun Ko, HyunGyu Lee, Chanwoo Park, and Jaejin Lee. UDC-SIT: A Real-World Dataset for Under-Display Cameras. NeurIPS 2023: Proceedings of the 37th Conference on Neural Information Processing Systems, Article No. 2962, pp. 67721-67740, New Orleans, USA, December 2023.

[Paper]

What is UDC-SIT?

UDC-SIT provides well-aligned paired images captured with an Under-Display Camera (UDC) across various environments, including day/night, indoor/outdoor, and scenes with flare. Light sources (both natural sunlight and artificial light) and varying environmental conditions lead to different forms of degradation. We have incorporated annotations into our dataset to improve the performance of UDC image-restoration models.

Why make this?

Under-Display Cameras (UDC) suffer from image degradation, including low transmittance, blur, noise, and flare. Despite the significance of this problem, real-world datasets have been lacking in the UDC domain: only synthetic images are available, and they do not accurately represent real-world degradation. To the best of our knowledge, UDC-SIT is the first real-world UDC dataset to overcome the limitations of existing UDC datasets.

Data versions and structure

You can obtain the download link for our dataset on our research group's homepage.

How can I use this?

You can download the dataset from the link above. For training, validation, and inference, simply normalize the data in your PyTorch DataLoader, as is common practice in most image-restoration DNN models. We recommend training your model on the 4-channel .npy files rather than converting them to the 3-channel RGB domain. Note that our dataset is in Low Dynamic Range (LDR), so you do not need to apply Reinhard tone mapping. You can visualize the dataset by running ./dataset/visualize_sit.py for a visual inspection.

Annotation details

Who created this dataset?

The dataset was created by the authors of the paper together with members of the Thunder Research Group at Seoul National University, including Woojin Kim, Gyuseong Lee, Dongyoung Lee, Sangsoo Im, Gwangho Choi, Gyeongje Jo, Yeonkyoung So, Jiheon Seok, Jaehwan Lee, Donghun Choi, and Daeyoung Park, on behalf of their universities and research institutions.

Citation

If you find our repository useful for your research, please consider citing our paper:

   @inproceedings{ahn2024udc,
      author    = {Ahn, Kyusu and Ko, Byeonghyun and Lee, HyunGyu and Park, Chanwoo and Lee, Jaejin},
      title     = {UDC-SIT: A Real-World Dataset for Under-Display Cameras},
      booktitle = {Advances in Neural Information Processing Systems},
      volume    = {36},
      year      = {2023},
   }

Licenses

Copyright (c) 2023 Thunder Research Group

The UDC-SIT dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0). This means you may freely use, share, and modify this work, provided that you attribute the original authors, distribute any derivative works under the same license, and use the work exclusively for non-commercial purposes.

All software for the benchmark Deep Neural Network (DNN) models adheres to the licenses of the original authors. You can find the original source code and the respective licenses for ECFNet, UDC-UNet, DISCNet, Uformer, and SRGAN in the links below.