# 【ICCV'2023🔥】Implicit Neural Representation for Cooperative Low-light Image Enhancement [![Conference](http://img.shields.io/badge/ICCV-2023-FFD93D.svg)](https://iccv2023.thecvf.com/) [![Paper](http://img.shields.io/badge/Paper-Openaccess-FF6B6B.svg)](https://openaccess.thecvf.com/content/ICCV2023/html/Yang_Implicit_Neural_Representation_for_Cooperative_Low-light_Image_Enhancement_ICCV_2023_paper.html)

Welcome! This is the official implementation of our paper: Implicit Neural Representation for Cooperative Low-light Image Enhancement

Authors: Shuzhou Yang, Moxuan Ding, Yanmin Wu, Zihan Li, Jian Zhang*.

## 📣 News

## Overview


## Prerequisites

## 🔑 Setup

Install the required packages:

```shell
pip install -r requirements.txt
```
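If you prefer to keep the dependencies isolated, a minimal sketch using a virtual environment (the environment name `nerco-env` is our own choice, not part of the repo):

```shell
# Create and activate a fresh virtual environment.
# The name "nerco-env" is arbitrary; pick any name you like.
python3 -m venv nerco-env
source nerco-env/bin/activate

# Install the repo's pinned dependencies if the requirements file is present
if [ -f requirements.txt ]; then
    pip install -r requirements.txt
fi
```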

## 🧩 Download

You need to create a directory `./saves/[YOUR-MODEL]` (e.g., `./saves/LSRW`), then download the pre-trained models and put them into `./saves/[YOUR-MODEL]`. We release two versions of the pre-trained model, trained on the LSRW and LOL datasets respectively:
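The directory layout described above can be sketched as follows (the checkpoint filename in the comment is a placeholder, not the actual release name):

```shell
# Create one save directory per released model, named after its training dataset
mkdir -p ./saves/LSRW
mkdir -p ./saves/LOL

# Then move each downloaded checkpoint into the matching directory, e.g.:
# mv ~/Downloads/<checkpoint>.pth ./saves/LSRW/
```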

## 🚀 Quick Run

## 🤖 Training

## 📌 Citation

If you find this code useful for your research, please cite our paper with the following BibTeX entry.

```bibtex
@InProceedings{Yang_2023_ICCV,
    author    = {Yang, Shuzhou and Ding, Moxuan and Wu, Yanmin and Li, Zihan and Zhang, Jian},
    title     = {Implicit Neural Representation for Cooperative Low-light Image Enhancement},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {12918-12927}
}
```