
AI Image Signal Processing and Computational Photography - Bokeh Rendering, Reversed ISP Challenge, Model-Based Image Signal Processors via Learnable Dictionaries. Official repo for the NTIRE and AIM Challenges.
https://mv-lab.github.io/model-isp22/
aim computational-photography computer-vision cvpr2022 deblurring deep-learning denoising eccv2022 image-enhancement image-processing image-restoration inverse-problems isp low-level-vision mobile-ai ntire raw-image reversed-isp

AI Image Signal Processing and Computational Photography

Deep learning for low-level computer vision and imaging


Marcos V. Conde, Radu Timofte

Computer Vision Lab, CAIDAS, University of Würzburg


Topics: This repository contains material for RAW image processing, RAW image reconstruction and synthesis, learned Image Signal Processing (ISP), image enhancement and restoration (denoising, deblurring), multi-lens bokeh effect rendering, and much more! 📷


Official repository for the following works:

  1. Efficient Multi-Lens Bokeh Effect Rendering and Transformation, at CVPR NTIRE 2023.
  2. Perceptual Image Enhancement for Smartphone Real-Time Applications (LPIENet), at WACV 2023.
  3. Reversed Image Signal Processing and RAW Reconstruction: AIM 2022 Challenge Report, at the ECCV AIM 2022 workshop.
  4. Model-Based Image Signal Processors via Learnable Dictionaries, at AAAI 2022 (Oral).
  5. MAI 2022 Learned ISP Challenge: complete baseline solution.
  6. Citation and Acknowledgement | Contact for any inquiries.

News 🚀🚀


Efficient Multi-Lens Bokeh Effect Rendering and Transformation (CVPRW '23)

This work presents a state-of-the-art method for bokeh rendering and transformation, and serves as the baseline of the NTIRE 2023 Bokeh Challenge.

Read the full paper at: Efficient Multi-Lens Bokeh Effect Rendering and Transformation


Perceptual Image Enhancement for Smartphone Real-Time Applications (WACV '23)

This work was presented at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023.

Recent advances in camera designs and imaging pipelines allow us to capture high-quality images using smartphones. However, due to the small sensor size and lens limitations of smartphone cameras, we commonly find artifacts or degradation in the processed images, e.g., noise, diffraction artifacts, blur, and HDR overexposure. We propose LPIENet, a lightweight network for perceptual image enhancement, with a focus on deployment on smartphones.

The code is available at lpienet, including versions in PyTorch and TensorFlow. We also include the model conversion to TFLite, so you can generate the corresponding .tflite file and run the model using the AI Benchmark app on Android devices. In lpienet-tflite.ipynb you can find a complete tutorial on converting the model to TFLite.
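As a rough illustration of that conversion path, here is a minimal sketch of exporting an FP16 TFLite model with `tf.lite.TFLiteConverter`. The tiny Keras model below is a stand-in, not LPIENet itself; see the notebook for the actual procedure.

```python
import tensorflow as tf

# Stand-in model (illustrative only): any Keras model converts the same way.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(3, 3, padding="same"),
])

# Convert to TFLite with FP16 weights, suitable for GPU-delegate inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()

# Write the flatbuffer that the AI Benchmark app can load.
with open("model_fp16.tflite", "wb") as f:
    f.write(tflite_model)
```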

Contributions

lpienet



Model-Based Image Signal Processors via Learnable Dictionaries (AAAI '22 Oral)

This work was presented at the 36th AAAI Conference on Artificial Intelligence, Spotlight (15%)

Visit the project website, where you can find the poster, presentation, and more information.

A hybrid model-based and data-driven approach for modelling ISPs using learnable dictionaries. We explore RAW image reconstruction and improve downstream tasks like RAW image denoising via RAW data augmentation and synthesis.
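The dictionary idea can be caricatured in a few lines of NumPy: an ISP stage (here a color correction matrix) is expressed as a weighted combination of dictionary atoms with per-image coefficients. All names, sizes, and the random initialization below are illustrative, not the paper's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dictionary of K basis color-correction matrices (illustrative only).
K = 4
dictionary = rng.normal(size=(K, 3, 3))

# Per-image coefficients; here softmax-normalized random values stand in
# for coefficients that would be learned from data.
logits = rng.normal(size=K)
alphas = np.exp(logits) / np.exp(logits).sum()

# The modeled ISP stage is a convex combination of the dictionary atoms.
ccm = np.tensordot(alphas, dictionary, axes=1)   # shape (3, 3)

# Apply it per pixel to a small linear RGB image.
img = rng.uniform(size=(8, 8, 3))
out = img @ ccm.T                                # shape (8, 8, 3)
```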

mbdlisp

If you have implementation questions or you need qualitative samples for comparison, please contact me. You can download the figure/illustration of our method in mbispld.



AIM 2022 Reversed ISP Challenge

This work was presented at the European Conference on Computer Vision (ECCV) 2022, AIM workshop.

Track 1 - S7 | Track 2 - P20

aim-challenge-teaser

In this challenge, we look for solutions to recover RAW readings from the camera using only the corresponding RGB images processed by the in-camera ISP. Successful solutions should generate plausible RAW images, and by doing this, other downstream tasks like Denoising, Super-resolution or Colour Constancy can benefit from such synthetic data generation. Click here to read more information about the challenge.
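To give a feel for what "reversing the ISP" involves, here is a deliberately naive NumPy sketch of two steps a solution has to account for: undoing the sRGB gamma curve and re-mosaicing to an RGGB Bayer pattern. A real in-camera ISP also applies white balance, color correction, tone mapping, and more, so this is far from a competitive solution.

```python
import numpy as np

def srgb_to_linear(x):
    # Invert the standard sRGB gamma curve (x in [0, 1]).
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

def mosaic_rggb(rgb):
    # Sample an RGGB Bayer mosaic from a linear RGB image (H and W even).
    h, w, _ = rgb.shape
    raw = np.zeros((h, w))
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B
    return raw

rgb = np.random.default_rng(0).uniform(size=(4, 4, 3))
raw = mosaic_rggb(srgb_to_linear(rgb))
```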

Starter guide and code 🔥


MAI 2022 Learned ISP Challenge

At mai22-learnedisp you can find an end-to-end baseline: data loading, training a top solution, and model conversion to TFLite. The model achieved 23.46 dB PSNR after training for a few hours. Here you can see a sample RAW input and the resulting RGB.

We test the model on AI Benchmark. The average latency is 60 ms for an input RAW image of shape 544×960×4, generating an RGB output of shape 1088×1920×3, on a mid-level smartphone (45.4 AI-score) using the GPU delegate and FP16.
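Those shapes follow the usual packed-Bayer convention: the 4-channel RAW input is at half resolution, and the full-resolution RGB is recovered with a depth-to-space (pixel shuffle) step on the network output. A NumPy sketch of the shape bookkeeping, illustrative rather than the baseline's actual code:

```python
import numpy as np

def depth_to_space(x, r=2):
    # Rearrange (H, W, C*r*r) into (H*r, W*r, C); NumPy equivalent of
    # tf.nn.depth_to_space / PyTorch's pixel_shuffle.
    h, w, c = x.shape
    assert c % (r * r) == 0
    c_out = c // (r * r)
    x = x.reshape(h, w, r, r, c_out)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(h * r, w * r, c_out)

# A 12-channel network output at RAW resolution maps to full-res RGB:
# 544x960x12 -> 1088x1920x3, matching the benchmark shapes above.
out = np.zeros((544, 960, 12), dtype=np.float32)
rgb = depth_to_space(out)
```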


Citation and Acknowledgement

@inproceedings{conde2022model,
  title={Model-Based Image Signal Processors via Learnable Dictionaries},
  author={Conde, Marcos V and McDonagh, Steven and Maggioni, Matteo and Leonardis, Ales and P{\'e}rez-Pellitero, Eduardo},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={36},
  number={1},
  pages={481--489},
  year={2022}
}

@inproceedings{conde2022aim,
  title={{R}eversed {I}mage {S}ignal {P}rocessing and {RAW} {R}econstruction. {AIM} 2022 {C}hallenge {R}eport},
  author={Conde, Marcos V and Timofte, Radu and others},
  booktitle={Proceedings of the European Conference on Computer Vision Workshops (ECCVW)},
  year={2022}
}

Contact

Marcos Conde (marcos.conde@uni-wuerzburg.de) is the contact person and a co-organizer of the NTIRE and AIM challenges.