HeZongyao / LMF

[CVPR 2024 Highlight] LMF (Latent Modulated Function for Computational Optimal Continuous Image Representation)
Apache License 2.0
arbitrary-scale efficiency implicit-neural-representation super-resolution

Latent Modulated Function for Computational Optimal Continuous Image Representation


Zongyao He¹, Zhi Jin¹ (corresponding author) — ¹ Sun Yat-sen University
[![project page](https://img.shields.io/badge/Project-Page-green)](https://github.com/HeZongyao/LMF) [![arxiv paper](https://img.shields.io/badge/arXiv-Paper-red)](https://arxiv.org/abs/2404.16451) [![conference paper](https://img.shields.io/badge/Conference-Paper-blueviolet)](https://openaccess.thecvf.com/content/CVPR2024/html/He_Latent_Modulated_Function_for_Computational_Optimal_Continuous_Image_Representation_CVPR_2024_paper.html) [![license](https://img.shields.io/badge/License-Apache_2.0-blue)](https://opensource.org/licenses/Apache-2.0)

Introduction

This repository contains the official PyTorch implementation for the CVPR 2024 Highlight paper titled "Latent Modulated Function for Computational Optimal Continuous Image Representation" by Zongyao He and Zhi Jin.

Efficiency comparisons (320 × 180 input) for Arbitrary-Scale Super-Resolution

Framework of our LMF-based continuous image representation

Abstract

The recent work Local Implicit Image Function (LIIF) and subsequent Implicit Neural Representation (INR) based works have achieved remarkable success in Arbitrary-Scale Super-Resolution (ASSR) by using a Multi-Layer Perceptron (MLP) to decode Low-Resolution (LR) features. However, these continuous image representations typically perform decoding in High-Resolution (HR) High-Dimensional (HD) space, leading to a quadratic increase in computational cost with output resolution and seriously hindering the practical application of ASSR.

To tackle this problem, we propose a novel Latent Modulated Function (LMF), which decouples the HR-HD decoding process into shared latent decoding in LR-HD space and independent rendering in HR Low-Dimensional (LD) space, thereby realizing the first computationally optimal paradigm of continuous image representation. Specifically, LMF utilizes an HD MLP in latent space to generate latent modulations for each LR feature vector. These modulations enable a modulated LD MLP in render space to quickly adapt to any input feature vector and perform rendering at arbitrary resolution. Furthermore, we leverage the positive correlation between modulation intensity and input image complexity to design a Controllable Multi-Scale Rendering (CMSR) algorithm, which offers the flexibility to trade decoding efficiency against rendering precision.
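To make the decoupling concrete, here is a minimal NumPy sketch of the idea: the expensive high-dimensional MLP runs only once per LR feature vector to produce modulations, while a tiny modulated low-dimensional MLP runs per HR pixel. All dimensions, weights, and the single-layer MLPs are toy assumptions for illustration, not the paper's actual architecture or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only, not the paper's configuration)
H_lr, W_lr = 4, 4          # low-resolution feature grid
C_hd, C_ld = 64, 8         # high-dim latent width vs. low-dim render width
scale = 8                  # upsampling factor

feats = rng.standard_normal((H_lr * W_lr, C_hd))

# 1) Shared latent decoding in LR-HD space: one HD-MLP pass per LR feature
#    vector, producing modulations (here, a scale and shift per render unit).
W_latent = rng.standard_normal((C_hd, 2 * C_ld)) / np.sqrt(C_hd)
modulations = np.maximum(feats @ W_latent, 0)        # runs only 16 times

# 2) Independent rendering in HR-LD space: a tiny modulated MLP evaluated
#    once per HR pixel, using the modulation of its nearest LR feature.
W_render = rng.standard_normal((2, C_ld)) / np.sqrt(2)
W_out = rng.standard_normal((C_ld, 3)) / np.sqrt(C_ld)

H_hr, W_hr = H_lr * scale, W_lr * scale
ys, xs = np.meshgrid(np.arange(H_hr), np.arange(W_hr), indexing="ij")
coords = np.stack([ys / H_hr, xs / W_hr], axis=-1).reshape(-1, 2)

# gather each HR pixel's modulation from its nearest LR feature
idx = ((ys // scale) * W_lr + (xs // scale)).reshape(-1)
m = modulations[idx]
scale_m, shift_m = m[:, :C_ld], m[:, C_ld:]

hidden = np.maximum(scale_m * (coords @ W_render) + shift_m, 0)
rgb = hidden @ W_out                                  # one LD pass per HR pixel

# The HD MLP ran H_lr * W_lr = 16 times; only the cheap LD MLP ran for all
# 1024 HR pixels — the source of LMF's decoding speedup.
assert modulations.shape == (16, 2 * C_ld)
assert rgb.shape == (H_hr * W_hr, 3)
```

The point of the sketch is the asymmetry: decoding cost in HD space stays proportional to the LR grid size, while only the low-dimensional render MLP scales with output resolution.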

Extensive experiments demonstrate that converting existing INR-based ASSR methods to LMF can reduce the computational cost by up to 99.9%, accelerate inference by up to 57×, and save up to 76% of parameters, while maintaining competitive performance.

Requirements

Ensure your environment meets the following prerequisites:
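The original prerequisite list did not survive extraction here. As a rough sketch only, a LIIF-style PyTorch repository typically depends on packages like the following; every name and version below is an assumption, not this repository's actual requirements:

```text
# Hypothetical requirements sketch — check the repository's own
# requirements file for the authoritative list.
python >= 3.8
torch
torchvision
numpy
pyyaml
tqdm
imageio
```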

Quick Start

1. Download Pre-trained Models:

2. Download Benchmark Datasets:

Download the benchmark datasets from EDSR-PyTorch Repository and store them in the ./load folder. The benchmark datasets include:

3. Testing

Training & Testing

1. Download Training Dataset

2. Train Your Model

3. Evaluation

TODO

Acknowledgement

This work was supported by the Frontier Vision Lab, Sun Yat-sen University.

Special acknowledgment goes to the following projects: LIIF, LTE, CiaoSR, and DIIF.

Citation

If you find this work helpful, please consider citing:

@InProceedings{He_2024_CVPR,
    author={He, Zongyao and Jin, Zhi},
    title={Latent Modulated Function for Computational Optimal Continuous Image Representation},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month={June},
    year={2024},
    pages={26026-26035}
}

Feel free to reach out for any questions or issues related to the project. Thank you for your interest!