
CVPR 2024: Residual Denoising Diffusion Models
https://arxiv.org/abs/2308.13712

Residual Denoising Diffusion Models

paper|arxiv|youtube|blog|Chinese paper (ao9l)|Chinese video|Chinese blog

This repository is the official implementation of Residual Denoising Diffusion Models.

RDDM

Requirements

To install requirements:

conda env create -f install.yaml
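
Then activate the environment before running any of the commands below (the environment name comes from the name: field inside install.yaml; rddm here is only a placeholder):

conda activate rddm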

Dataset

Raindrop (use test-a for testing)

GoPro

ISTD

SID-RGB: kexu or download

LOL

CelebA

Training

To train RDDM, run this command:

cd experiments/xxxx
python train.py

or

accelerate launch train.py
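
For multi-GPU runs, Accelerate is typically configured once and the same launch command reused; the flags below are generic Accelerate options, not settings prescribed by this repository:

accelerate config
accelerate launch --multi_gpu --num_processes 2 train.py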

Evaluation

To evaluate image generation, run:

cd eval/image_generation_eval/
python fid_and_inception_score.py path_of_gen_img
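
Here path_of_gen_img is the directory containing the generated images, for example (the path below is only illustrative):

python fid_and_inception_score.py ./results/celeba_gen/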

For image restoration, the MATLAB evaluation code is in ./eval.

Pre-trained Models

Two UNets (de-residual + denoising) for the partially path-independent generation process
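
For reference, below is a minimal, schematic sketch of one deterministic (σ_t = 0) reverse step that combines two separate networks, following the update I_{t-1} = I_t − (ᾱ_t − ᾱ_{t-1})·I_res − (β̄_t − β̄_{t-1})·ε from the paper. The network interfaces, schedules, and shapes are placeholders for illustration, not this repository's actual API.

```python
import torch

@torch.no_grad()
def rddm_reverse_step(img_t, t, t_prev, res_net, noise_net, alpha_bar, beta_bar):
    """One deterministic RDDM reverse step using two networks.

    res_net estimates the residual I_res, noise_net estimates the noise eps;
    alpha_bar / beta_bar are cumulative residual / noise schedules indexed by t.
    All interfaces here are placeholders for illustration.
    """
    residual = res_net(img_t, t)   # "de-residual" UNet
    noise = noise_net(img_t, t)    # "denoising" UNet
    return (img_t
            - (alpha_bar[t] - alpha_bar[t_prev]) * residual
            - (beta_bar[t] - beta_bar[t_prev]) * noise)

# Toy loop with dummy networks, just to show how the two models are combined:
T = 10
alpha_bar = torch.linspace(0.0, 1.0, T + 1)      # placeholder residual schedule
beta_bar = torch.linspace(0.0, 1.0, T + 1)       # placeholder noise schedule
dummy_net = lambda x, t: torch.zeros_like(x)     # stand-in for a trained UNet
x = torch.randn(1, 3, 64, 64)
for t in range(T, 0, -1):
    x = rddm_reverse_step(x, t, t - 1, dummy_net, dummy_net, alpha_bar, beta_bar)
```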

Results

See Table 3 in the main paper.

For image restoration:

Raindrop

GoPro

ISTD

LOL

SID-RGB

For image generation (on the CelebA dataset):

We can convert a pre-trained DDIM to RDDM by coefficient transformation (see 1_Image_Generation_convert_pretrained_DDIM_to_RDDM).
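
As a rough illustration of the idea: if, for generation, RDDM's forward process reduces to I_t = (1 − ᾱ_t)·I_0 + β̄_t·ε, then matching it term by term against DDIM's x_t = √(ᾱ'_t)·x_0 + √(1 − ᾱ'_t)·ε suggests ᾱ_t = 1 − √(ᾱ'_t) and β̄_t = √(1 − ᾱ'_t). The sketch below only applies this mapping to a DDIM schedule; treat the referenced experiment code as the authoritative conversion.

```python
import torch

def ddim_to_rddm_coeffs(ddim_alphas_cumprod: torch.Tensor):
    """Map a DDIM cumulative schedule to RDDM-style (alpha_bar, beta_bar).

    Assumes the term-by-term matching described above; illustrative only.
    """
    alpha_bar = 1.0 - torch.sqrt(ddim_alphas_cumprod)  # residual schedule
    beta_bar = torch.sqrt(1.0 - ddim_alphas_cumprod)   # noise schedule
    return alpha_bar, beta_bar

# Example with a simple linear-beta DDPM/DDIM schedule (placeholder values):
betas = torch.linspace(1e-4, 0.02, 1000)
ddim_alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
alpha_bar, beta_bar = ddim_to_rddm_coeffs(ddim_alphas_cumprod)
```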

Citation

If you find our work useful in your research, please consider citing:

@InProceedings{Liu_2024_CVPR,
    author    = {Liu, Jiawei and Wang, Qiang and Fan, Huijie and Wang, Yinong and Tang, Yandong and Qu, Liangqiong},
    title     = {Residual Denoising Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {2773-2783}
}

Contact

Please contact Liangqiong Qu (https://liangqiong.github.io/) or Jiawei Liu (liujiawei18@mails.ucas.ac.cn) if you have any questions.