Yinhuai Wang*, Jiwen Yu*, Jian Zhang
Peking University and PCL
*denotes equal contribution
This repository contains the code release for *Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model*. DDNM can solve various image restoration tasks without any optimization or training! Yes, in a zero-shot manner.
Supported Applications: super resolution, colorization, inpainting, denoising, deblurring, compressed sensing, and old photo restoration.
Clone this repository and install the dependencies:
```
git clone https://github.com/wyhuai/DDNM.git
pip install numpy torch blobfile tqdm pyYaml pillow    # e.g. torch 1.7.1+cu110
```
To restore human face images, download this model (from SDEdit) and put it into `DDNM/exp/logs/celeba/`:

https://drive.google.com/file/d/1wSoA5fm_d6JBZk4RZ1SzWLMgev4WqH21/view?usp=share_link
To restore general images, download this model (from guided-diffusion) and put it into `DDNM/exp/logs/imagenet/`:
```
wget https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_diffusion_uncond.pt
```
Run the command below to get 4x SR results immediately. The results should be in `DDNM/exp/image_samples/demo`:
```
python main.py --ni --simplified --config celeba_hq.yml --path_y celeba_hq --eta 0.85 --deg "sr_averagepooling" --deg_scale 4.0 --sigma_y 0 -i demo
```
The detailed sampling command is:
```
python main.py --ni --simplified --config {CONFIG}.yml --path_y {PATH_Y} --eta {ETA} --deg {DEGRADATION} --deg_scale {DEGRADATION_SCALE} --sigma_y {SIGMA_Y} -i {IMAGE_FOLDER}
```
with the following options:
- `--simplified` activates the simplified DDNM; omitting `--simplified` runs the SVD-based DDNM instead.
- `PATH_Y` is the folder name of the test dataset, in `DDNM/exp/datasets`.
- `ETA` is the DDIM hyperparameter (default: `0.85`).
- `DEGRADATION` is the task to solve; supported tasks include `cs_walshhadamard`, `cs_blockbased`, `inpainting`, `denoising`, `deblur_uni`, `deblur_gauss`, `deblur_aniso`, `sr_averagepooling`, `sr_bicubic`, `colorization`, `mask_color_sr`, and user-defined `diy`.
- `DEGRADATION_SCALE` is the scale of the degradation, e.g., `--deg sr_bicubic --deg_scale 4` leads to 4x SR.
- `SIGMA_Y` is the noise level of the observation $\mathbf{y}$.
- `CONFIG` is the name of the config file (see `configs/` for a list), including hyperparameters such as batch size and sampling steps.
- `IMAGE_FOLDER` is the folder name for the results.

For the config files, e.g., `celeba_hq.yml`, you may change the following properties:
```yaml
sampling:
    batch_size: 1
    time_travel:
        T_sampling: 100     # sampling steps
        travel_length: 1    # time-travel parameters l and s; see section 3.3 of the paper
        travel_repeat: 1    # time-travel parameter r; see section 3.3 of the paper
```
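These time-travel parameters trade compute for quality by repeatedly re-noising and re-denoising short spans of the trajectory. Below is a minimal sketch of how such a back-and-forth schedule can be built, in the spirit of the RePaint-style jump schedule these parameters suggest; the function name and loop are illustrative, so check the repository for the exact implementation.

```python
def time_travel_schedule(T_sampling=100, travel_length=1, travel_repeat=1):
    # Every `travel_length` steps, allow `travel_repeat - 1` extra round trips.
    jumps = {t: travel_repeat - 1
             for t in range(0, T_sampling - travel_length, travel_length)}
    t, ts = T_sampling, []
    while t >= 1:
        t -= 1
        ts.append(t)                        # one reverse-diffusion step
        if jumps.get(t, 0) > 0:
            jumps[t] -= 1
            for _ in range(travel_length):  # travel back in time...
                t += 1
                ts.append(t)                # ...and resample those steps
    return ts

print(len(time_travel_schedule()))          # 100: no extra cost when l = r = 1
```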
Dataset download link: [Google drive] [Baidu drive]
Download the CelebA testset and put it into `DDNM/exp/datasets/celeba/`.

Download the ImageNet testset, put it into `DDNM/exp/datasets/imagenet/`, and replace the file `DDNM/exp/imagenet_val_1k.txt`.
Run the following command. You may increase `batch_size` to accelerate evaluation.
```
sh evaluation.sh
```
You can try this Colab demo for High-Quality results. Note that the High-Quality results presented in the front figure are mostly generated by applying DDNM to the models in RePaint.
Run the following command:
```
python main.py --ni --simplified --config celeba_hq.yml --path_y solvay --eta 0.85 --deg "sr_averagepooling" --deg_scale 4.0 --sigma_y 0.1 -i demo
```
Run the following command:
```
python main.py --ni --simplified --config oldphoto.yml --path_y oldphoto --eta 0.85 --deg "mask_color_sr" --deg_scale 2.0 --sigma_y 0.02 -i demo
```
You may use DDNM to restore your own degraded images. DDNM gives you full flexibility to define the degradation operator and the noise level. Note that these definitions are critical for good results. You may reference the following guidance:
- If your degraded image contains masked regions, save the mask as `DDNM/exp/inp_masks/mask.png`, then run `DDNM/exp/inp_masks/get_mask.py` to generate `mask.npy`.
- If your degraded image is downsampled, set `--deg_scale` to the corresponding scale.
- If your degraded image contains noise, set `sigma_y` to a matching level to remove these artifacts.
- Search for `args.deg =='diy'` in `DDNM/guided_diffusion/diffusion.py` and change the definition of $\mathbf{A}$ correspondingly.
Then run:
```
python main.py --ni --simplified --config celeba_hq.yml --path_y {YOUR_OWN_PATH} --eta 0.85 --deg "diy" --deg_scale {YOUR_OWN_SCALE} --sigma_y {YOUR_OWN_LEVEL} -i diy
```
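As an illustration, here is a hypothetical `diy` definition (not taken from the repository): 2x downsampling along the width only, using average pooling as $\mathbf{A}$ and nearest-neighbor replication as its pseudo-inverse $\mathbf{A}^\dagger$, so that `A(Ap(y)) == y` holds as DDNM requires.

```python
import torch
import torch.nn.functional as F

# Hypothetical diy operators: 2x downsampling along the width only.
scale = 2
A = torch.nn.AdaptiveAvgPool2d((256, 256 // scale))                # 256x256 -> 256x128
Ap = lambda z: F.interpolate(z, size=(256, 256), mode='nearest')   # replicate columns

y = A(torch.rand(1, 3, 256, 256))
assert torch.allclose(A(Ap(y)), y, atol=1e-6)                      # pseudo-inverse check
```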
Above we show an example of using DDNM to super-resolve a 64x256 input image into a 256x1024 result. The theory details can be found in this paper, section 3.3.
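As a rough illustration of the idea (the helper below is a hypothetical stand-in, not the `hq_demo` code): the large image is restored tile by tile, and each new tile is constrained, inpainting-style, to agree with the already-restored overlap, which keeps the seams consistent.

```python
import torch

def mask_shift_restore(x, ddnm_restore, tile=256, overlap=128):
    # `ddnm_restore` stands in for one DDNM run on a 256x256 tile; its `known`
    # argument emulates the inpainting constraint that pins the overlap region
    # to already-restored content.
    n, c, h, w = x.shape                       # e.g. (1, 3, 256, 1024) after upsampling
    result = ddnm_restore(x[..., :tile])       # first tile: plain DDNM
    pos = tile - overlap
    while pos + tile <= w:
        window = x[..., pos:pos + tile]
        known = result[..., -overlap:]         # restored strip shared with this window
        out = ddnm_restore(window, known=known)
        result = torch.cat([result, out[..., overlap:]], dim=-1)
        pos += tile - overlap
    return result

# Toy usage with an identity "restorer", just to show the shapes involved:
x = torch.rand(1, 3, 256, 1024)
print(mask_shift_restore(x, lambda z, known=None: z).shape)  # torch.Size([1, 3, 256, 1024])
```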
We implement the Mask-Shift Restoration in the folder `hq_demo`, based on RePaint. You can try this Colab demo.
Or you can try this function on your own device; first download the pre-trained models:
```
wget https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_classifier.pt
wget https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_diffusion.pt
```
and put them into `hq_demo/data/pretrained`. Then run:
```
cd hq_demo
sh evaluation.sh
```
This script contains SR demos up to 2K resolution; some of them may take hours to finish. Using fewer sampling steps or smaller time-travel parameters in `hq_demo/confs/inet256.yml` can speed things up, but may compromise the generative quality.
It is very easy to implement a basic DDNM on your own diffusion model! You may reference the following:
1. Choose the task and define the corresponding degradation operator $\mathbf{A}$ and its pseudo-inverse $\mathbf{A}^\dagger$, e.g., `IR_mode="super resolution"`:
```python
import torch

# `mask` and `scale` are task-specific inputs you provide.
def color2gray(x):
    coef = 1/3
    x = x[:,0,:,:]*coef + x[:,1,:,:]*coef + x[:,2,:,:]*coef
    return x.unsqueeze(1).repeat(1,3,1,1)

def gray2color(x):
    x = x[:,0,:,:]
    coef = 1/3
    base = coef**2 + coef**2 + coef**2
    return torch.stack((x*coef/base, x*coef/base, x*coef/base), 1)

def PatchUpsample(x, scale):
    n, c, h, w = x.shape
    x = torch.zeros(n,c,h,scale,w,scale) + x.view(n,c,h,1,w,1)
    return x.view(n,c,scale*h,scale*w)

if IR_mode=="colorization":
    A = color2gray
    Ap = gray2color
elif IR_mode=="inpainting":
    A = lambda z: z*mask
    Ap = A
elif IR_mode=="super resolution":
    A = torch.nn.AdaptiveAvgPool2d((256//scale,256//scale))
    Ap = lambda z: PatchUpsample(z, scale)
elif IR_mode=="old photo restoration":
    # Old photo restoration composes masking, graying, and downsampling.
    A1 = lambda z: z*mask
    A1p = A1
    A2 = color2gray
    A2p = gray2color
    A3 = torch.nn.AdaptiveAvgPool2d((256//scale,256//scale))
    A3p = lambda z: PatchUpsample(z, scale)
    A = lambda z: A3(A2(A1(z)))
    Ap = lambda z: A1p(A2p(A3p(z)))
```
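Continuing from the block above, a quick sanity check one might add (not part of the repository): DDNM's consistency step assumes $\mathbf{A}\mathbf{A}^\dagger\mathbf{y} = \mathbf{y}$, which the operators above satisfy, e.g., for super resolution:

```python
# Pseudo-inverse check for IR_mode == "super resolution":
x = torch.rand(1, 3, 256, 256)
y = A(x)                                   # (256//scale) x (256//scale) observation
assert torch.allclose(A(Ap(y)), y, atol=1e-5)
```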
2. Find the variable $\mathbf{x}\_{0|t}$ in the target codes, and use the result of the following function to modify the sampling of $\mathbf{x}\_{t-1}$. You may need to provide the input degraded image $\mathbf{y}$ and the corresponding noise level $\sigma_\mathbf{y}$.
```python
# Core implementation of DDNM+, simplified denoising solution (Section 3.3).
# For more accurate denoising, please refer to the paper (Appendix I) and the
# source code. A and Ap are the operators defined above.
def ddnm_plus_core(x0t, y, sigma_t, a_t, sigma_y=0):
    # Eq. 19: decide how much to trust the noisy observation y.
    if sigma_t >= a_t*sigma_y:
        lambda_t = 1
        gamma_t = sigma_t**2 - (a_t*lambda_t*sigma_y)**2
    else:
        lambda_t = sigma_t/(a_t*sigma_y)
        gamma_t = 0
    # Eq. 17: replace the range-space content of x0t with that of y.
    x0t = x0t + lambda_t*Ap(y - A(x0t))
    return x0t, gamma_t
```
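To make the two regimes of Eq. 19 concrete, here is a small illustration with made-up values and the identity operator (i.e., plain denoising); it reuses `ddnm_plus_core` from above.

```python
A = Ap = lambda z: z                       # identity operators, for illustration only
x0t = torch.zeros(1, 3, 8, 8)
y = torch.ones_like(x0t)

out, gamma_t = ddnm_plus_core(x0t, y, sigma_t=0.5, a_t=0.9, sigma_y=0.1)
# sigma_t >= a_t*sigma_y: y is fully trusted (lambda_t = 1, so out == y) and the
# surplus variance gamma_t = 0.5**2 - 0.09**2 is left for the sampler to add back.

out, gamma_t = ddnm_plus_core(x0t, y, sigma_t=0.05, a_t=0.9, sigma_y=0.1)
# sigma_t < a_t*sigma_y: y is only partially applied (lambda_t = 5/9, gamma_t = 0).
```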
You may check `args.simplified` in `DDNM/guided_diffusion/diffusion.py` for related code.

If you find this repository useful for your research, please cite the following work.
```bibtex
@article{wang2022zero,
  title={Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model},
  author={Wang, Yinhuai and Yu, Jiwen and Zhang, Jian},
  journal={The Eleventh International Conference on Learning Representations},
  year={2023}
}
```
This implementation is based on / inspired by: