axium / Blind-Image-Deconvolution-using-Deep-Generative-Priors

Implementation of the paper "Blind Image Deconvolution using Deep Generative Priors"
MIT License

Demo of deblurring an image #1

Closed jichen3000 closed 5 years ago

jichen3000 commented 5 years ago

Hi,

I am very interested in this project. Could you provide a script showing how to use it to deblur an image? The input would be a blurry image, and the output the deblurred image.

axium commented 5 years ago

If you are generating synthetic blurry images, then I suggest you use our implementation for blurry image generation in "Utils.py". If you have some blurry image of your own that you would like to try out, simply change each algorithm script to directly read the blurred image at the appropriate size (see paper) and rescale it to the [0,1] range. You will need to comment out the code that takes original images and blurs them to generate blurry images.
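A minimal sketch of that change, assuming a 64x64 GAN input size; load_blurry_image and the dummy array are hypothetical stand-ins for the repo's actual image-loading code (which uses imread):

import numpy as np

def load_blurry_image(pixels, image_res=64):
    # `pixels` stands in for imread(path); expected uint8 HxWxC.
    img = np.asarray(pixels, dtype=np.float64)
    # The image must already match the size the GAN expects (see paper);
    # resize/crop beforehand if it does not.
    assert img.shape[:2] == (image_res, image_res)
    return img / 255.0  # rescale to [0, 1]

# usage with a dummy 64x64 RGB image in place of a real file
dummy = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
y = load_blurry_image(dummy)
print(y.shape, y.min(), y.max())

The rescaled array can then replace the synthetically blurred Y_np in the algorithm scripts.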

The code is commented, so I do not think this should be an issue, but if you still want, I can send you a script.

I would also like to point out that you should not expect this to work on arbitrary images, only on the specific kind of images the GAN has been trained on. For example, CelebA images have been aligned and cropped. If unaligned images are given, you should expect a drastic performance drop.

jichen3000 commented 5 years ago

Hi Axium,

I just ran your algorithm on my own face image, which has been aligned, and the deblurred output I got is a face from CelebA.

I guess this is because, as you said, "you should not expect this to work for any arbitrary image, rather specific images for which GAN has been trained".

Is that right?

axium commented 5 years ago

If you are using Algorithm 1 then yes, due to the range issue, that would be the case. Algorithm 2 should definitely mitigate this issue. [Apologies for the late response.]
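A toy 1-D illustration of why Algorithm 2 mitigates the range issue. This is an assumption-laden sketch, not the paper's exact objective: g_z stands in for the best image in the generator's range, and Algorithm 2's free variable x2 is hand-picked rather than optimized. The point is that Algorithm 1 pins the estimate to G(z) exactly, while Algorithm 2 lets the estimate leave the generator's range at the cost of a penalty term:

import numpy as np

rng = np.random.default_rng(0)
blur = np.array([0.25, 0.5, 0.25])           # toy blur kernel
x_true = rng.random(16)                       # true sharp signal
y = np.convolve(x_true, blur, mode="same")    # observed blurry signal

# G(z*): best point in the generator's range, which is imperfect
# for an image the GAN was never trained on
g_z = x_true + 0.3 * rng.standard_normal(16)

def residual(x):
    # data-fit term ||y - blur(x)||^2
    return np.sum((y - np.convolve(x, blur, mode="same")) ** 2)

loss_alg1 = residual(g_z)                     # Algorithm 1: x forced to equal G(z)

lam = 0.1
x2 = 0.7 * x_true + 0.3 * g_z                 # stand-in for Algorithm 2's free x
loss_alg2 = residual(x2) + lam * np.sum((x2 - g_z) ** 2)

print("Alg 1 data fit:", loss_alg1)
print("Alg 2 data fit:", residual(x2))

Because x2 is allowed to move off the generator's range toward the true image, its data-fit term is smaller than Algorithm 1's, which is the mechanism behind the mitigation.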

jichen3000 commented 5 years ago

But I got bad results with Algorithm 2, using my own aligned pictures.

Below is my code:

import tensorflow as tf
import keras.backend as K
import numpy as np
from Utils import *
from generators.MotionBlurGenerator import *
from generators.CelebAGenerator import *
K.set_learning_phase(0)
from glob import glob
import os
import time

# paths
Range_Path      = './my_results/box_64_images/*.jpg'
SAVE_PATH       = './my_results/deblurring_alg2_images'
# check if save dir exists, if not create a new one
try:
    os.stat(SAVE_PATH)
except:
    os.mkdir(SAVE_PATH)

# loading celeba test images
Y_np = np.array([ imread(path) for path in glob(Range_Path)])/255
IMAGE_RES = Y_np.shape[1]
CHANNELS = Y_np.shape[-1]

# loading celeba generator
CelebAGen = CelebAGenerator()
CelebAGen.GenerateModel()
CelebAGen.LoadWeights()
CelebAGAN = CelebAGen.GetModels()
CelebAGAN.trainable = False
# celeba_latent_dim = 100
celeba_latent_dim = CelebAGen.latent_dim

# loading motion blur generator
BLURGen = MotionBlur()
BLURGen.GenerateModel()
BLURGen.LoadWeights()
## only using blur_decoder
blur_vae, blur_encoder, blur_decoder = BLURGen.GetModels()
blur_decoder.trainable = False
# blur_latent_dim = 50
blur_latent_dim = BLURGen.latent_dim

## load weights
# I save the trained weights from dublurring_celeba_algorithm_2.py
zi_hat = np.load("zi_hat.npy")
zk_hat = np.load("zk_hat.npy")
x_hat = np.load("x_hat.npy")

RANDOM_RESTARTS = 10
# result shape
BLUR_RES = 28
# extracting best images from random restarts with minimum residual error
X_Hat = []
XG_Hat   = []
W_Hat = []

start_time = time.time()
for i in range(len(Y_np)):
    zi_hat_i = zi_hat[i*RANDOM_RESTARTS:(i+1)*RANDOM_RESTARTS]
    zk_hat_i = zk_hat[i*RANDOM_RESTARTS:(i+1)*RANDOM_RESTARTS]
    x_hat_i    = x_hat[i*RANDOM_RESTARTS:(i+1)*RANDOM_RESTARTS]
    w_hat_i    = blur_decoder.predict(zk_hat_i)[:,:,:,0]
    x_hat_i      = np.clip(x_hat_i, 0, 1)
    loss_i       = [ComputeResidual(Y_np[i], x, w) for x,w in zip(x_hat_i,w_hat_i)]
    min_loss_loc = np.argmin(loss_i)

    zi_hat_recov = zi_hat_i[min_loss_loc].reshape([1,celeba_latent_dim])
    zk_hat_recov = zk_hat_i[min_loss_loc].reshape([1,blur_latent_dim])
    x_hat_recov  = x_hat_i[min_loss_loc] 
    w_hat = blur_decoder.predict(zk_hat_recov).reshape(BLUR_RES,BLUR_RES)
    xg_hat = CelebAGAN.predict(zi_hat_recov).reshape(IMAGE_RES,IMAGE_RES,CHANNELS)
    X_Hat.append(x_hat_recov); W_Hat.append(w_hat); XG_Hat.append(xg_hat)

pass_time = time.time() - start_time
print("pass_time:",pass_time)
X_Hat = np.array(X_Hat)
W_Hat = np.array(W_Hat)
XG_Hat = np.array(XG_Hat)
# X_Hat.shape: (2, 64, 64, 3)
# W_Hat.shape: (2, 28, 28)
# XG_Hat.shape: (2, 64, 64, 3)

# normalizing images
X_Hat = np.clip(X_Hat, 0,1)
XG_Hat = (XG_Hat + 1)/2

# save
for i in range(len(Y_np)):
    path = os.path.join(SAVE_PATH, str(i))
    x_hat_test = X_Hat[i] 
    w_hat_test = W_Hat[i] 
    x_hat_range = XG_Hat[i] 

    x_hat_test = np.clip(x_hat_test, 0,1)
    w_hat_test = np.clip(w_hat_test, 0,1)
    x_hat_range = np.clip(x_hat_range, 0,1)

    imsave(path+'_x_hat_from_test.png',  (x_hat_test*255).astype('uint8'))
    imsave(path+'_w_hat_from_test.png',  (w_hat_test/w_hat_test.max() * 255).astype('uint8'))
    imsave(path+'_x_hat_from_range.png', (x_hat_range*255).astype('uint8'))

Could you send me your script for deblurring an aligned face picture, so I can try it on my own pictures?

Thanks.

axium commented 5 years ago

Why are you doing np.load("zi_hat.npy")? These are not saved weights; they must be computed for each test image. That is the reason you may be getting CelebA-like images. Send me the image that you want to deblur; I will run it for you and send you the code.
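The per-image estimation described here can be sketched with a toy linear "generator". All names below are illustrative stand-ins, not the repo's API; the real script (deblurring_celeba_algorithm_2.py) runs this kind of optimization over the CelebA GAN and blur VAE latents for every test image, rather than reusing zi_hat/zk_hat saved from a different run:

import numpy as np

rng = np.random.default_rng(1)
n, d = 32, 4
W = rng.standard_normal((n, d))              # toy linear generator: G(z) = W @ z
blur = np.array([0.25, 0.5, 0.25])           # known toy blur kernel

z_true = rng.standard_normal(d)
y = np.convolve(W @ z_true, blur, mode="same")  # observed blurry "image"

def A(x):
    # forward (blur) operator
    return np.convolve(x, blur, mode="same")

def A_T(r):
    # its adjoint; with a symmetric kernel and "same" mode this
    # is again convolution with the same kernel
    return np.convolve(r, blur, mode="same")

# Fresh initialization for THIS image: z is estimated from scratch,
# never loaded from a previous run's .npy files.
z = np.zeros(d)
lr = 0.01
for _ in range(500):                          # gradient descent on ||y - A(G(z))||^2
    r = A(W @ z) - y
    z -= lr * (W.T @ A_T(r))

print("relative residual:", np.linalg.norm(A(W @ z) - y) / np.linalg.norm(y))

The saved .npy files from a previous run encode the latents of the *training-script* images, which is why loading them reproduces CelebA faces regardless of the new input.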