csdwren / SelfDeblur

Neural Blind Deconvolution Using Deep Priors (CVPR 2020)
336 stars 68 forks

How to get color results? #3

Open Xingyb14 opened 5 years ago

Xingyb14 commented 5 years ago

Hi~ Thanks for your work. I ran SelfDeblur on my own blurry images and got gray results. It seems that SelfDeblur is performed on the Y channel only. Could you provide the code for converting the results to RGB? Looking forward to it, thank you!

csdwren commented 5 years ago

Thanks. The deblurred Y channel is used to replace the Y component of the blurry image in YCbCr space, which is then converted back to RGB space. You can try it.
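For example, a minimal sketch of this Y-channel replacement (using OpenCV and placeholder file names, not the exact code from this repo) could look like:

import cv2

# Placeholder file names for illustration.
blurry_bgr = cv2.imread("blurry.png")                              # H x W x 3, uint8 (BGR)
deblurred_y = cv2.imread("deblurred_y.png", cv2.IMREAD_GRAYSCALE)  # H x W, uint8

# Convert the blurry image to YCrCb, swap in the deblurred luma, convert back.
ycrcb = cv2.cvtColor(blurry_bgr, cv2.COLOR_BGR2YCrCb)
ycrcb[..., 0] = deblurred_y                                        # channel 0 is Y
restored_bgr = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
cv2.imwrite("restored.png", restored_bgr)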

I will update it along with a new way to directly deblur RGB images.

JingyunLiang commented 4 years ago

I didn't find where you transform the image from RGB space to YCbCr space. In the loss calculation, out_y ([1,1,128,128]) and y ([1,3,128,128]) are passed directly into the MSE loss.

total_loss = mse(out_y, y) + tv_loss(out_x)

From the source code of mse_loss, out_y will be broadcast to [1,3,128,128].

expanded_input, expanded_target = torch.broadcast_tensors(input, target)
ret = torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))

Therefore, my question is: since the network is not trained with supervision from the Y component of the blurry image, how can we expect out_y to be the Y component of the deblurred image? @csdwren
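For reference, a tiny demo of that broadcasting behaviour (assuming random tensors with the shapes above; PyTorch prints a size-mismatch warning but still broadcasts, as in the source quoted above):

import torch
import torch.nn.functional as F

out_y = torch.rand(1, 1, 128, 128)   # single-channel network output
y = torch.rand(1, 3, 128, 128)       # 3-channel blurry image

# mse_loss broadcasts out_y over the channel dimension, so the same single
# channel is compared against each of R, G and B.
expanded = out_y.expand_as(y)
print(torch.allclose(F.mse_loss(out_y, y), F.mse_loss(expanded, y)))  # True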

Uhall commented 4 years ago

I have the same question.

csdwren commented 4 years ago

https://github.com/csdwren/SelfDeblur/blob/master/selfdeblur_ycbcr.py has been updated to handle color images. The code has also been improved for better robustness. Thanks.

TenMiss commented 2 years ago

In your paper, you used the non-blind deconvolution method [14], i.e. "D. Krishnan and R. Fergus. Fast image deconvolution using hyper-Laplacian priors. In NIPS, 2009." The implementation of this method that I found on the web takes a grayscale image as input and produces a grayscale image as output, but the final deblurring result I want is a restored color image. This problem has been bothering me for a long time. I am wondering if you could kindly send me the source code and the necessary information for this part. I promise it will be used only for research purposes. Thank you very much for your kind consideration, and I look forward to your early reply.

TenMiss commented 2 years ago

Thank you for your work. I mainly want to know how to turn the deconvolution result obtained using "D. Krishnan and R. Fergus. Fast image deconvolution using hyper-Laplacian priors. In NIPS, 2009." into a clear color image. Thank you very much!

csdwren commented 2 years ago

You can refer to the code 'selfdeblur_ycbcr.py'. An RGB image is converted to YCbCr, the deconvolution processes only the Y channel, and the result is then converted back from YCbCr to RGB.
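As a rough sketch of applying a grayscale-only non-blind deconvolution to the Y channel (using skimage's richardson_lucy as a stand-in for the Krishnan-Fergus solver, with placeholder file names):

import cv2
import numpy as np
from skimage.restoration import richardson_lucy  # stand-in for the Krishnan-Fergus solver

# Placeholder inputs: blurry RGB image and the blur kernel estimated by SelfDeblur.
blurry_bgr = cv2.imread("blurry.png")
kernel = np.load("kernel.npy")                   # 2-D PSF, normalised to sum to 1

# Work in YCrCb: deconvolve only the luma, keep chroma from the blurry image.
ycrcb = cv2.cvtColor(blurry_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float64) / 255.0
y_restored = richardson_lucy(ycrcb[..., 0], kernel)
ycrcb[..., 0] = np.clip(y_restored, 0.0, 1.0)
restored_bgr = cv2.cvtColor((ycrcb * 255).astype(np.uint8), cv2.COLOR_YCrCb2BGR)
cv2.imwrite("restored.png", restored_bgr)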

TenMiss commented 2 years ago

Can the results obtained by this method be directly described as those obtained by using reference [14]? Is this equivalent?
