limuloo / PyDIff

[IJCAI 2023 ORAL] "Pyramid Diffusion Models For Low-light Image Enhancement" (Official Implementation)

Some questions about the restored results. #6

Open JianghaiSCU opened 1 year ago

JianghaiSCU commented 1 year ago

Hi, I'm running into some confusion while reproducing the results on the LOL dataset. When I set "gt_root" and "input_root" in "infer.yaml" to the same path, the illumination of the output is not enhanced, whereas when "gt_root" points to the reference (normal-light) images, the results look normal. Is a reference image required at inference time to obtain a restored normal-light result? In real scenes we only have low-light images, so how can the model be tested on an unpaired dataset?

limuloo commented 1 year ago


Your problem arises mainly for the following reasons:

(1) Low-light enhancement is an ill-posed problem: for a single low-light image there are multiple reasonable outputs at different brightness levels. For a fairer comparison, many SOTA methods on the current LOL benchmark (such as the second-ranked LLFlow) adjust the brightness of the network output to match the GT, so that only the restoration of texture details is being compared.

(2) In the code, the network output is rescaled to the brightness of the GT image based on the mean value. This does not affect the generated texture details; it is simply the easiest way to adjust overall illumination, and in practical applications it can also be tuned to user preference. In your case, since the low-light image is set as the GT, the network output is rescaled to the brightness of the low-light input, which produces exactly the behavior you observed. For details, please refer to the code: https://github.com/limuloo/PyDIff/blob/4144b89e6ba5557b782a376839fc2874738da8fb/PyDiff/pydiff/models/pydiff_model.py#L216
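The mean-value brightness alignment described above can be sketched as follows. This is a minimal illustration of the idea, not the repo's actual implementation; the function name and signature are hypothetical:

```python
import numpy as np

def mean_brightness_align(output, reference):
    """Rescale `output` so its mean brightness matches `reference`.

    Hypothetical sketch of the mean-based illumination alignment the
    repo toggles via 'use_kind_align'. Texture is unchanged: every
    pixel is multiplied by the same global scale factor.
    """
    out = output.astype(np.float64)
    ref = reference.astype(np.float64)
    # Global scale so that mean(out * scale) == mean(ref).
    scale = ref.mean() / max(out.mean(), 1e-8)
    return np.clip(out * scale, 0.0, 255.0).astype(output.dtype)
```

Note that if `reference` is the low-light input itself (i.e. `gt_root == input_root` with alignment enabled), the scale factor is 1 and the output stays as dark as the input, which is the behavior reported above.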

Resolution: when testing on your own dataset, set 'use_kind_align' in 'infer.yaml' to false. The same applies to training.

JianghaiSCU commented 1 year ago

Thanks for your quick reply.

JianghaiSCU commented 1 year ago

I have another question. Based on your reply, when testing on unpaired data, can we simply use normal-light images captured in any scene as the "GT"?

limuloo commented 1 year ago

That is also a workable solution, but you don't actually need a normal-light image as a reference. You can run inference in two steps: (1) set 'use_kind_align' in 'infer.yaml' to false; (2) set "gt_root" to the same value as "input_root".
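The two steps above correspond to a config like the following. Only the keys named in this thread ('use_kind_align', "gt_root", "input_root") come from the repo; the surrounding structure and paths are illustrative placeholders:

```yaml
# infer.yaml (relevant keys only; surrounding layout is illustrative)
use_kind_align: false          # step 1: disable GT-based brightness alignment
gt_root: /data/LOL/test/low    # step 2: point gt_root at the low-light inputs,
input_root: /data/LOL/test/low # i.e. the same path as input_root
```

With alignment disabled, the GT images are never used to rescale brightness, so unpaired low-light data can be enhanced without any reference.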

JianghaiSCU commented 1 year ago

Thanks. According to my reproduced results, setting "gt_root" to the same value as "input_root" produces results similar to the input, so I will try the first option.

lixiang4659 commented 8 months ago

Hello, have you found a solution to this problem? I tried setting 'use_kind_align' in 'infer.yaml' to false and setting "gt_root" to the same value as "input_root", but the results are not good.