Closed · fedral closed this 3 years ago
Hi, the idea behind KernelGAN, "estimating the degradation kernel from LR images alone and then generating training pairs from the same distribution to improve model generalization," is brilliant. It is well suited to real scenarios where the training data is mostly unpaired.
I ran a simple simulation based on your code. Setup: 4 clean HR images from DIV2K are blurred and downsampled with the same rotated Gaussian kernel (the GT kernel), then your code is run to estimate the kernel from the generated LR images.
generated LR: ![image](https://user-images.githubusercontent.com/13436512/90845818-6d365480-e399-11ea-8f3e-c37ffbd2066e.png)
the estimated kernel: ![image](https://user-images.githubusercontent.com/13436512/90845834-76272600-e399-11ea-966f-2203b7dad713.png)
I found that the estimated kernels are not consistent with the simulated GT kernel; the estimates seem random. I hope you can offer some suggestions.
Also, does KernelGAN support 1x and 1.5x SR in theory? I'm a little confused.
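For anyone wanting to reproduce the setup above, here is a minimal sketch of the simulated degradation: blur an HR image with a rotated anisotropic Gaussian kernel and then subsample. The kernel size, sigmas, and rotation angle below are hypothetical placeholders, not the values actually used in this test.

```python
import numpy as np

def rotated_gaussian_kernel(size=13, sigma1=2.5, sigma2=1.0, theta=np.pi / 6):
    """Anisotropic Gaussian kernel rotated by theta (placeholder parameters)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    # Rotate coordinates into the kernel's principal axes
    xr = np.cos(theta) * xx + np.sin(theta) * yy
    yr = -np.sin(theta) * xx + np.cos(theta) * yy
    k = np.exp(-0.5 * ((xr / sigma1) ** 2 + (yr / sigma2) ** 2))
    return k / k.sum()  # normalize so the kernel sums to 1

def degrade(hr, kernel, scale=2):
    """Blur `hr` with `kernel`, then subsample by `scale`.

    The loop computes cross-correlation, which equals convolution here
    because a rotated Gaussian is centrosymmetric.
    """
    pad = kernel.shape[0] // 2
    padded = np.pad(hr, pad, mode="reflect")
    out = np.zeros(hr.shape, dtype=np.float64)
    ks = kernel.shape[0]
    for i in range(ks):
        for j in range(ks):
            out += kernel[i, j] * padded[i:i + hr.shape[0], j:j + hr.shape[1]]
    # Subsample: keep every `scale`-th pixel in each dimension
    return out[::scale, ::scale]
```

Applying `degrade` with the same kernel to each of the 4 DIV2K HR images would give LR inputs whose GT kernel is known exactly, which is what makes the comparison against KernelGAN's estimate possible.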
I ran into the same problem, and I am confused by the code as well.
The training details are not released in the paper, so I used the default parameter settings. But from my experience training GANs (PatchGAN/ESRGAN), it is hard to say that 3000 iterations are enough for an image-specific KernelGAN to converge.
Thanks for the compliments.
(a) In Figures 5 and 6 of your paper, the estimated kernels are perfectly aligned with the simulated GT Gaussian kernels.
To be frank, if random variance also exists in the experiments reported in the paper, then the reproducibility of those results with this official code needs careful re-examination.
(b) In ZSSR, the author argued that "using internal patches could provide stronger prediction power rather than using external image patches" [illustrated in Figure 3 of that paper, below].
But this argument does not rigorously hold up:
1) For real super-resolution cases, LR images are heavily corrupted: missing structure, distortion, nontrivial types of noise, color fading, and so on. Theoretically speaking, internal patches offer little help during SR completion in such cases.
2) If the training set is large enough, external patches could collect abundant "tiny handrails" of the same semantic information, right?
So the assumption ZSSR is based on might be useful only for slightly corrupted image super-resolution.
I am aware of the stability problems of KernelGAN; unfortunately, I was unable to solve them entirely. This is research work rather than a production-level product, so it aims to suggest a different approach to SR. Reproducibility of the average performance can be achieved "easily", whereas for a single image there is definitely variance.
Regarding (a): the examples are far from "perfectly aligned"; the KernelGAN estimation and the GT are definitely not identical! This continues what I said about the estimate being a combination of another kernel with the downscaling one.
Regarding (b): ZSSR is not my work, so I would rather not speak for it. Feel free to ask questions about it in its GitHub repo.
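To make "aligned" vs. "not identical" quantitative rather than visual, the estimated and GT kernels could be compared with a simple numeric metric. The sketch below uses cosine similarity of the zero-mean flattened kernels; this is a hypothetical stand-in metric (and it assumes both kernels share the same spatial size), not the paper's evaluation protocol.

```python
import numpy as np

def kernel_similarity(k_est, k_gt):
    """Cosine similarity between two flattened, zero-mean kernels.

    Returns a value in [-1, 1]; 1.0 means the kernels are identical up to
    an affine rescaling. Assumes both kernels have the same shape.
    """
    a = k_est.ravel() - k_est.mean()
    b = k_gt.ravel() - k_gt.mean()
    # Small epsilon guards against division by zero for flat kernels
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Reporting such a score over repeated KernelGAN runs on the same LR image would also make the per-image variance discussed above concrete.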
Thanks for your time! Appreciated!