sefibk / KernelGAN


Is the estimated kernel somehow used for further training in ZSSR? #11

Closed kbhardwa closed 4 years ago

kbhardwa commented 4 years ago

I am new to this area. Very interesting work!

This is more of a clarification on the approach than an issue. I understand that once the kernel is estimated using KernelGAN, it is supplied to existing SR techniques like ZSSR to get the final high-res output. However, ZSSR seems to have its own training functions (after a quick look at the code). Can you please explain how this combination between kernel estimation and existing SR methods (that can take kernels as input) works? Example: I initially thought that since the kernel is already estimated, it could somehow be used directly to upscale the image... but ZSSR (from the original paper) seems to have 8 convolution layers with 64 channels each. Then, what role does the input kernel play in the training of ZSSR (how does it help ZSSR)?

Thanks a lot.

sefibk commented 4 years ago

Of course we provide the kernel to the noted SR methods - that is the strength of our method. In the paper you can see ZSSR's performance without our kernel in Table 1, line 2 vs. line 11. Each SR method incorporates the kernel differently - you should read their papers to understand how. In short, ZSSR downscales the LR input image with the provided kernel and trains its 8-layer network to "undo" this downscaling: the network learns to upscale the downscaled image and recover the LR input image. After the network is trained, it is applied to the LR input image itself and upscales it to the SR version w.r.t. the kernel. Hope this was clear - feel free to ask if not.
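To make the idea above concrete, here is a minimal PyTorch sketch of the scheme described (not the actual ZSSR code): the LR image is blurred with the estimated kernel and subsampled to produce a "son" image, a small network is trained to recover the LR image from its son, and the trained network is then applied to the LR image itself. The `TinySR` network, the 5x5 box kernel, and all sizes are placeholder assumptions for brevity; ZSSR's real network has 8 conv layers with 64 channels and a different training regime.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def downscale_with_kernel(img, kernel, scale=2):
    """Blur `img` with the estimated `kernel`, then subsample by `scale`.

    img:    (1, C, H, W) LR input image
    kernel: (k, k) 2-D blur kernel (odd-sized), e.g. estimated by KernelGAN
    """
    c = img.shape[1]
    k = kernel.shape[-1]
    # One depthwise convolution per channel with the same kernel.
    weight = kernel.view(1, 1, k, k).expand(c, 1, k, k)
    blurred = F.conv2d(img, weight, padding=k // 2, groups=c)
    return blurred[..., ::scale, ::scale]

class TinySR(nn.Module):
    """Toy stand-in for ZSSR's 8-layer network (hypothetical, for brevity):
    bilinear upscaling followed by a small residual refinement."""
    def __init__(self, channels=3, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x):
        x = F.interpolate(x, scale_factor=self.scale, mode='bilinear',
                          align_corners=False)
        return x + self.body(x)

# Zero-shot training: the LR image itself is the ground truth for its
# kernel-downscaled "son".
lr = torch.rand(1, 3, 32, 32)             # the LR input image (random here)
kernel = torch.ones(5, 5) / 25.0          # placeholder for an estimated kernel
net = TinySR()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
son = downscale_with_kernel(lr, kernel, scale=2)   # (1, 3, 16, 16)
for _ in range(20):
    opt.zero_grad()
    loss = F.l1_loss(net(son), lr)        # learn to "undo" the downscaling
    loss.backward()
    opt.step()
sr = net(lr)                              # finally upscale the LR image itself
```

So the kernel never upscales anything directly; it defines the degradation that the training pairs are built from, which is why a wrong kernel hurts ZSSR so much.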

kbhardwa commented 4 years ago

I see. Yes, the connection between estimating the kernel and the use of existing SR methods is clear to me now. Thanks a lot; this is very cool!