Open-source image and video restoration toolbox for super-resolution, denoising, deblurring, etc. It currently includes EDSR, RCAN, SRResNet, SRGAN, ESRGAN, EDVR, BasicVSR, SwinIR, ECBSR, etc., and also supports StyleGAN2 and DFDNet.
I'm trying to reproduce the visualization presented in the paper 'Understanding Deformable Alignment in Video Super-Resolution'. According to the paper, I expected the offsets to be similar to optical flow. However, when I visualized the L1 offsets, their magnitudes were much smaller than the optical flow.
Here is what I did:
I used the pretrained EDVR model 'EDVR_L_x4_SR_Vimeo90K_official-162b54e4.pth' and visualized the L1 offsets right before the cascading part. The offset tensor has shape (1, 128, h, w); I reshaped it to (1, 2, 8=group_number, 8=kernel_h*kernel_w, h, w) and randomly picked one (2, h, w) slice to visualize.
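The reshape-and-pick step above could be sketched as follows. This is a minimal sketch: the function names are mine, and the (2, groups, points) channel layout is only an assumption that mirrors the reshape described above — DCN implementations differ in how they order the two coordinate planes, so verify against the actual code. Note also that for a 3×3 deformable kernel, kernel_h*kernel_w is 9 rather than 8, so it is worth double-checking which tensor is being hooked.

```python
import numpy as np

def split_dcn_offsets(offset, deform_groups=8, kernel_points=9):
    """Split a flat DCN offset map of shape (C, H, W) into per-group,
    per-kernel-point (2, H, W) fields, assuming the channel layout
    (2, deform_groups, kernel_points). Other implementations may
    interleave (dy, dx) per kernel point instead."""
    c, h, w = offset.shape
    assert c == 2 * deform_groups * kernel_points, "unexpected channel count"
    return offset.reshape(2, deform_groups, kernel_points, h, w)

def mean_offset_magnitude(field):
    """Mean L2 magnitude of one (2, H, W) offset field, comparable to an
    average optical-flow magnitude."""
    return float(np.sqrt((field ** 2).sum(axis=0)).mean())
```

With a hooked offset tensor `off` of shape (1, C, h, w), one would call `split_dcn_offsets(off[0])`, index one group and kernel point to get a (2, h, w) field, and compare `mean_offset_magnitude` of that field against the flow magnitude.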
The offsets should be re-grouped. You can contact Kelvin for the corresponding scripts.
There are several DCNs in EDVR. After the features have been warped several times by the previous DCNs, the offsets in the cascading part are expected to be small. You can train a new network with a single DCN and then visualize its offsets.
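To compare one such (2, h, w) offset field against optical flow visually, it can be rendered with the standard flow color convention (direction → hue, magnitude → saturation, zero motion → white). A minimal NumPy sketch, with the function name and normalization chosen by me:

```python
import numpy as np

def flow_to_rgb(field, max_mag=None):
    """Render a (2, H, W) offset field (row 0 = dy, row 1 = dx) as an RGB
    image: hue encodes direction, saturation encodes magnitude, and zero
    motion appears white."""
    dy, dx = field[0], field[1]
    mag = np.sqrt(dx ** 2 + dy ** 2)
    if max_mag is None:
        max_mag = max(float(mag.max()), 1e-8)
    hue = (np.arctan2(dy, dx) + np.pi) / (2 * np.pi)  # direction -> [0, 1]
    sat = np.clip(mag / max_mag, 0.0, 1.0)            # magnitude -> [0, 1]
    val = np.ones_like(hue)
    # Vectorized HSV -> RGB conversion.
    i = np.floor(hue * 6.0).astype(int) % 6
    f = hue * 6.0 - np.floor(hue * 6.0)
    p = val * (1.0 - sat)
    q = val * (1.0 - f * sat)
    t = val * (1.0 - (1.0 - f) * sat)
    sectors = [(val, t, p), (q, val, p), (p, val, t),
               (p, q, val), (t, p, val), (val, p, q)]
    rgb = np.zeros(hue.shape + (3,))
    for k, (r, g, b) in enumerate(sectors):
        m = i == k
        rgb[m] = np.stack([r[m], g[m], b[m]], axis=-1)
    return rgb  # (H, W, 3), floats in [0, 1]
```

Passing the same `max_mag` (e.g. the flow's maximum magnitude) to both the offset field and the optical flow keeps the two visualizations on the same color scale, which makes the smaller offset magnitudes directly visible.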
Hi, thanks for your great work!
From my observation, I didn't see a big difference between the L1 offsets and the cascading offsets mentioned in https://github.com/XPixelGroup/BasicSR/issues/404.
Could you help me with that? Thanks!