yinboc / liif

Learning Continuous Image Representation with Local Implicit Image Function, in CVPR 2021 (Oral)
https://yinboc.github.io/liif/
BSD 3-Clause "New" or "Revised" License

Something about Quick Start #13

Closed zzhwfy closed 3 years ago

zzhwfy commented 3 years ago

I used the same 32x32 LR image from your paper and then ran the quick start to get a 20x SR image, but it looks quite blurred TAT. Here is my result: input output


Running command: `python demo.py --input input.png --model rdn-liif.pth --resolution 640,640 --output output.png --gpu 2`

yinboc commented 3 years ago

That is interesting, could you describe how you obtained your image? It should work if you try this image: input

This image is generated by cropping and bicubic resizing from an image (0857.png) in the DIV2K validation set (i.e. unseen during training). A potential reason could be that your image is "out of training distribution" for some reason, so the performance is much worse. The "paper-version" LIIF is trained only with bicubic down-sampling; while it is expected to work for most natural images, it may be sensitive to artifacts or noise, especially in small images.
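For reference, an in-distribution LR input like the one described above can be reproduced with bicubic down-sampling. A minimal sketch using Pillow; the crop box is hypothetical and a synthetic gradient stands in for 0857.png, which is not bundled here:

```python
from PIL import Image

# Sketch of the degradation pipeline the paper-version model expects:
# crop a patch from an HR image, then resize it down with bicubic
# interpolation. A synthetic gradient stands in for 0857.png.
hr = Image.new("RGB", (640, 640))
hr.putdata([(x % 256, y % 256, (x + y) % 256)
            for y in range(640) for x in range(640)])

patch = hr.crop((0, 0, 640, 640))            # hypothetical crop box
lr = patch.resize((32, 32), Image.BICUBIC)   # bicubic down-sampling
lr.save("input.png")                         # feed this to demo.py
```

Resizing with `Image.NEAREST` instead of `Image.BICUBIC` is exactly the kind of degradation mismatch discussed below: the model only ever saw bicubic-downsampled inputs during training.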

Mofafa commented 3 years ago

I got similar results. @yinboc Could you please upload some example low res images? I am wondering if the MLP weights are really saved in the public weights.

yinboc commented 3 years ago

> I got similar results. @yinboc Could you please upload some example low res images? I am wondering if the MLP weights are really saved in the public weights.

The models should work for general natural images; some example inputs can be found on the project website. Did you "right click" > "save image as" > "input.png" for the image I posted above (NOT the one posted by zzhwfy)? It should work well for both edsr-baseline-liif.pth and rdn-liif.pth if you correctly followed the instructions. If not, could you provide your detailed environment and running steps?

zzhwfy commented 3 years ago

@yinboc Thanks a lot! I got the same result after using the LR image posted above. The LR image I used before was generated by nearest-neighbor resizing in Photoshop from the image mo-flower.png on your website, and I think this is the key point. Due to the different degradation process, my LR image lost more high-frequency information. Interestingly, I've compared the two LR images and found only slight pixel-level differences (PSNR: 36.1042; SSIM: 0.9928), but when we super-resolve them, especially at such a large scaling factor (20x), any small difference can lead to an enormous decline in visual quality. Besides this, I tested with nearest-neighbor resizing in Matlab; the generated LR image is closer (PSNR: 41.5147; SSIM: 0.9991), but the SR image is still quite different. The NTIRE21 SR challenge introduced LR-PSNR to evaluate SR models, and I think it does reflect something important. The sensitivity of CNNs makes it difficult for current SR models to handle real-scene SR.

Thank you for your outstanding work! XD

Results (same model):

output-originLR

output-PS

output-ML
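The pixel-level comparison reported above can be sketched as follows. This is a minimal PSNR implementation with NumPy, not the exact script used for the numbers quoted; the file names in the commented-out usage are illustrative:

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two images of equal shape."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Illustrative usage: compare the bicubic LR with the Photoshop
# nearest-neighbor LR (file names are hypothetical).
# from PIL import Image
# lr_bicubic = Image.open("input_bicubic.png")
# lr_nearest = Image.open("input_ps_nearest.png")
# print(psnr(lr_bicubic, lr_nearest))
```

Even a PSNR in the mid-30s between two LR inputs, as reported here, leaves enough degradation mismatch to visibly change a 20x SR output.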

yinboc commented 3 years ago

Happy to hear that you solved this issue. The down-sampled image can be different from a true natural image, since it may have artifacts or noise that make it out of distribution (and I believe there is some SR work focusing on this). Thanks for reporting this interesting observation!