greatlog / DAN

This is an official implementation of Unfolding the Alternating Optimization for Blind Super Resolution

About methods comparison #37

Closed YuqiangY closed 2 years ago

YuqiangY commented 2 years ago

Thanks for your nice work! I am a beginner in blind SR, and I do not understand why methods like EDSR are trained under the bicubic downsampling setting but tested under the multiple-degradation setting. Looking forward to your reply!
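
To make the distinction concrete, here is a minimal sketch (not the repo's code) contrasting the two settings: the bicubic-only pipeline simply downsamples the HR image, while the multiple-degradation pipeline first blurs it with a kernel before downsampling, so a model trained only on bicubic data sees a different degradation at test time. The kernel size (21) and sigma (2.6) below are illustrative assumptions, and the function names are hypothetical.

```python
# Sketch of the two degradation settings discussed above (illustrative only).
import numpy as np
from scipy.ndimage import convolve
from PIL import Image

def bicubic_down(hr: Image.Image, scale: int = 4) -> Image.Image:
    """Bicubic-only setting: LR = bicubic_downsample(HR)."""
    w, h = hr.size
    return hr.resize((w // scale, h // scale), Image.BICUBIC)

def gaussian_kernel(size: int = 21, sigma: float = 2.6) -> np.ndarray:
    """Isotropic Gaussian blur kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def multi_degradation_down(hr: Image.Image, scale: int = 4, sigma: float = 2.6) -> Image.Image:
    """Multiple-degradation setting: LR = downsample(HR * k), with a blur kernel k."""
    x = np.asarray(hr).astype(np.float32)
    k = gaussian_kernel(sigma=sigma)
    # Blur each channel with the kernel, then downsample the blurred image.
    blurred = np.stack(
        [convolve(x[..., c], k, mode="reflect") for c in range(x.shape[-1])], axis=-1
    )
    blurred_img = Image.fromarray(np.clip(blurred, 0, 255).astype(np.uint8))
    w, h = blurred_img.size
    return blurred_img.resize((w // scale, h // scale), Image.BICUBIC)
```

A network trained only on pairs produced by `bicubic_down` is evaluated on inputs produced by something like `multi_degradation_down`, which is why its reported numbers drop sharply in the multiple-degradation benchmarks.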

greatlog commented 2 years ago

We reuse the statistics reported by IKC and KernelGAN. In fact, for a fair comparison, all methods should be retrained under the same setting.

YuqiangY commented 2 years ago

> We reuse the statistics reported by IKC and KernelGAN. In fact, for a fair comparison, all methods should be retrained under the same setting.

Have you ever retrained those methods? Maybe end-to-end methods (such as EDSR) would perform better under the same setting? As far as I know, none of the existing blind SR methods are compared with them under the same setting.

greatlog commented 2 years ago

Some recent works have studied this problem. This paper demonstrates that a simple end-to-end network can also achieve competitive performance.