The-Learning-And-Vision-Atelier-LAVA / DASR

[CVPR 2021] Unsupervised Degradation Representation Learning for Blind Super-Resolution

supplemental tests available? #6

Open zhihongp opened 3 years ago

zhihongp commented 3 years ago

As mentioned in your paper, you "re-trained our DASR using their degradation model and provide the results in the supplemental material" for comparisons with DAN/USRNet. Could you share this supplemental material? Thanks!

LongguangWang commented 3 years ago

Hi @zhihongp, please refer to this link for our supplemental material.

zhihongp commented 3 years ago

> Hi @zhihongp, please refer to this link for our supplemental material.

@LongguangWang Thanks, that's very helpful. Did you use DAN's pre-trained model for the comparison? Based on the code here (https://github.com/greatlog/DAN/blob/f040d10a54eba8dcf0b1c1ec72b6691c1717d52a/codes/config/IKC/utils/util.py#L479), DAN applies bicubic downsampling after the Gaussian blur (lines 524-535), just like your main model, rather than the direct Gaussian blur + s-fold downsampling used in your supplemental tests.
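
For concreteness, here is a minimal sketch of the two degradation pipelines being discussed (my own illustration, not code from either repo; the `isotropic_gaussian_kernel` helper and the OpenCV-based blur/resize are assumptions for illustration):

```python
import numpy as np
import cv2  # used here only for blurring and bicubic resizing


def isotropic_gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Isotropic Gaussian blur kernel of the given size and standard deviation."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()


def degrade_blur_bicubic(hr: np.ndarray, kernel: np.ndarray, scale: int) -> np.ndarray:
    """What the DAN/IKC code (and DASR's main model) does:
    Gaussian blur, then bicubic downsampling."""
    blurred = cv2.filter2D(hr, -1, kernel)
    h, w = blurred.shape[:2]
    return cv2.resize(blurred, (w // scale, h // scale),
                      interpolation=cv2.INTER_CUBIC)


def degrade_blur_sfold(hr: np.ndarray, kernel: np.ndarray, scale: int) -> np.ndarray:
    """What Eq. 1 of DAN (and the s-fold setting in the supplemental tests) describes:
    Gaussian blur, then direct s-fold downsampling (keep every s-th pixel)."""
    blurred = cv2.filter2D(hr, -1, kernel)
    return blurred[::scale, ::scale]
```

On the same HR input and kernel, these two pipelines produce different LR images, which is why it matters which one a pre-trained model was trained and evaluated on.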

LongguangWang commented 3 years ago

Hi @zhihongp. We have also noticed that the degradation model described in DAN's paper (s-fold downsampling in Eq. 1, i.e. y = (x ⊗ k)↓_s + n) differs from their code (bicubic downsampling). Therefore, we tested DAN on bicubic-downsampled data using their pre-trained model and included the results in the s-fold downsampling evaluation, to stay consistent with the original paper.

zhihongp commented 3 years ago

@LongguangWang Good catch on that. A bit confused by your answer though. Do you mean that in Table III of your supplemental material, DAN's results were obtained on Gaussian blur + bicubic downsampling data, while the others were tested on s-fold downsampling data? Shouldn't the pre-trained DAN then be compared in the main paper's tests instead?

Another confusing part, not about your paper but about blind SR in general: for Gaussian blur + bicubic downsampling models (IKC, DAN, and yours), the Gaussian kernel width is given as sigma (the standard deviation), but for s-fold models (KernelGAN-DIV2KRK, USRNet, FKP), the variance is used instead (as lambda in the code: https://github.com/JingyunLiang/FKP/blob/5ca3f7c39ba849408993cec64341e2639063984f/data/prepare_dataset.py#L50).
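
To illustrate the two conventions (a hypothetical snippet, not taken from any of these repos), the same kernel is described by sigma in one family and by lambda = sigma² in the other, so reported kernel widths are only comparable after converting:

```python
import numpy as np


def gaussian_kernel_from_sigma(size: int, sigma: float) -> np.ndarray:
    """IKC/DAN/DASR-style parameterization: kernel width as standard deviation."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()


def gaussian_kernel_from_lambda(size: int, lam: float) -> np.ndarray:
    """KernelGAN-DIV2KRK/USRNet/FKP-style parameterization: kernel width as
    variance lambda (a diagonal entry of the covariance), so sigma = sqrt(lambda)."""
    return gaussian_kernel_from_sigma(size, float(np.sqrt(lam)))


# The same kernel under both conventions: sigma = 1.5 corresponds to lambda = 2.25.
assert np.allclose(gaussian_kernel_from_sigma(21, 1.5),
                   gaussian_kernel_from_lambda(21, 2.25))
```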

LongguangWang commented 3 years ago

@zhihongp Sorry for the late response.

  1. Yes. To stay consistent with DAN's paper, we used DAN's code as a black box for evaluation and included the results in the s-fold downsampling comparison.
  2. Good catch on this difference. It seems the pioneering works used different definitions of blur kernels, and later works simply followed their code. A unified definition of blur kernels is needed for fair and easy comparison.