zhihongp opened this issue 3 years ago
Hi @zhihongp, please refer to this link for our supplemental material.
@LongguangWang Thanks, that's very helpful. Did you use DAN's pre-trained model for the comparison? Based on the code here (https://github.com/greatlog/DAN/blob/f040d10a54eba8dcf0b1c1ec72b6691c1717d52a/codes/config/IKC/utils/util.py#L479), DAN applies bicubic downsampling after the Gaussian blur (lines 524-535), just like your main model, rather than the direct Gaussian downsampling used in your supplemental test.
Hi @zhihongp. We have also noticed that the degradation model described in the DAN paper (s-fold downsampling in Eq. 1) differs from their code (bicubic downsampling). Therefore, we tested DAN on bicubic downsampling using their pre-trained model and included those results in the evaluation on s-fold downsampling, to stay consistent with their original paper.
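For readers following along, the two degradation models under discussion can be sketched roughly as follows. This is an illustrative NumPy-only sketch, not code from either repository; the kernel size, sigma, and scale factor are arbitrary picks, not the papers' settings, and the bicubic branch is only indicated in a comment since it needs an interpolation library.

```python
import numpy as np

def gaussian_kernel(size=13, sigma=2.6):
    """Isotropic Gaussian kernel parameterized by sigma (std. dev.)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def blur(img, kernel):
    """'Valid' 2-D convolution of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def sfold_downsample(img, s=4):
    """s-fold downsampling: keep every s-th pixel (as in DAN's Eq. 1)."""
    return img[::s, ::s]

img = np.random.rand(64, 64)
blurred = blur(img, gaussian_kernel())
lr_sfold = sfold_downsample(blurred, s=4)   # blur + s-fold (the paper)
# lr_bicubic = bicubic_resize(blurred, 1/4) # blur + bicubic (DAN's code);
# bicubic_resize is hypothetical here and would come from e.g. an image library.
print(lr_sfold.shape)
```

The difference is only in the final step: direct pixel decimation versus a bicubic resize of the same blurred image.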
@LongguangWang Good catch. I'm a bit confused by your answer, though. Do you mean that in Table III of your supplemental material, DAN's results were obtained on Gaussian blur + bicubic downsampling data, while the other methods were tested on s-fold downsampling data? Shouldn't the pre-trained DAN then be compared in the main paper's tests instead?
Another confusing point, about blind SR in general rather than your paper specifically: for Gaussian blur + bicubic downsampling models (IKC, DAN, and yours), the Gaussian kernel width is the sigma (standard deviation), but for s-fold models (KernelGAN-DIV2KRK, USRNet, FKP) the variance is used (as lambda in the code: https://github.com/JingyunLiang/FKP/blob/5ca3f7c39ba849408993cec64341e2639063984f/data/prepare_dataset.py#L50).
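To make the sigma-vs-variance point concrete: for an isotropic Gaussian the two parameterizations produce identical kernels when sigma = sqrt(lambda). A minimal sketch (the function names are mine, not from either codebase, and the kernel size and lambda value are arbitrary):

```python
import numpy as np

def kernel_from_sigma(sigma, size=21):
    """Isotropic Gaussian kernel built from the standard deviation sigma."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def kernel_from_variance(lam, size=21):
    """Same kernel built from the variance (the 'lambda' in the s-fold codebases)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * lam))
    return k / k.sum()

lam = 6.76                 # a variance ("lambda") value
sigma = np.sqrt(lam)       # the equivalent standard deviation, 2.6
k1 = kernel_from_sigma(sigma)
k2 = kernel_from_variance(lam)
print(np.allclose(k1, k2))  # True
```

So when comparing kernel-width ranges across the two families of methods, the lambda values need to be square-rooted first (or the sigmas squared) to be on the same scale.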
@zhihongp Sorry for the late response.
As mentioned in your paper, you "re-trained our DASR using their degradation model and provide the results in the supplemental material" for the comparisons with DAN/USRNet. Could you share this supplemental material? Thanks!