JingyunLiang / MANet

Official PyTorch code for Mutual Affine Network for Spatially Variant Kernel Estimation in Blind Image Super-Resolution (MANet, ICCV2021)
https://arxiv.org/abs/2108.05302
Apache License 2.0

Question about the prepare_testset #11

Open mrgreen3325 opened 3 years ago

mrgreen3325 commented 3 years ago

Hi, thanks for your work. I found an issue when running prepare_testset.yml: it produces several outputs for the same input, each with a different sig1/sig2/theta setting. May I know which one I should use to train my SR model, or which gives the best quality? Thanks.

JingyunLiang commented 3 years ago

As indicated by the name, prepare_testset.yml is used for generating the testing set. For training, the HR-LR pairs are generated on-the-fly, which means only the HR path is required.
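
A minimal sketch of what on-the-fly pair generation looks like under the classical degradation model LR = (HR ⊛ k)↓s, using an isotropic Gaussian kernel for brevity; the function name and the direct-subsampling choice are illustrative assumptions, not MANet's actual data loader:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def random_pair(hr, scale=4, sig_min=0.7, sig_max=10.0, rng=None):
    """Draw a random Gaussian blur width, blur the HR image, and
    subsample: the classical model LR = (HR * k) downsampled by s.
    hr: float HxWxC array in [0, 1]."""
    rng = rng or np.random.default_rng()
    sig = rng.uniform(sig_min, sig_max)             # fresh kernel every call
    blurred = gaussian_filter(hr, sigma=(sig, sig, 0), mode='reflect')
    lr = blurred[::scale, ::scale]                  # direct downsampling
    return lr, sig

# Because pairs are sampled per call, only HR images need to be on disk.
```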

mrgreen3325 commented 3 years ago

> As indicated by the name, prepare_testset.yml is used for generating the testing set. For training, the HR-LR pairs are generated on-the-fly, which means only the HR path is required.

Thanks. Yes, I want to generate HR-LR pairs for my training program. May I know which setting of these HR-LR pairs I should use in my training?

JingyunLiang commented 3 years ago

We generate HR-LR pairs on-the-fly. The degradation parameters are set at https://github.com/JingyunLiang/MANet/blob/34f90ba8888f4a1dd2a1127b97c2ec3706f06598/codes/options/train/train_stage1.yml#L10
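
Concretely, `sig_min`/`sig_max` bound the sampled kernel widths, and `sig1`, `sig2`, `theta` describe an anisotropic Gaussian with covariance Σ = R·diag(σ₁², σ₂²)·Rᵀ, where R is the rotation by θ. A minimal NumPy sketch of such a kernel, as an illustration rather than the repo's exact implementation:

```python
import numpy as np

def anisotropic_gaussian_kernel(size=21, sig1=6.0, sig2=1.0, theta=0.0):
    """Build a normalized anisotropic Gaussian blur kernel with
    covariance R @ diag(sig1^2, sig2^2) @ R.T, theta in radians."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    sigma = R @ np.diag([sig1**2, sig2**2]) @ R.T
    inv_sigma = np.linalg.inv(sigma)

    # Grid of (x, y) offsets centered on the kernel midpoint.
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    coords = np.stack([xx, yy], axis=-1)            # (size, size, 2)

    # k(x) ∝ exp(-0.5 * x^T Σ^{-1} x), normalized to sum to 1.
    expo = np.einsum('...i,ij,...j->...', coords, inv_sigma, coords)
    kernel = np.exp(-0.5 * expo)
    return kernel / kernel.sum()
```

Setting `rate_iso` toggles between isotropic (σ₁ = σ₂) and anisotropic kernels, per the `# 1 for iso, 0 for aniso` comment in the options file.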

mrgreen3325 commented 3 years ago

> We generate HR-LR pairs on-the-fly. The degradation parameters are set at https://github.com/JingyunLiang/MANet/blob/34f90ba8888f4a1dd2a1127b97c2ec3706f06598/codes/options/train/train_stage1.yml#L10

Thanks for the reply. I followed the train_stage1.yml settings to configure prepare_testset.yml for 4x downscaling as follows:

```yaml
name: 001_MANet_prepare_dataset
suffix: ~
model: blind
distortion: sr
scale: ~
gpu_ids: [6]
kernel_size: 21
code_length: 15
sig_min: 0.7
sig_max: 10.0
sig: 1.6
sig1: 6
sig2: 1
theta: 0
rate_iso: 0 # 1 for iso, 0 for aniso
sv_mode: ~
test_noise: False
noise: 15

datasets:
  test1:
    name: Set5
    mode: GT
    dataroot_GT: ../datasets/toy_dataset/HR_si
    dataroot_LQ: ~

network_G:
  which_model_G: MANet_s1
  in_nc: 3
  out_nc: ~
  nf: ~
  nb: ~
  upscale: 0

path:
  strict_load: true
  pretrain_model_G: ../experiments/pretrained_models
```

However, prepare_testset.yml still produces many different versions of the downscaled LR images. Did I miss something?

JingyunLiang commented 3 years ago

As shown in the README, there are three settings:

1. For training, use train_stage*.yml. It generates HR-LR pairs on the fly.

2. For testing, use prepare_testset.yml. It generates and saves different versions of the LR images (one per degradation) for testing; this is why you see multiple outputs per input.

3. For testing, you can also use test_stage3.yml. It generates HR-LR testing pairs on the fly, but all of them follow the same degradation (unlike case 2).
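
In other words, case 2 sweeps a grid of degradations and writes one LR per grid point, while case 3 fixes a single degradation for the whole set. A toy illustration of the difference (the grid values here are assumptions, not the script's actual lists):

```python
import itertools

# Case 2 (prepare_testset.yml): one LR version per degradation in a grid.
grid = list(itertools.product([2.0, 4.0, 6.0],   # sig1 candidates
                              [1.0, 2.0],        # sig2 candidates
                              [0.0, 0.785]))     # theta candidates (radians)
print(f"{len(grid)} LR versions per HR image")   # 12 with this toy grid

# Case 3 (test_stage3.yml): a single fixed degradation for every image.
sig1, sig2, theta = 6.0, 1.0, 0.0
```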