sunreef / BlindSR

Independent implementation of the algorithm from the paper "Blind Image Super-Resolution with Spatially Variant Degradations"
GNU General Public License v3.0

Confusion about image downsampling operations #5

Open yuanjunchai opened 4 years ago

yuanjunchai commented 4 years ago

Hi! I am very curious about BlindSR and have read your code. What confuses me is the downsampling operation in 'src/degradation.py': it seems that you do not use bicubic downsampling. Could you tell me why, and what the benefit of your own downsampling method is? Thank you!

    def apply(self, img, scale=SCALE_FACTOR):
        weights = torch.zeros(3, 3, self.kernel_size, self.kernel_size)
        if img.is_cuda:
            weights = weights.cuda()
            self.cuda()

        self.build_kernel()

        # Place the kernel on the diagonal of the weight tensor so each
        # channel is convolved independently with the same degradation kernel.
        for c in range(3):
            weights[c, c, :, :] = self.kernel
        conv_img = conv2d(img[None], weights)

        scale_factor = int(scale)
        lr_img = conv_img[0, :, ::scale_factor, ::scale_factor]
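For readers without the repo checked out, the snippet above amounts to: blur each channel with the same kernel, then keep every `scale`-th pixel. A minimal NumPy re-sketch of that behavior (not the repo's code; the box kernel here is a hypothetical placeholder for the learned degradation kernel):

```python
import numpy as np

def degrade(img, kernel, scale=2):
    """Blur each channel with `kernel`, then keep every `scale`-th pixel.

    img    : (C, H, W) array
    kernel : (k, k) array, assumed normalized to sum to 1
    """
    k = kernel.shape[0]
    c, h, w = img.shape
    out_h, out_w = h - k + 1, w - k + 1
    conv = np.zeros((c, out_h, out_w))
    # "valid" correlation applied independently per channel, mirroring
    # the diagonal weight tensor in the torch snippet above
    for ch in range(c):
        for i in range(out_h):
            for j in range(out_w):
                conv[ch, i, j] = np.sum(img[ch, i:i + k, j:j + k] * kernel)
    # strided subsampling, equivalent to conv_img[0, :, ::scale, ::scale]
    return conv[:, ::scale, ::scale]

img = np.arange(3 * 8 * 8, dtype=float).reshape(3, 8, 8)
box = np.full((3, 3), 1.0 / 9.0)  # placeholder kernel, not the learned one
lr = degrade(img, box, scale=2)
print(lr.shape)  # (3, 3, 3)
```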
sunreef commented 4 years ago

The goal of our paper is to be able to adapt to more general degradations than the bicubic downsampling operator. To do this, we selected a more general class of kernels (anisotropic Gaussian) and we perform the degradation by convolving our high-res image with one of these kernels before downsampling (taking one pixel out of two for example).
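For reference, one common way to build such an anisotropic Gaussian kernel is to rotate a diagonal covariance matrix and evaluate the Gaussian density on a pixel grid. This is a sketch under that parametrization (the `sigma_x`/`sigma_y`/`theta` naming is an assumption, not the paper's exact code):

```python
import numpy as np

def anisotropic_gaussian_kernel(size, sigma_x, sigma_y, theta):
    """Anisotropic Gaussian kernel: covariance diag(sigma_x^2, sigma_y^2)
    rotated by angle `theta`, sampled on a size x size grid."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    cov = R @ np.diag([sigma_x ** 2, sigma_y ** 2]) @ R.T
    inv = np.linalg.inv(cov)
    half = (size - 1) / 2.0
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    coords = np.stack([xs, ys], axis=-1)            # (size, size, 2)
    # quadratic form x^T Sigma^{-1} x at every grid point
    q = np.einsum('...i,ij,...j->...', coords, inv, coords)
    kernel = np.exp(-0.5 * q)
    return kernel / kernel.sum()  # normalize so brightness is preserved

k = anisotropic_gaussian_kernel(15, sigma_x=2.5, sigma_y=0.8, theta=np.pi / 6)
```

Setting `sigma_x == sigma_y` recovers the usual isotropic Gaussian, and `theta` controls the orientation of the blur, which is what makes the kernel class strictly more general than a fixed bicubic filter.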

It is not necessary to use bicubic downsampling: that would introduce an additional convolution that modifies the type of degradation being modeled.