JimmyChame / SADNet

PyTorch code for "Spatial-Adaptive Network for Single Image Denoising"

A kindly reminder #5

Closed yarqian closed 3 years ago

yarqian commented 4 years ago

Here is a kind reminder. It seems that the result on the SIDD validation set is wrong due to the image value range used when computing the PSNR. When I tested the model, the PSNR was 39.534 dB with the range [0, 1], but 39.278 dB with the range [0, 255], so there is a gap between the two results. As far as I know, the range the SIDD benchmark uses for computing PSNR is [0, 255]. Many thanks.
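
For context, a minimal sketch of the two conventions being compared here, assuming skimage's `peak_signal_noise_ratio`; the random image pair is a synthetic stand-in, not SIDD data. On identical uint8 inputs the two ranges give the same value, so any gap must come from the inputs themselves differing:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio

# Synthetic stand-in for a ground-truth / denoised pair (uint8, [0, 255]).
rng = np.random.default_rng(0)
gt = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
noise = rng.integers(-3, 4, size=gt.shape)
den = np.clip(gt.astype(np.int16) + noise, 0, 255).astype(np.uint8)

# Convention 1: floats in [0, 1].
psnr_01 = peak_signal_noise_ratio(gt.astype(np.float32) / 255,
                                  den.astype(np.float32) / 255,
                                  data_range=1)

# Convention 2: integers in [0, 255].
psnr_255 = peak_signal_noise_ratio(gt, den, data_range=255)

# PSNR = 10 * log10(data_range**2 / MSE); rescaling the images rescales
# the MSE by the same factor, so the two values agree.
print(psnr_01, psnr_255)
```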

JimmyChame commented 4 years ago

Hi, thank you for the reminder. Actually, you can save the denoised images, for example in PNG format. Then you will see that the range of the image doesn't matter when calculating the PSNR on the saved images. In our paper, the PSNR on SIDD is measured this way.
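
A minimal sketch of that save-then-measure workflow, assuming `cv2` for I/O (the file names follow the snippets later in this thread):

```python
import cv2
import numpy as np
from skimage.metrics import peak_signal_noise_ratio

def save_as_png(img_float01, path):
    """Quantize a float image in [0, 1] to uint8 and write it as lossless PNG."""
    img_u8 = np.clip(img_float01 * 255, 0, 255).round().astype(np.uint8)
    cv2.imwrite(path, img_u8)

# After reloading, both images are uint8 in [0, 255], so the PSNR is
# unambiguous no matter which range the network worked in.
clean = cv2.imread('gt_img.png')
denoised = cv2.imread('denoised_img.png')
psnr = peak_signal_noise_ratio(clean, denoised, data_range=255)
```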

yarqian commented 4 years ago

I have tested saving the images in PNG format, but I still get the same result; saving the images does not change it. If you upload the denoised images for the SIDD test set to the SIDD website, you get a result very similar to the SIDD validation result computed in the range [0, 255]. So in my opinion, the result on the SIDD dataset in the paper is wrong, and it should be computed in the range [0, 255]. It is just a personal suggestion. Many thanks.

JimmyChame commented 4 years ago

I am a little surprised. When I use this code:

```python
clean = imread('gt_img.png')
denoised = imread('denoised_img.png')
psnr = compare_psnr(clean, denoised)
```

and

```python
clean = imread('gt_img.png').astype(np.float32) / 255
denoised = imread('denoised_img.png').astype(np.float32) / 255
psnr = compare_psnr(clean, denoised)
```

I obtain the same PSNR values. When the denoised images are saved in PNG format, they have been quantized to uint8. Therefore, when an image is read back in, it defaults to the range 0 to 255.
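
One detail worth noting (an illustration, not from the thread): the uint8 quantization itself perturbs the PSNR, so a number computed against raw float predictions can differ from one computed against the saved PNGs. A minimal sketch with synthetic data:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio

rng = np.random.default_rng(0)
gt = rng.random((64, 64, 3)).astype(np.float32)  # "ground truth" in [0, 1]
pred = np.clip(gt + 0.01 * rng.standard_normal(gt.shape), 0, 1).astype(np.float32)

# PSNR against the raw float prediction.
psnr_float = peak_signal_noise_ratio(gt, pred, data_range=1)

# PSNR after quantizing both to uint8 (which is what saving a PNG does).
gt_u8 = (gt * 255).round().astype(np.uint8)
pred_u8 = (pred * 255).round().astype(np.uint8)
psnr_u8 = peak_signal_noise_ratio(gt_u8, pred_u8, data_range=255)

print(psnr_float, psnr_u8)  # the two values differ slightly
```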

yarqian commented 4 years ago

Many thanks for your suggestion. I got it. The result in the paper is 39.46 dB, but I still get a PSNR of 39.278 dB when saving the denoised images in both ranges [0, 1] and [0, 255], and also when computing the PSNR directly after quantizing the images to uint8. I used the code listed below. Is it possible that I am missing some important code for model inference?

```python
import os
import time

import cv2
import numpy as np
import scipy.io as scio
import torch
from torchvision import transforms
from skimage.metrics import peak_signal_noise_ratio as compare_psnr

# `model` (the pretrained network on GPU) and `model_dir` (output folder)
# are assumed to be defined earlier in the script.
mat_file = scio.loadmat('data/ValidationNoisyBlocksSrgb.mat')
data = mat_file['ValidationNoisyBlocksSrgb']
mat_file = scio.loadmat('data/ValidationGtBlocksSrgb.mat')
gt = mat_file['ValidationGtBlocksSrgb']

psnr = []
ave_time = 0
for p in range(40):
    for q in range(32):
        img = np.array(data[p, q, :, :, :])
        gt_img = np.array(gt[p, q, :, :, :])
        # ToTensor converts HWC uint8 in [0, 255] to CHW float in [0, 1]
        input = transforms.ToTensor()(img).unsqueeze(0).cuda()
        gt_i = transforms.ToTensor()(gt_img).unsqueeze(0).cuda()
        with torch.no_grad():  # this can save much memory
            torch.cuda.synchronize()
            start = time.time()
            out = model(input)
            torch.cuda.synchronize()
            end = time.time()
            ave_time = ave_time + end - start

            # quantize the prediction to uint8 and reorder CHW -> HWC
            out = torch.clamp(out, 0., 1.) * 255
            out_img = out.squeeze(0).cpu().numpy().astype('uint8')
            out_img = np.transpose(out_img, (1, 2, 0))

            # quantize the ground truth the same way
            gt_i = (gt_i * 255).squeeze(0).cpu().numpy().astype('uint8')
            gt_i = np.transpose(gt_i, (1, 2, 0))

            # save both as PNG (cv2 expects BGR, hence the channel flip)
            denoised_name = os.path.join(model_dir, 'denoised%d_%d.png' % (p + 1, q + 1))
            cv2.imwrite(denoised_name, out_img[:, :, ::-1])
            gt_name = os.path.join(model_dir, 'gt%d_%d.png' % (p + 1, q + 1))
            cv2.imwrite(gt_name, gt_i[:, :, ::-1])

            psnr1 = compare_psnr(gt_i, out_img, data_range=255)
            psnr.append(psnr1)

```

JimmyChame commented 3 years ago

Hi, I guess that you may be using an old version of our pretrained models. Please try testing with the latest shared pretrained model.