Closed zhuyu-cs closed 2 years ago
My architecture used in Neighbor2Neighbor is the standard U-Net followed by three convolutional layers, just as selfsupervised-denoising implements it in PyTorch. The channel sizes are also kept the same. Since the PSNRs are really high for all test sets, I think the gap may come from different code implementations of the evaluation process. You can run the baseline methods (N2C, N2N) for Gaussian noise (std ∈ [5, 50]) to see what happens.
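As a minimal sketch of the kind of evaluation detail that can differ between codebases, here is one common way PSNR is computed with NumPy. The function name and the choice of `data_range`, dtype casting, and whether outputs are clipped/quantized to uint8 before scoring are all assumptions, not the authors' actual evaluation code; such choices can easily shift reported PSNR by a few tenths of a dB.

```python
import numpy as np

def psnr(clean, noisy, data_range=255.0):
    """PSNR in dB between a reference image and a denoised image.

    Hypothetical evaluation helper: whether `clean`/`noisy` are
    clipped to [0, data_range] and rounded to uint8 beforehand
    varies between codebases and affects the result.
    """
    clean = np.asarray(clean, dtype=np.float64)
    noisy = np.asarray(noisy, dtype=np.float64)
    mse = np.mean((clean - noisy) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Comparing such a helper line by line against the reference repository's evaluation (including any clipping/rounding step) is one way to rule out the evaluation process as the source of the discrepancy.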
Thanks for your kind reply.
Hi, Tao: Thanks for your great work. I met some challenges when reimplementing the results in Neighbor2Neighbor. Specifically, I strictly followed the modified U-Net in https://github.com/NVlabs/selfsupervised-denoising and implemented the training code in PyTorch, but I cannot reproduce the results in Table 1. For Gaussian noise (std: 5-50), the PSNRs on BSD300, Set14, and KODAK are 31.18, 31.20, and 32.44 respectively. Since I strictly followed the experimental setup reported in Neighbor2Neighbor, I guess your architecture may be slightly different from the aforementioned modified U-Net. If possible, I would like to know the difference or any other details.