Hi, thank you for sharing such nice work!
In your paper, I noticed that the C2N generator was trained on clean images from the SIDD dataset and noisy images from the DND dataset. When obtaining your denoising results on SIDD and DND, did you simply apply the generator to the SIDD clean images, train the denoiser on the generated noisy-clean pairs, and then evaluate that denoiser on both the SIDD and DND datasets?
In other words, are the denoising results on both DND and SIDD obtained from the same denoiser, trained on the same set of generated noisy-clean pairs?
Many thanks!