Hi,
Thanks for your amazing work.
I would like to ask about one step described in "3.1 Experiment step" of the paper.
In the paper, you state that "G uses clean images from DIV2K [59], Flickr2K [60], BSD68 [61], Kodak24 [62], and Urban100 [63] to generate realistic noisy-clean image pairs". I assume this step belongs to the (b) training phase, and that the generated (fake) noisy images together with the corresponding real noisy images are fed into the two discriminators to train them. To make my assumption concrete, here is a rough PyTorch sketch of what I imagine this discriminator update to look like (the stand-in networks and all names are my own guesses, not taken from your code):
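```python
# Sketch of the discriminator update I have in mind for phase (b).
# G and D are trivial stand-ins just to make the snippet self-contained;
# only one of the two discriminators is shown, the other would be updated the same way.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))   # noise generator (stand-in)
D = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1))   # one discriminator (stand-in)

bce = nn.BCEWithLogitsLoss()
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)

clean = torch.rand(4, 3, 64, 64)        # clean patches, e.g. from DIV2K / Flickr2K
real_noisy = torch.rand(4, 3, 64, 64)   # real noisy patches -- this is the part I am unsure about

fake_noisy = G(clean).detach()          # generated (fake) noisy images

# The discriminator is trained to separate real noisy images from generated ones.
logits_real = D(real_noisy)
logits_fake = D(fake_noisy)
loss_D = bce(logits_real, torch.ones_like(logits_real)) + \
         bce(logits_fake, torch.zeros_like(logits_fake))

opt_D.zero_grad()
loss_D.backward()
opt_D.step()
```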
However, some of the datasets used in this phase (Flickr2K, Kodak24, Urban100) do not come with real noisy images, so I don't see how the two discriminators could be trained on them. Or do you simply add Gaussian noise to these three datasets and treat the result as the real noisy images, roughly as in the sketch below?
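```python
# The alternative I am guessing at: synthesize "real" noisy images by adding
# Gaussian noise to the clean datasets that have no real noisy counterparts.
# The noise level sigma is just an illustrative value, not from the paper.
import torch

sigma = 25.0 / 255.0
clean = torch.rand(4, 3, 64, 64)        # e.g. Flickr2K / Kodak24 / Urban100 patches
real_noisy = (clean + sigma * torch.randn_like(clean)).clamp(0.0, 1.0)
```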
Thanks in advance for the clarification.