Closed: yvtheja closed this issue 6 years ago.
@yvtheja

> the input images are not normalized while feeding them to the generator or discriminator
The normalization is done within the computation graph itself; it is all handled by the following function when building the graph: https://github.com/ImagingLab/Colorizing-with-GANs/blob/787daf2869ffa7da1ee1831f3f0e1b8c011f8c5f/src/models.py#L180
The non-normalized RGB image is fed to both the generator and the discriminator as the prior for the conditional GAN, and it is normalized before being passed to the first layer.
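As a rough illustration of that in-graph normalization (the helper name and the simple [0, 255] to [-1, 1] rescaling below are assumptions for this sketch, not the repository's exact code):

```python
import numpy as np
import tensorflow as tf

def preprocess_rgb(images_uint8):
    """Hypothetical helper (not the repository's exact function):
    scale 8-bit RGB values from [0, 255] into [-1, 1]."""
    images = tf.cast(images_uint8, tf.float32)
    return images / 127.5 - 1.0

# The raw RGB prior is fed in unchanged; the very first op of the
# network rescales it, so normalization lives inside the graph itself.
batch = np.random.randint(0, 256, size=(4, 32, 32, 3), dtype=np.uint8)
normalized = preprocess_rgb(tf.constant(batch))  # values now in [-1, 1]
```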
> the discriminator_fake is getting values which are between -1 and 1 and discriminator_real is getting values which are between 0 and 255
The output of the generator is a tanh function, which has a range of [-1, 1]. To make the real-image input consistent with the fake one, the same preprocess function is used to normalize the input to the discriminator. Here's the code that normalizes the LAB color space to the [-1, 1] range.
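A minimal sketch of such a mapping (the repository's actual preprocess may differ; the channel ranges and function names below are assumptions):

```python
import numpy as np

def normalize_lab(lab):
    """Map LAB channels into [-1, 1] so real images match the generator's
    tanh output. Assumes L in [0, 100] and a, b roughly in [-110, 110];
    these ranges and names are illustrative, not the repository's code."""
    lab = lab.astype(np.float32)
    l = lab[..., 0:1] / 50.0 - 1.0   # [0, 100]    -> [-1, 1]
    ab = lab[..., 1:3] / 110.0       # [-110, 110] -> [-1, 1]
    return np.concatenate([l, ab], axis=-1)

def denormalize_lab(lab_norm):
    """Inverse mapping, e.g. to turn generator output back into LAB."""
    l = (lab_norm[..., 0:1] + 1.0) * 50.0
    ab = lab_norm[..., 1:3] * 110.0
    return np.concatenate([l, ab], axis=-1)
```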
@KamyarNazeri, I assumed the preprocess function was only doing color conversion and skipped looking into it.
Thanks a lot for your response.
Hello all,
I have seen that the input images are not normalized while feeding them to the generator or discriminator. Also, discriminator_fake is getting values between -1 and 1, while discriminator_real is getting values between 0 and 255. They shouldn't be trained with such inconsistent inputs.
Please let me know if I am missing something.
Thank you :)