Closed: nishanthballal-9 closed this issue 4 years ago
Of course, you can use cross-entropy (as in the original DCGAN). I used MSE because I read the LSGAN (Least Squares GAN) paper, which argues that a least-squares loss makes the training process more stable. Please check the LSGAN paper: https://arxiv.org/abs/1611.04076
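For illustration, here is a minimal Keras sketch of how small the difference is in code (the architecture and hyperparameters below are hypothetical, not taken from this repo): the LSGAN setup just swaps the loss passed to `compile()` and drops the sigmoid on the discriminator output.

```python
# Minimal sketch (Keras): DCGAN vs. LSGAN differ mainly in the
# discriminator's output activation and the loss used in compile().
from tensorflow.keras import layers, models, optimizers

def build_discriminator(input_shape=(28, 28, 1)):
    return models.Sequential([
        layers.Conv2D(64, 4, strides=2, padding='same', input_shape=input_shape),
        layers.LeakyReLU(0.2),
        layers.Conv2D(128, 4, strides=2, padding='same'),
        layers.LeakyReLU(0.2),
        layers.Flatten(),
        # LSGAN: linear output, regressed toward the 0/1 labels.
        # A DCGAN discriminator would use activation='sigmoid' here.
        layers.Dense(1),
    ])

d = build_discriminator()
# LSGAN (least-squares) objective: mean squared error on real/fake labels.
d.compile(loss='mse', optimizer=optimizers.Adam(2e-4, 0.5))
# The original DCGAN objective would instead be:
# d.compile(loss='binary_crossentropy', optimizer=optimizers.Adam(2e-4, 0.5))
```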
From my own experience, Conv2dTranspose works better than upsampling. I think the reason is that a transposed convolution adds more nonlinearity than upsampling, since plain upsampling is not trainable... well, I am not sure of the exact reason.
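To make the contrast concrete, here is a sketch of the two generator upsampling blocks (layer sizes are made up for illustration): `Conv2DTranspose` learns its upsampling kernel, while `UpSampling2D` is a fixed, parameter-free interpolation that relies on the following convolution to do the learning.

```python
# Minimal sketch (Keras) of the two 2x-upsampling options for a generator.
from tensorflow.keras import layers

def transposed_conv_block(x, filters):
    # Learnable upsampling: the transposed convolution's kernel is trained,
    # so the upsampling itself adapts to the data.
    x = layers.Conv2DTranspose(filters, 4, strides=2, padding='same')(x)
    return layers.LeakyReLU(0.2)(x)

def upsampling_block(x, filters):
    # Fixed nearest-neighbor upsampling (no trainable parameters),
    # followed by an ordinary convolution.
    x = layers.UpSampling2D(size=2)(x)
    x = layers.Conv2D(filters, 3, padding='same')(x)
    return layers.LeakyReLU(0.2)(x)

# Both map e.g. (7, 7, 128) -> (14, 14, 64):
inp = layers.Input((7, 7, 128))
out_a = transposed_conv_block(inp, 64)
out_b = upsampling_block(inp, 64)
```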
Why are we using 'mse' as the loss function for both the generator and the discriminator? Shouldn't we be using 'binary_crossentropy' instead?
Another doubt I had was the reason for using Conv2dTranspose layers instead of Upsampling layers.