wmyw96 opened this issue 6 years ago
The result images shown in the main body of the paper were generated with WGAN. After the paper was accepted, I ran some further experiments and found that the original GAN loss actually helps improve the quality of the output images on the datasets used in the paper. That said, I cannot conclude that the traditional GAN loss is always better than the WGAN loss within the DualGAN architecture.
If you are interested in exploring further, you may try WGAN-GP or other losses.
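For reference, the two objectives differ only in how the discriminator's outputs are scored. A minimal NumPy sketch of the two discriminator-side losses (illustrative only, not the repo's TensorFlow code):

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def gan_d_loss(d_real_logits, d_fake_logits):
    """Standard GAN discriminator loss:
    -log(sigmoid(D(real))) - log(1 - sigmoid(D(fake)))."""
    return np.mean(-np.log(sigmoid(d_real_logits))
                   - np.log(1.0 - sigmoid(d_fake_logits)))


def wgan_d_loss(d_real_scores, d_fake_scores):
    """WGAN critic loss: E[D(fake)] - E[D(real)].
    No sigmoid/log; requires a Lipschitz constraint on the critic
    (weight clipping in WGAN, gradient penalty in WGAN-GP)."""
    return np.mean(d_fake_scores) - np.mean(d_real_scores)


real = np.array([2.0, 1.5])   # critic scores on real samples
fake = np.array([-1.0, -0.5])  # critic scores on generated samples
print(wgan_d_loss(real, fake))  # -2.5
```

The practical difference: the WGAN critic outputs an unbounded score, while the standard GAN discriminator outputs a probability, which changes both the training dynamics and the constraints needed on the network.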
On 11 July 2018 at 16:28, Yihong Gu notifications@github.com wrote:
I read your code for the loss design and found that your implementation differs from the one proposed in the paper. So do you use the traditional GAN loss instead of the WGAN loss? Does that mean the WGAN loss might not be a good choice in practice?
Why is noise z not used? @duxingren14
Instead of explicitly adding random noise, I used dropout.
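For anyone curious about this trick (also used in pix2pix): the stochasticity comes from keeping dropout active at test time rather than from an explicit noise vector z. A minimal NumPy sketch, assuming inverted-dropout scaling:

```python
import numpy as np


def dropout(x, rate=0.5, rng=None):
    """Dropout that stays ON even at inference time, so each forward
    pass is stochastic -- it plays the role of the noise input z."""
    rng = rng if rng is not None else np.random.default_rng()
    mask = rng.random(x.shape) >= rate
    # Inverted dropout: scale survivors so the expected value is unchanged.
    return x * mask / (1.0 - rate)


x = np.ones(8)
# Two forward passes with different random states give different outputs,
# i.e. implicit noise injected into the generator.
out1 = dropout(x, rate=0.5, rng=np.random.default_rng(0))
out2 = dropout(x, rate=0.5, rng=np.random.default_rng(1))
```

Note this is only an illustration of the idea; in the repo the dropout layers live inside the generator network.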
```python
def preprocess_img(img, img_size=128, flip=False, is_test=False):
    img = scipy.misc.imresize(img, [img_size, img_size])
    if (not is_test) and flip and np.random.random() > 0.5:
        img = np.fliplr(img)
    return img
```

You did that? @duxingren14