duxingren14 / DualGAN

DualGAN-tensorflow: tensorflow implementation of DualGAN
Apache License 2.0

About the training loss #22

Open wmyw96 opened 6 years ago

wmyw96 commented 6 years ago

I read your code regarding the loss design and found that your implementation differs from what is proposed in the paper. So you use the traditional GAN loss instead of the WGAN loss? Does that mean the WGAN loss might not be a good choice in practice?

duxingren14 commented 6 years ago

The result images shown in the main body of the paper were generated with WGAN. After the paper was accepted, I ran some further experiments and found that the original GAN loss actually helps improve the quality of the output images for the datasets used in the paper, though I cannot conclude that the traditional GAN loss is always better than the WGAN loss for the DualGAN architecture.

If you are interested in further exploration, you may try WGAN-GP or other losses.
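
For reference, here is a minimal TensorFlow 1.x sketch (not taken from this repository) contrasting the vanilla GAN losses with the WGAN losses; `D_real_logits` and `D_fake_logits` are assumed placeholder names for the discriminator's raw outputs, not the repo's variables:

```python
import tensorflow as tf

# Stand-ins for the discriminator's raw (pre-sigmoid) outputs on real and
# generated samples; names are assumptions for illustration only.
D_real_logits = tf.placeholder(tf.float32, [None, 1])
D_fake_logits = tf.placeholder(tf.float32, [None, 1])

# Vanilla (cross-entropy) GAN losses.
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=D_real_logits, labels=tf.ones_like(D_real_logits)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=D_fake_logits, labels=tf.zeros_like(D_fake_logits)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=D_fake_logits, labels=tf.ones_like(D_fake_logits)))

# WGAN critic/generator losses: no sigmoid, the critic maximizes the score gap,
# and a Lipschitz constraint (weight clipping, or a gradient penalty in WGAN-GP)
# is applied separately.
wgan_d_loss = tf.reduce_mean(D_fake_logits) - tf.reduce_mean(D_real_logits)
wgan_g_loss = -tf.reduce_mean(D_fake_logits)
```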

chl916185 commented 6 years ago

Why is the noise z not used? @duxingren14

duxingren14 commented 6 years ago

Instead of explicitly adding random noise, I used dropout.
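
A minimal sketch of that idea (assumed names, not the repository's exact code): dropout inside the generator's decoder, kept active at test time as well, provides the stochasticity that an explicit noise vector z would otherwise add, as in pix2pix:

```python
import tensorflow as tf

def decoder_block(x, filters, keep_prob=0.5):
    # Transposed convolution upsamples the feature map.
    h = tf.layers.conv2d_transpose(x, filters, kernel_size=4, strides=2,
                                   padding='same')
    h = tf.nn.relu(h)
    # Dropout applied unconditionally (train and test), so it behaves as
    # injected noise rather than only as a regularizer.
    return tf.nn.dropout(h, keep_prob=keep_prob)
```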

chl916185 commented 6 years ago

"def preprocess_img(img, img_size=128, flip=False, is_test=False): img = scipy.misc.imresize(img, [img_size, img_size]) if (not is_test) and flip and np.random.random() > 0.5: img = np.fliplr(img) return img" You did that? @duxingren14