eriklindernoren / Keras-GAN

Keras implementations of Generative Adversarial Networks.
MIT License
9.18k stars · 3.14k forks

The labels of valid and fake in wgan #136

Closed Pandarenlql closed 5 years ago

Pandarenlql commented 5 years ago

Hello, I have a question about the labels for valid and fake. Generally, we fill the valid array with 1 and the fake array with 0, so I don't understand why in wgan.py the author sets valid to -1 and fake to 1. I would appreciate help from anyone.

```python
# Adversarial ground truths
valid = -np.ones((batch_size, 1))
fake = np.ones((batch_size, 1))
```

MmDawN commented 5 years ago

Hello, have you solved this question?

MmDawN commented 5 years ago

I have found the official Keras implementation of WGAN-GP in the keras-contrib repository (https://github.com/keras-team/keras-contrib/blob/master/examples/improved_wgan.py#L300), and there they set valid to 1 and fake to -1. I think that might be the correct way to set the labels.

Pandarenlql commented 5 years ago

Thanks for your answer; I have solved the question. A few days ago I found a document that explained why they set valid to 1 and fake to -1, but I can't find the link now. In my understanding, if we set valid to 1 and fake to -1, then we can use `K.mean(y_true * y_pred)` to compute the Wasserstein distance easily: with these labels, only multiplications and additions are needed to get the Wasserstein estimate.
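To make the point above concrete, here is a minimal NumPy sketch of that loss (the critic scores and batch size are made up for illustration):

```python
import numpy as np

def wasserstein_loss(y_true, y_pred):
    # Keras-style loss: mean of the elementwise product, i.e. K.mean(y_true * y_pred).
    # With y_true = -1 for real and +1 for fake (this repo's convention),
    # minimizing it pushes critic scores up on real samples and down on fakes.
    return np.mean(y_true * y_pred)

# Hypothetical critic scores for a batch of 4 samples.
real_scores = np.array([[0.9], [1.1], [0.8], [1.2]])
fake_scores = np.array([[-0.5], [-0.7], [-0.4], [-0.6]])

valid = -np.ones((4, 1))  # labels used in this repo's wgan.py
fake = np.ones((4, 1))

loss_real = wasserstein_loss(valid, real_scores)  # = -mean(real_scores)
loss_fake = wasserstein_loss(fake, fake_scores)   # =  mean(fake_scores)
```

The keras-contrib convention (valid = 1, fake = -1) just flips the overall sign; both work as long as the labels for real and fake samples have opposite signs.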

adaxidedakaonang commented 5 years ago

Hello, I am a bit confused about why the valid label for D is set to 1. I guess this is done to compute the W-distance easily, instead of outputting a probability in [0, 1] that the data is real. Is that right?

Pandarenlql commented 5 years ago

Sorry for ignoring your question; I was busy with a competition these past days, so I had no time to answer. In my understanding, the author sets valid to -1 and fake to 1 so that x ~ Pr enters with a negative sign and x ~ Pg with a positive sign. Then

```python
d_loss_real = self.critic.train_on_batch(imgs, valid)
d_loss_fake = self.critic.train_on_batch(gen_imgs, fake)
d_loss = 0.5 * np.add(d_loss_fake, d_loss_real)
```

gives -W(Pr, Pg), so by minimizing -W(Pr, Pg) we get the maximum of W(Pr, Pg).
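The sign argument can be checked numerically. This sketch (with made-up critic outputs) verifies that the combined loss equals minus half the Wasserstein estimate, so minimizing it maximizes E[D(real)] - E[D(fake)]:

```python
import numpy as np

# Hypothetical critic outputs for a batch.
d_real = np.array([1.0, 1.5, 0.5, 1.0])
d_fake = np.array([-1.0, -0.5, -1.5, -1.0])

valid = -np.ones_like(d_real)  # label for real samples in this repo
fake = np.ones_like(d_fake)    # label for generated samples

# The per-batch losses the critic would be trained on.
loss_real = np.mean(valid * d_real)   # = -E[D(real)]
loss_fake = np.mean(fake * d_fake)    # =  E[D(fake)]
d_loss = 0.5 * (loss_real + loss_fake)

# Wasserstein estimate: E[D(real)] - E[D(fake)].
w_estimate = np.mean(d_real) - np.mean(d_fake)
# d_loss == -0.5 * w_estimate, so gradient descent on d_loss
# is gradient ascent on the Wasserstein estimate.
```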

adaxidedakaonang commented 5 years ago

Thank you~