igul222 / improved_wgan_training

Code for reproducing experiments in "Improved Training of Wasserstein GANs"
MIT License
2.35k stars, 670 forks

WGAN-GP test on the CelebA dataset #3

Closed zhangqianhui closed 7 years ago

zhangqianhui commented 7 years ago

I tested WGAN-GP on the CelebA dataset, but the quality of the generated images is worse than the original DCGAN's. I only changed the code below, starting from basic WGAN and using the DCGAN generator and discriminator.


# gradient penalty
differences = self.fake_images - self.images
alpha = tf.random_uniform(shape=[self.batch_size, 1], minval=0., maxval=1.)
interpolates = self.images + (alpha * differences)
gradients = tf.gradients(self.critic(interpolates, True), [interpolates])[0]
# 2-norm
slopes = tf.sqrt(tf.reduce_sum(tf.square(gradients), reduction_indices=[1]))
gradient_penalty = tf.reduce_mean((slopes - 1.)**2)

What could be the reason?

igul222 commented 7 years ago

That code looks correct. It's hard to say without seeing the rest of the code, but if you point me to the repo I can try and debug.

zhangqianhui commented 7 years ago

This repo: https://github.com/zhangqianhui/wgan-gp-debug

Thank you !

zhangqianhui commented 7 years ago

Generated samples at epoch 4, iter = 12601:

https://github.com/zhangqianhui/wgan-gp-debug/blob/master/sample/train_04_12601.png

At epoch 6, iter = 20401:

https://github.com/zhangqianhui/wgan-gp-debug/blob/master/sample/train_06_20401.png

martinarjovsky commented 7 years ago

Could you share the learning curve? (I.e. negative of the critic's loss)

zhangqianhui commented 7 years ago

The curve: https://github.com/zhangqianhui/wgan-gp-debug/blob/master/sample/curve.png

martinarjovsky commented 7 years ago

That doesn't look good. @igul222 did you ever see something like that?

Could you share the full code?

Best :) Martin

zhangqianhui commented 7 years ago

@martinarjovsky Whose code?

martinarjovsky commented 7 years ago

Yours!

wchen342 commented 7 years ago

I don't know whether it is related, but in my experiments with WGAN-GP the loss of G becomes negative, which is different from the original WGAN, where the loss of G is generally positive. Is that normal?

zhangqianhui commented 7 years ago

https://github.com/zhangqianhui/wgan-gp-debug @martinarjovsky

zhangqianhui commented 7 years ago

@igul222 @martinarjovsky Hello, have you found the reason for the lower-quality face generation?

martinarjovsky commented 7 years ago

Hi! I haven't looked at the code yet. Can you run ishaan's code (the one on this repo) and see if it gives the same results?

zhangqianhui commented 7 years ago

@martinarjovsky OK

zhangqianhui commented 7 years ago

@martinarjovsky But his code hasn't been trained on the CelebA dataset, so which architecture should I use? Is it OK to use gan_64x64.py with DCGAN's architecture?

martinarjovsky commented 7 years ago

That should be fine.

zhangqianhui commented 7 years ago

I tested this project, and it can generate very realistic face images after training on the CelebA dataset. But I can't find the reason my code doesn't work as well as it does.

igul222 commented 7 years ago

Here are some differences I found between your implementation and ours which might be responsible:

Hope this helps!

zhangqianhui commented 7 years ago

@igul222 Thanks, I have solved the problem!

martinarjovsky commented 7 years ago

Cool! What was the issue?

zhangqianhui commented 7 years ago

@martinarjovsky igul222: Gan.py#61: It looks like self.images and self.fake_images both have shape [self.batch_size, 64, 64, self.channel]. In this case, alpha should have shape [self.batch_size, 1, 1, 1], and also reduction_indices on line 67 should be [1,2,3].
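In NumPy terms, the fix amounts to drawing one alpha per sample with shape [batch_size, 1, 1, 1], so it broadcasts over the H, W, and C axes, and taking the gradient norm over all non-batch axes. A minimal sketch, with a random array standing in for the critic's gradient:

```python
import numpy as np

batch_size, h, w, c = 4, 64, 64, 3
rng = np.random.default_rng(0)
real = rng.normal(size=(batch_size, h, w, c))   # stand-in for self.images
fake = rng.normal(size=(batch_size, h, w, c))   # stand-in for self.fake_images

# One epsilon per sample, broadcast over every image axis:
# shape [batch_size, 1, 1, 1], NOT [batch_size, 1].
alpha = rng.uniform(size=(batch_size, 1, 1, 1))
interpolates = real + alpha * (fake - real)

# Stand-in for tf.gradients(critic(interpolates), interpolates)[0];
# it has the same shape as the interpolated images.
grads = rng.normal(size=interpolates.shape)

# The 2-norm must reduce over ALL non-batch axes, i.e. (1, 2, 3),
# yielding exactly one slope per sample.
slopes = np.sqrt(np.sum(np.square(grads), axis=(1, 2, 3)))
gradient_penalty = np.mean((slopes - 1.0) ** 2)

assert interpolates.shape == (batch_size, h, w, c)
assert slopes.shape == (batch_size,)
```

With alpha shaped [batch_size, 1], the multiplication against a 4-D image tensor would not broadcast as intended, and reducing only over axis 1 would not give one gradient norm per sample.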

zhangqianhui commented 7 years ago

And my nextbatch() also had some problems.

zhangqianhui commented 7 years ago

And I think layer normalization is very important.

Thanks! @igul222 @martinarjovsky
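For reference, layer normalization normalizes each sample over its own feature axes rather than across the batch, which is why it is safe in a critic trained with a per-sample gradient penalty. A minimal NumPy sketch (the real critic layer would also learn scale and shift parameters):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each sample over all of its own features (H, W, C),
    # independently of the rest of the batch -- unlike batch norm,
    # which couples samples through batch statistics.
    mean = x.mean(axis=(1, 2, 3), keepdims=True)
    var = x.var(axis=(1, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.default_rng(1).normal(loc=3.0, scale=2.0, size=(2, 8, 8, 4))
y = layer_norm(x)

# Each sample is now individually zero-mean and (approximately) unit-variance.
assert np.allclose(y.mean(axis=(1, 2, 3)), 0.0, atol=1e-6)
assert np.allclose(y.var(axis=(1, 2, 3)), 1.0, atol=1e-3)
```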

timho102003 commented 6 years ago

Hi @zhangqianhui, I'm new to WGAN-GP. If we define the G loss as g_loss = -D(Fake_Image), should we expect the loss curve to converge and decrease, and to increase instead if we define it as g_loss = D(Fake_Image)? For the critic loss, if we define c_loss = D(Fake_Image) - D(Real_Image), does that mean we expect it to converge and decrease, and is c_loss = D(Fake_Image) - D(Real_Image) what you called the "negative of the critic loss"?

zhangqianhui commented 6 years ago

The critic loss should be negative, because the critic loss is the negative of the divergence between the real and fake sample distributions. You should read the WGAN-GP paper for more details. And g_loss = -c_loss = -D(fake_image) + D(real_image), but the gradient of D(real_image) does not affect the G network, so g_loss = -D(fake_image).
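The relationships above can be written out numerically. A toy NumPy sketch with made-up critic scores and an arbitrary penalty value, purely to show the signs:

```python
import numpy as np

d_real = np.array([1.2, 0.8, 1.0])    # made-up critic scores D(real)
d_fake = np.array([-0.5, 0.1, -0.2])  # made-up critic scores D(fake)
gp = 0.3                              # made-up gradient penalty term

# Critic minimizes: D(fake) - D(real) + lambda * GP.
c_loss = d_fake.mean() - d_real.mean() + gp

# The "negative of the critic loss" (without the penalty) estimates
# the Wasserstein distance, which should shrink toward 0 as G improves.
w_estimate = d_real.mean() - d_fake.mean()

# Generator minimizes -D(fake); D(real) has zero gradient w.r.t. G,
# so the D(real) term can be dropped from g_loss.
g_loss = -d_fake.mean()

assert np.isclose(c_loss, -0.9)
assert np.isclose(w_estimate, 1.2)
assert np.isclose(g_loss, 0.2)
```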

zhangqianhui commented 6 years ago

@timho102003

zhangqianhui commented 6 years ago

Are you doing classification after training WGAN?

timho102003 commented 6 years ago

Actually, I've done multi-task learning on the discriminator so that it not only decides real/fake but also classifies identities and other information from faces, which already achieves good performance on protocols such as MPIE. Now I'm trying to merge WGAN into that implementation. In my network there is an average pooling layer (output Bx320x1x1) before an fc layer (in my previous version, the fully connected layer did the multi-task work such as real/fake, identity, and so on). I took the real/fake task away from the fully connected layer and used the "view" function to reshape the average pooling output (Bx320x1x1 -> Bx320), which serves as the discriminator output when calculating the Wasserstein loss. From my previous experience, the model seems to be training in the right direction. However, the negative critic loss didn't start from a large negative number and finally reach 0 through training.

Generated images (epoch 20), n_critic = 5, LR = 0.0001: https://imgur.com/a/QVizP
Negative critic curve, -D(fake_image) + D(real_image): https://imgur.com/a/jLn3q
D Wasserstein loss, D(fake_image) - D(real_image) + gradient penalty: https://imgur.com/a/G5Wvl
G Wasserstein loss, -D(fake_image): https://imgur.com/a/F535n

y601757692l commented 6 years ago

Hi, I am also trying to train WGAN-GP on CelebA (cropped and resized to 64x64). I just modified DATA_DIR in gan_64x64.py, but I got this error: IOError: [Errno 2] No such file or directory: '/data-4T-B/yelu/data/dcgan-completion.tensorflow/aligned/img_align_celeba_png/train_64x64/train_64x64/0927649.png'. Could you show me your code? Thanks so much!