zhangqianhui closed this issue 7 years ago
That code looks correct. It's hard to say without seeing the rest of the code, but if you point me to the repo I can try and debug.
This repo: https://github.com/zhangqianhui/wgan-gp-debug
Thank you!
Generated samples at epoch 4, iter = 12601:
https://github.com/zhangqianhui/wgan-gp-debug/blob/master/sample/train_04_12601.png
At epoch 6, iter = 20401:
https://github.com/zhangqianhui/wgan-gp-debug/blob/master/sample/train_06_20401.png
Could you share the learning curve? (i.e., the negative of the critic's loss)
That doesn't look good. @igul222 did you ever see something like that?
Could you share the full code?
Best :) Martin
@martinarjovsky Whose code?
Yours!
I don't know whether it's related, but in my WGAN-GP experiments the loss of G becomes negative, which is different from the original WGAN, where the loss of G is generally positive. Is that normal?
https://github.com/zhangqianhui/wgan-gp-debug @martinarjovsky
@igul222 @martinarjovsky Hello, have you found the reason for the lower-quality face generation?
Hi! I haven't looked at the code yet. Can you run Ishaan's code (the one in this repo) and see if it gives the same results?
@martinarjovsky OK
@martinarjovsky But his code hasn't been trained on the CelebA dataset, so which architecture should I use? Is it OK to use gan_64x64.py with DCGAN's architecture?
That should be fine.
I tested this project, and it can generate very realistic face images after training on the CelebA dataset. But I can't find the reason why my code doesn't work the same way.
Here are some differences I found between your implementation and ours which might be responsible:

- Gan.py#141: You train the critic for 100 iters every 500 steps. We don't do this, and it's probably responsible for the spikes in the loss curve. Try removing it.
- Gan.py#61: It looks like `self.images` and `self.fake_images` both have shape `[self.batch_size, 64, 64, self.channel]`. In this case, `alpha` should have shape `[self.batch_size, 1, 1, 1]`, and `reduction_indices` on line 67 should be `[1, 2, 3]` (see the sketch below).

Hope this helps!
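As a minimal illustration of that second point, here's a TF1-style sketch of the interpolation and gradient penalty with a per-sample alpha; the `critic` function, shapes, and values are illustrative assumptions, not code from either repo:

```python
import tensorflow as tf

batch_size, channel = 64, 3  # illustrative values

def critic(x):
    # Stand-in for the discriminator: any network that maps each image
    # to a single scalar score works here.
    h = tf.layers.conv2d(x, filters=8, kernel_size=4, strides=2, padding='same')
    return tf.reduce_mean(h, axis=[1, 2, 3])

real_images = tf.placeholder(tf.float32, [batch_size, 64, 64, channel])
fake_images = tf.placeholder(tf.float32, [batch_size, 64, 64, channel])

# One alpha per sample, broadcast over height, width, and channels --
# not a separate alpha per pixel.
alpha = tf.random_uniform([batch_size, 1, 1, 1], minval=0., maxval=1.)
interpolates = real_images + alpha * (fake_images - real_images)

grads = tf.gradients(critic(interpolates), [interpolates])[0]
# Sum over all non-batch axes so each sample yields a single gradient norm.
slopes = tf.sqrt(tf.reduce_sum(tf.square(grads), reduction_indices=[1, 2, 3]))
gradient_penalty = tf.reduce_mean((slopes - 1.) ** 2)
```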
@igul222 Thanks, I have solved the problem!
Cool! What was the issue?
@martinarjovsky It was what igul222 pointed out:

> Gan.py#61: It looks like `self.images` and `self.fake_images` both have shape `[self.batch_size, 64, 64, self.channel]`. In this case, `alpha` should have shape `[self.batch_size, 1, 1, 1]`, and `reduction_indices` on line 67 should be `[1, 2, 3]`.
And my `nextbatch()` also had some problems. I also think layer normalization is very important (see the sketch after this message).
Thanks! @igul222 @martinarjovsky
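For readers landing here, a minimal sketch of what "use layer normalization" means for the critic, assuming the TF1 API (`critic_block` is a hypothetical helper, not code from the repo). WGAN-GP avoids batch norm in the critic because the gradient penalty is computed per sample:

```python
import tensorflow as tf

def critic_block(x, filters):
    # Conv -> layer norm -> LeakyReLU. Layer norm normalizes each sample
    # independently, so unlike batch norm it doesn't mix statistics across
    # samples and stays compatible with the per-sample gradient penalty.
    h = tf.layers.conv2d(x, filters, kernel_size=5, strides=2, padding='same')
    h = tf.contrib.layers.layer_norm(h)
    return tf.nn.leaky_relu(h, alpha=0.2)
```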
Hi @zhangqianhui, I'm new to WGAN-GP. I'm wondering: if we define the G loss as `g_loss = -D(Fake_Image)`, are we expecting the loss curve to converge and be minimized, and a maximized loss curve if we instead define it as `g_loss = D(Fake_Image)`?
For the critic loss, if we define it as `c_loss = D(Fake_Image) - D(Real_Image)`, does that mean we're expecting the loss to converge and be minimized, and is `c_loss = D(Fake_Image) - D(Real_Image)` what you called the "negative of critic loss"?
The critic loss should be negative, because the critic loss is the negative of the divergence between the real-sample distribution and the fake-sample distribution. You should read the WGAN-GP paper for more details. And g_loss = -c_loss = -D(fake_image) + D(real_image), but the gradient of D(real_image) does not affect the G network, so g_loss = -D(fake_image).
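To make the signs concrete, here's a minimal TF1-style sketch; `d_real`, `d_fake`, and `gradient_penalty` are placeholder names, not code from the repo:

```python
import tensorflow as tf

d_real = tf.placeholder(tf.float32, [None])  # critic scores D(real_image)
d_fake = tf.placeholder(tf.float32, [None])  # critic scores D(fake_image)
gradient_penalty = tf.placeholder(tf.float32, [])
LAMBDA = 10.0  # penalty coefficient from the WGAN-GP paper

# The critic minimizes D(fake) - D(real) + penalty.
c_loss = tf.reduce_mean(d_fake) - tf.reduce_mean(d_real) + LAMBDA * gradient_penalty

# The generator minimizes -D(fake); the D(real_image) term has no
# gradient with respect to G, so it is dropped.
g_loss = -tf.reduce_mean(d_fake)

# The curve worth plotting is the negative critic loss (without the penalty),
# an estimate of the Wasserstein distance; it should trend toward 0.
neg_critic_loss = tf.reduce_mean(d_real) - tf.reduce_mean(d_fake)
```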
@timho102003 Are you doing classification after training the WGAN?
Actually, I've made the discriminator multi-task: it not only decides real/fake but also classifies identities and other information from the face, and that setup already performs well on protocols such as MPIE. Now I'm trying to merge WGAN into the implementation. In my network there is an average pooling layer (output Bx320x1x1) before an fc layer (in my previous version, the fully connected layer did the multi-task work: real/fake, identity, ...). I took the real/fake task away from the fully connected layer and use the `view` function to reshape the average-pooling output (Bx320x1x1 -> Bx320), which serves as the discriminator output when computing the Wasserstein loss. From my previous experience, the model seems to be training in the right direction. However, the negative critic loss didn't start from a large negative number and then reach 0 through training. (See the sketch after the links below.)
Generated images (20 epochs), number of critic iterations = 5, LR = 0.0001: https://imgur.com/a/QVizP
Negative critic curve, -D(fake_image) + D(real_image): https://imgur.com/a/jLn3q
D Wasserstein loss, D(fake_image) - D(real_image) + gradient penalty: https://imgur.com/a/G5Wvl
G Wasserstein loss, -D(fake_image): https://imgur.com/a/F535n
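If I follow the setup described above, here's a TF1-style sketch of such a multi-task critic head (all names and shapes here are my assumptions; the actual code is PyTorch):

```python
import tensorflow as tf

def discriminator_heads(features, num_identities):
    # features: shared conv output, e.g. [B, 4, 4, 320] (shape assumed).
    pooled = tf.reduce_mean(features, axis=[1, 2])  # global average pool -> [B, 320]
    # Wasserstein critic branch: one unbounded scalar per sample, no sigmoid.
    critic_score = tf.layers.dense(pooled, units=1)  # [B, 1]
    # Auxiliary identity-classification branch for the multi-task setup.
    identity_logits = tf.layers.dense(pooled, units=num_identities)
    return critic_score, identity_logits
```

One thing worth double-checking in your version: the Wasserstein loss expects a single scalar score per sample, so the 320-dim pooled vector usually goes through one more linear layer before the loss.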
Hi, now I am also trying to train WGAN-GP on CelebA (cropped and resized to 64x64). I just modified DATA_DIR in gan_64x64.py, but I got this error:
IOError: [Errno 2] No such file or directory: '/data-4T-B/yelu/data/dcgan-completion.tensorflow/aligned/img_align_celeba_png/train_64x64/train_64x64/0927649.png'
Could you show me your code? Thanks so much!
I tested WGAN-GP on the CelebA dataset, but the quality of the generated images is worse than with the original DCGAN. I only changed the code below relative to the basic WGAN, using DCGAN's generator and discriminator. What could be the reason?