Closed: menglin0320 closed this issue 7 years ago
Yes, see:
# Calculate the losses specific to encoder, generator, decoder
L_e = tf.clip_by_value(KL_loss * KL_param + LL_loss, -100, 100)
L_g = tf.clip_by_value(LL_loss * LL_param + G_loss * G_param, -100, 100)
and then
grads_e = opt_E.compute_gradients(L_e, var_list=E_params)
grads_g = opt_G.compute_gradients(L_g, var_list=G_params)
etc.
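For reference, the clipping step above can be sketched with plain Python scalars (the real code operates on TensorFlow tensors; the loss values and weight constants below are hypothetical placeholders, not numbers from the repo):

```python
# Minimal sketch of the loss bookkeeping above, using hypothetical
# scalar loss values instead of TensorFlow tensors.
def clip(x, lo, hi):
    # Same semantics as tf.clip_by_value for a scalar.
    return max(lo, min(hi, x))

# Hypothetical per-batch loss values and weights.
KL_loss, LL_loss, G_loss = 3.0, 250.0, 1.5
KL_param, LL_param, G_param = 1.0, 1.0, 1.0

# Encoder loss: KL term plus reconstruction likelihood, clipped to [-100, 100].
L_e = clip(KL_loss * KL_param + LL_loss, -100, 100)
# Generator loss: reconstruction plus adversarial term, clipped likewise.
L_g = clip(LL_loss * LL_param + G_loss * G_param, -100, 100)

print(L_e)  # prints 100 (253.0 clipped to the upper bound)
print(L_g)  # prints 100
```

The point of passing `var_list=E_params` / `var_list=G_params` afterwards is that each optimizer only computes and applies gradients for its own network's variables, even though the losses share terms such as `LL_loss`.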
My bad, thanks. I'm trying to write my own version. From your experience, when should the mean squared error between the real image and the reconstructed image start to decrease? Should it decrease within the first few epochs?
The first few epochs sounds about right; you can try running my code for a few epochs and taking a look. If you're trying to write a GAN from scratch, you should check out the Wasserstein GAN algorithm, which just came out. Apparently it optimizes much more stably than this kind of GAN, and its loss actually tracks sample quality. I haven't read the paper yet, though.
https://arxiv.org/abs/1701.07875
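For context, the core idea of the linked paper can be sketched in a few lines: the critic loss is the difference of mean critic scores on real versus generated samples, and the original WGAN enforces the Lipschitz constraint by clipping critic weights into a small box after each update. The scores and weights below are hypothetical placeholders, not output of a real network:

```python
# Hedged sketch of the WGAN critic objective (arXiv:1701.07875).
def critic_loss(real_scores, fake_scores):
    # Wasserstein estimate: mean(f(real)) - mean(f(fake)). The critic
    # maximizes this quantity, so the minimized loss is its negation.
    return -(sum(real_scores) / len(real_scores)
             - sum(fake_scores) / len(fake_scores))

def clip_weights(weights, c=0.01):
    # Weight clipping from the original WGAN paper: every critic weight
    # is forced into [-c, c] after each gradient step.
    return [max(-c, min(c, w)) for w in weights]

# Hypothetical critic scores for a batch of real and fake samples.
print(critic_loss([0.9, 0.8], [0.1, 0.2]))  # close to -0.7
print(clip_weights([0.5, -0.3, 0.005]))     # prints [0.01, -0.01, 0.005]
```

Later follow-up work (WGAN-GP) replaces the weight clipping with a gradient penalty, which tends to train better in practice.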
Thanks for your help!
Did you optimize LL_loss?