hardmaru / resnet-cppn-gan-tensorflow

Using Residual Generative Adversarial Networks and Variational Auto-encoder techniques to produce high resolution images.
125 stars 39 forks

Not an issue, just a question #1

Open menglin0320 opened 7 years ago

menglin0320 commented 7 years ago

tf.clip_by_global_norm(tf.gradients(self.d_loss_real, self.d_vars), self.grad_clip) — why do you use clip_by_global_norm? What do you want to avoid?

I changed your code to make a special version of an adversarial autoencoder, but the reconstru_mean underflows. Do you have any suggestions for solving the problem?

hardmaru commented 7 years ago

Hi menglin0320

Thanks for the message. If you want to clip by absolute value rather than by global norm, you can just replace the code with tf.clip_by_value, like in this example:

https://github.com/OlavHN/bnlstm/blob/master/test.py

It probably won't make much of a difference. I find tf.clip_by_value generally shortens training time a bit, while clipping by global norm is a little more stable.
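The distinction between the two clipping schemes can be sketched in plain NumPy (the function names here mirror the TensorFlow ops but are standalone illustrations, not the TF implementations): clip_by_value bounds each gradient element independently, which changes the gradient's direction, while clip_by_global_norm rescales all gradients jointly so their combined L2 norm is capped, preserving the direction.

```python
import numpy as np

def clip_by_value(grads, clip):
    # Clip each gradient element to [-clip, clip] independently.
    # This can change the direction of the overall gradient vector.
    return [np.clip(g, -clip, clip) for g in grads]

def clip_by_global_norm(grads, clip):
    # Rescale all gradients jointly so their combined L2 norm is at
    # most `clip`. Relative magnitudes (the direction) are preserved.
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = clip / max(global_norm, clip)  # scale = 1 when norm <= clip
    return [g * scale for g in grads]

grads = [np.array([3.0, -4.0])]            # global norm = 5
print(clip_by_value(grads, 1.0)[0])        # [ 1. -1.]  (direction changed)
print(clip_by_global_norm(grads, 1.0)[0])  # [ 0.6 -0.8] (direction kept)
```

For GAN discriminators, capping the global norm limits the size of any single update without distorting the update direction, which is one plausible reason for the choice in the original code.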

What do you mean by underflows? Is the reconstruction mean becoming a NaN? I also don't see the variable "reconstru_mean" in model.py.

menglin0320 commented 7 years ago

Sorry, my bad. You said you looked at the variational autoencoder written by Jan Hendrik Metzen. x_reconstr_mean is the image reconstructed by the variational autoencoder. And you are right, the mean is becoming a NaN. I used TensorBoard to plot the minimum of the reconstructed image; it decreases rapidly and eventually underflows. Do you have any advice on how to avoid it? I didn't read your generator function because I only want to write an encoder that combines a VAE and a GAN.

hardmaru commented 7 years ago

Okay, I see, so you want to implement a version of VAE + GAN based on normal pixel input, like in Jan's tutorial? Hmm, I never had any problems with the code in his tutorial. One thing I had to tweak was adding a small positive epsilon inside tf.log() to make sure it doesn't blow up.
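The epsilon trick can be illustrated with a NumPy version of the Bernoulli cross-entropy reconstruction loss used in Metzen's tutorial (function name and the exact eps value here are illustrative): whenever a reconstructed pixel hits exactly 0 or 1, log(0) = -inf appears and poisons the loss with NaN; adding a small constant inside each log keeps it finite.

```python
import numpy as np

def bernoulli_recon_loss(x, x_recon, eps=1e-10):
    # Cross-entropy reconstruction loss with a small epsilon inside the
    # logs, so a reconstruction pixel of exactly 0 or 1 can't produce
    # log(0) = -inf (which would turn the loss into NaN).
    return -np.sum(x * np.log(eps + x_recon)
                   + (1.0 - x) * np.log(eps + 1.0 - x_recon))

x = np.array([0.0, 1.0])
x_recon = np.array([0.0, 1.0])  # perfect reconstruction at the extremes
print(bernoulli_recon_loss(x, x_recon))  # finite and near 0, not NaN
```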

If you are just trying to make a version of VAE + GAN on small pixel images (like MNIST), maybe look here for a reference implementation? https://github.com/ikostrikov/TensorFlow-VAE-GAN-DRAW

menglin0320 commented 7 years ago

The problem is that the values that should be zero keep decreasing and getting closer to 0. In his code, I can see that one pixel of the reconstructed image is already 1.37205149e-30; I feel that with more epochs it will hit the same problem. I guess I should clip the values (add a threshold for small values) to avoid it.
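The thresholding idea described above could be sketched like this (a hypothetical guard, not code from the repo): clamp the reconstruction into [floor, 1 - floor] before it enters any log() term, so pixels drifting toward 0, like the 1.37e-30 value observed, can never underflow or produce -inf.

```python
import numpy as np

def safe_recon(x_recon, floor=1e-8):
    # Hypothetical guard: clamp reconstructed pixels into
    # [floor, 1 - floor] so values drifting toward 0 or 1 can't
    # underflow or blow up a downstream log() term.
    return np.clip(x_recon, floor, 1.0 - floor)

x_recon = np.array([1.37205149e-30, 0.5, 1.0])
print(safe_recon(x_recon))  # smallest pixel raised to 1e-08
```

This is complementary to the epsilon-inside-the-log fix: one bounds the loss computation, the other bounds the activations feeding it.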