yaxingwang opened 5 years ago
wgan-gp took a strangely long time. I haven't found a cause yet.
Thanks. In fact, wgan-lp does not work either.
I have the same problem. It seems the gradient penalty can't be back-propagated successfully.
As far as I can tell, tf.gradients() gets stuck while computing the gradients. But almost all WGAN-GP gradient penalty code is implemented this way.
I can't solve this. Does anyone have suggestions?
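For context, the term being back-propagated here is the WGAN-GP penalty λ·E[(‖∇_x̂ D(x̂)‖₂ − 1)²]. A minimal NumPy sketch of just the penalty arithmetic (the function name is hypothetical, not from this repo; in TensorFlow the `grads` argument would come from tf.gradients of the critic output w.r.t. the interpolated samples):

```python
import numpy as np

def gradient_penalty(grads, lam=10.0):
    """Penalty lam * mean((||grad||_2 - 1)^2) over the batch.

    grads: array of shape (batch, ...) holding the critic's gradients
    w.r.t. the interpolated samples x_hat.
    """
    flat = grads.reshape(grads.shape[0], -1)      # flatten per sample
    norms = np.sqrt(np.sum(flat ** 2, axis=1))    # per-sample L2 norm
    return lam * np.mean((norms - 1.0) ** 2)

# Gradients that already have unit norm incur zero penalty.
unit = np.zeros((4, 3))
unit[:, 0] = 1.0
print(gradient_penalty(unit))  # 0.0
```

If tf.gradients hangs on this term, the cost is usually graph construction of the second-order gradients needed by .minimize(), which can be very slow for large critics rather than truly stuck.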
I wonder why there is nothing in the results folder during the training phase. I look forward to hearing from you.
I tried to use wgan-gp; it got stuck for a long time. At that point I even thought it didn't work at all.
Hi @taki0112
Thanks for your contribution. I am trying your code. What I'm using is the following:
python main.py --dataset celebs --gan_type hinge --img_size 128
which works.
But when I try python main.py --dataset celebs --gan_type wgan-gp --img_size 128 --critic_num 5
it gets stuck at self.d_optim = tf.train.AdamOptimizer(self.d_learning_rate, beta1=self.beta1, beta2=self.beta2).minimize(self.d_loss, var_list=d_vars)
Did you test this?
Hi, I tried this code with a small amount of data and got a ResourceExhaustedError, so I want to know how to change the setting "gpu_device = '/gpu:0'" in the code to use 4 GPUs. Thank you!
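Not an answer from the maintainer, but the usual TF1 route to several GPUs is one model "tower" per device under tf.device, each fed a slice of the batch, with gradients averaged afterwards. A toy, framework-free sketch of just the batch-sharding step (the helper name and device list are hypothetical, not from this repo):

```python
# Hypothetical sketch: split one batch evenly across GPU device strings.
# The repo pins everything with gpu_device = '/gpu:0'; a data-parallel
# version would loop over several such strings instead.
devices = ['/gpu:%d' % i for i in range(4)]

def shard(batch, devices):
    """Assign an equal contiguous slice of `batch` to each device."""
    size = len(batch) // len(devices)
    return {dev: batch[i * size:(i + 1) * size]
            for i, dev in enumerate(devices)}

shards = shard(list(range(16)), devices)
print(shards['/gpu:0'])  # [0, 1, 2, 3]
```

In real TF1 code each slice would be consumed inside `with tf.device(dev):` when building that tower's ops; note this only spreads memory across GPUs, it does not shrink the per-tower model.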
Hi, I'm getting a memory error: Total size of variables: 198818145, Total bytes of variables: 795272580. [] Reading checkpoints... [] Failed to find a checkpoint. [!] Load failed... I'm using an NVIDIA GeForce RTX with 6 GB of memory and 32 GB of RAM. Can you solve this?