taki0112 / BigGAN-Tensorflow

Simple Tensorflow implementation of "Large Scale GAN Training for High Fidelity Natural Image Synthesis" (BigGAN)
MIT License
262 stars · 75 forks

wgan-gp #8

Open yaxingwang opened 5 years ago

yaxingwang commented 5 years ago

Hi @taki0112

Thanks for your contribution. I am trying your code. What I am running is the following:

python main.py --dataset celebs --gan_type hinge --img_size 128

which works.

But when I try python main.py --dataset celebs --gan_type wgan-gp --img_size 128 --critic_num 5

it gets stuck at self.d_optim = tf.train.AdamOptimizer(self.d_learning_rate, beta1=self.beta1, beta2=self.beta2).minimize(self.d_loss, var_list=d_vars)

Did you test this?

taki0112 commented 5 years ago

wgan-gp takes a strangely long time. I haven't found the cause yet.

yaxingwang commented 5 years ago

Thanks. In fact, wgan-lp does not work either.

syning94 commented 5 years ago

I have the same problem. It seems that the gradient penalty cannot be back-propagated successfully.

As far as I can tell, tf.gradients() gets stuck while computing the gradients. But almost all WGAN-GP implementations compute the gradient penalty this way.

I cannot solve this. Does anyone have suggestions?
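For context, the term everyone above is trying to back-propagate is the gradient penalty λ·(‖∇D(x̂)‖₂ − 1)², evaluated at interpolates x̂ = ε·real + (1−ε)·fake. The NumPy toy below is not the repo's code: it uses a hypothetical linear critic D(x) = w·x, whose gradient is the constant w, so the penalty value can be checked by hand without tf.gradients().

```python
import numpy as np

# Toy check of the WGAN-GP penalty arithmetic (hypothetical linear critic).
rng = np.random.default_rng(0)
w = np.array([3.0, 4.0])            # critic weights; ||w||_2 = 5 exactly

real = rng.normal(size=(8, 2))      # batch of "real" samples
fake = rng.normal(size=(8, 2))      # batch of "fake" samples
eps = rng.uniform(size=(8, 1))      # one epsilon per sample, as in WGAN-GP
x_hat = eps * real + (1.0 - eps) * fake   # interpolates between real and fake

# For D(x) = w . x, grad_x D(x_hat) = w at every point.
grads = np.broadcast_to(w, x_hat.shape)
grad_norms = np.linalg.norm(grads, axis=1)

lam = 10.0                           # the usual penalty coefficient
gp = lam * np.mean((grad_norms - 1.0) ** 2)
print(gp)                            # -> 160.0, i.e. 10 * (5 - 1)^2
```

In the TF1 graph, the expensive part is that this penalty makes minimize() build gradients of gradients through the whole discriminator, which is where the long graph-construction stall reported above happens.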

xuhui1994 commented 5 years ago

I wonder why there is nothing in the results folder during the training phase. I look forward to hearing from you.

xuhui1994 commented 5 years ago

I tried to use wgan-gp, and it got stuck for a long time. I even thought it wasn't working at all.

Orchid0714 commented 5 years ago

> (quoting @yaxingwang's original report above)

Hi, I tried this code with a small amount of data and got a ResourceExhaustedError, so I want to know how to change the setting gpu_device = '/gpu:0' in the code to use 4 GPUs. Thank you!
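The usual pattern for using several GPUs is data parallelism: split each batch into one "tower" per GPU, compute per-tower gradients, and average them before the update. The sketch below shows only that averaging step on a toy scalar model (hypothetical, not the repo's code); an actual change to this repo would additionally need per-tower tf.device('/gpu:i') placement, which this toy does not model.

```python
import numpy as np

def tower_gradient(w, x, y):
    # Gradient of the mean squared error 0.5 * (w*x - y)^2 w.r.t. scalar w,
    # computed on this tower's shard of the batch.
    return np.mean((w * x - y) * x)

w = 2.0
xs = np.array([1.0, 2.0, 3.0, 4.0])   # full batch of inputs
ys = np.array([2.0, 4.0, 6.0, 8.0])   # targets; w = 2 fits them exactly

# Split the batch into 4 shards (one per "GPU") and average the gradients,
# exactly as a multi-tower TF1 setup would before applying the optimizer.
grads = [tower_gradient(w, x, y)
         for x, y in zip(np.split(xs, 4), np.split(ys, 4))]
avg_grad = np.mean(grads)
print(avg_grad)                        # -> 0.0, since w = 2 already fits
```

Averaging shard gradients is mathematically the same as one gradient over the full batch here, which is why data parallelism does not change the training dynamics, only the throughput.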

manvirvirk commented 4 years ago

> (quoting @yaxingwang's original report above)

Hi, I'm getting a memory error:

Total size of variables: 198818145
Total bytes of variables: 795272580
[*] Reading checkpoints...
[*] Failed to find a checkpoint
[!] Load failed...

I'm using an NVIDIA GeForce RTX with 6 GB of memory and 32 GB of RAM. Can you solve this?
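The two log numbers above are consistent with float32 weights: the byte count is just the parameter count times 4. The weights alone are under 1 GB, but activations, gradients, and Adam's optimizer state multiply the footprint, which is why a 6 GB GPU can still run out of memory at this model size. A quick sanity check:

```python
# Check that "Total bytes of variables" is the parameter count * 4 bytes,
# i.e. the variables are stored as float32.
n_params = 198_818_145
bytes_fp32 = n_params * 4
print(bytes_fp32)            # -> 795272580, matching the log line
print(bytes_fp32 / 2**20)    # roughly 758 MiB for the weights alone
```

Typical workarounds in this situation are a smaller img_size, a smaller batch size, or smaller channel multipliers, since the "Load failed" lines are just the (expected) absence of a checkpoint, not the actual error.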