YunzhuLi / InfoGAIL

[NIPS 2017] InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations

one question about clip #20

Open · Kailiangdong opened this issue 4 years ago

Kailiangdong commented 4 years ago

In the original Wasserstein GAN paper, the weights of the critic have to be clipped to [-c, c]. However, in this code I see that the gradients are clipped instead. Why are the gradients clipped rather than the weights? Thank you in advance.

    self.gradients = gradients = tf.gradients(loss, self.network.var_list)
    clipped_gradients = hgail.misc.tf_utils.clip_gradients(
        gradients, self.grad_norm_rescale, self.grad_norm_clip)

    self.global_step = tf.Variable(0, name='critic/global_step', trainable=False)
    self.train_op = self.optimizer.apply_gradients(
        [(g, v) for (g, v) in zip(clipped_gradients, self.network.var_list)],
        global_step=self.global_step)
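For context, a plausible sketch of what a helper like hgail.misc.tf_utils.clip_gradients might do, assuming it first rescales the gradients by their global norm and then clips each one elementwise (the actual implementation lives in the hgail package, so the details here are a guess):

    import tensorflow as tf

    def clip_gradients(gradients, rescale_norm, clip_value):
        # Jointly rescale all gradients so their global norm is at most rescale_norm.
        gradients, _ = tf.clip_by_global_norm(gradients, rescale_norm)
        # Then clip each gradient elementwise into [-clip_value, clip_value].
        return [tf.clip_by_value(g, -clip_value, clip_value) for g in gradients]

Either way, this constrains the gradients used in the update step, not the weights themselves, which is what my question is about.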

YunzhuLi commented 4 years ago

Which part of the code are you referring to? I clip the weights of the discriminator here, and I set the clip range here.
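In case it helps, a minimal sketch of WGAN-style weight clipping in TensorFlow 1.x; critic_vars, clip_value, and the 'critic' scope are hypothetical names for illustration, not the ones used in this repo:

    import tensorflow as tf

    clip_value = 0.01  # the constant c from the WGAN paper (assumed value)
    critic_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='critic')

    # After each optimizer step, force every critic weight back into [-c, c].
    clip_weights = tf.group(*[
        v.assign(tf.clip_by_value(v, -clip_value, clip_value))
        for v in critic_vars])

Running sess.run(clip_weights) after each sess.run(train_op) enforces the [-c, c] constraint from the WGAN paper.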

WGAN-GP is an improved version of WGAN that penalizes the norm of the critic's gradient instead of clipping the weights. The following blog post should be helpful in explaining the difference: https://medium.com/@jonathan_hui/gan-wasserstein-gan-wgan-gp-6a1a2aa1b490
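For reference, a minimal sketch of the WGAN-GP penalty term in TensorFlow 1.x, assuming critic(x) returns the critic score for a batch of flat [batch, dim] inputs (critic, real, fake, and coeff are all hypothetical names, not this repo's API):

    import tensorflow as tf

    def gradient_penalty(critic, real, fake, coeff=10.0):
        # Sample random points on the lines between real and generated samples.
        eps = tf.random_uniform([tf.shape(real)[0], 1], 0.0, 1.0)
        interp = eps * real + (1.0 - eps) * fake
        # Penalize the critic's gradient norm at the interpolates for deviating from 1.
        grads = tf.gradients(critic(interp), [interp])[0]
        norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=1) + 1e-12)
        return coeff * tf.reduce_mean(tf.square(norm - 1.0))

This term is added to the critic loss in place of weight clipping, which is why WGAN-GP code contains no clip on the weights.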