Closed: NoAchache closed this issue 3 years ago
Hello, I was just wondering if instead of this:
```python
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
    ....
gradients_of_generator = gen_tape.gradient(gen_loss, self.GAN.GM.trainable_variables)
gradients_of_discriminator = disc_tape.gradient(disc_loss, self.GAN.D.trainable_variables)
```
you could do this:
```python
with tf.GradientTape() as tape:
    ....
gradients_of_generator = tape.gradient(gen_loss, self.GAN.GM.trainable_variables)
gradients_of_discriminator = tape.gradient(disc_loss, self.GAN.D.trainable_variables)
```
Would that work? If so, isn't it a waste of GPU memory to use two tapes?
By default, the resources held by a GradientTape are released as soon as GradientTape.gradient() is called, so you need two GradientTapes to compute the gradients of both models.
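For illustration, here is a minimal sketch of that behavior (the variables and losses are made-up placeholders, not from the code above). It also shows `persistent=True`, which lets a single tape compute several gradients, at the cost of holding the recorded operations in memory until the tape is deleted:

```python
import tensorflow as tf

x = tf.Variable(3.0)
y = tf.Variable(2.0)

# Non-persistent tape: its resources are released after the first
# .gradient() call, so a second call raises a RuntimeError.
with tf.GradientTape() as tape:
    loss_a = x * x
    loss_b = y * y

grad_a = tape.gradient(loss_a, [x])
try:
    grad_b = tape.gradient(loss_b, [y])  # second call fails
except RuntimeError as e:
    print("second gradient() failed:", e)

# persistent=True keeps the recorded operations alive, so one tape
# can compute both gradients -- but it holds memory until deleted.
with tf.GradientTape(persistent=True) as tape:
    loss_a = x * x
    loss_b = y * y

grad_a = tape.gradient(loss_a, [x])
grad_b = tape.gradient(loss_b, [y])  # works
del tape  # explicitly release the tape's resources
```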