anewusername77 opened this issue 3 years ago · Status: Open
The FactorVAE as implemented here has this issue. I believe it is because of the way PyTorch Lightning calls the loss_function: the self.D_z_reserve variable is updated by the model, but the discriminator loss still holds the gradients that were tracked before it was updated again. I think this can be easily rectified with the latest PyTorch Lightning version, where they have improved a lot of stuff.
Using detach() removes a tensor from the computation graph, so no gradients are tracked through it. When you call backward(), a loss term that uses self.D_z_reserve.detach() contributes nothing to the gradients. So, I wouldn't recommend using that.
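To illustrate the point about detach(), here is a minimal generic PyTorch sketch (not the repo's code): a loss term built on a detached tensor contributes nothing to the gradients.

```python
import torch

w = torch.ones(3, requires_grad=True)
z = w * 2.0

# One tracked term and one detached term in the same loss.
loss = z.sum() + 3.0 * z.detach().sum()
loss.backward()

# Only the tracked term contributes: d(sum(2*w))/dw = 2 per element.
# If the detached term were tracked, each gradient entry would be 2 + 6 = 8.
print(w.grad)  # tensor([2., 2., 2.])
```

This is why detaching self.D_z_reserve silences the error: the offending term simply stops participating in backpropagation.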
Still having this issue. All other implementations have the same issue.
Has anyone solved it? Is it right to detach as suggested?
the error message is:
one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4096, 6]], which is output 0 of TBackward, is at version 2; expected version 1 instead.
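This class of error can be reproduced outside the repo in a few lines (a generic sketch, not the FactorVAE code): an optimizer step modifies a linear layer's weight in place, so a backward pass through a graph built before the step fails autograd's version check.

```python
import torch

lin = torch.nn.Linear(3, 2)
opt = torch.optim.SGD(lin.parameters(), lr=0.1)

x = torch.randn(4, 3, requires_grad=True)
out = lin(x)                       # lin.weight (transposed) is saved for backward
out.sum().backward(retain_graph=True)
opt.step()                         # in-place update of lin.weight bumps its version counter

err = None
try:
    out.sum().backward()           # reuses the stale graph saved before opt.step()
except RuntimeError as e:
    err = str(e)
print(err)  # "... has been modified by an inplace operation ..."
```

The "output 0 of TBackward" in the message is the transposed weight of a Linear layer, which is exactly what an optimizer step rewrites in place.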
It seems the problem is that a term (self.D_z_reserve) used in D_tc_loss, calculated at the vae_loss stage, was modified somehow.

Giving details: I calculated and updated the VAE loss first, like:
Then, when updating the discriminator:

the error message occurs as described at the beginning.
When I delete the term F.cross_entropy(self.D_z_reserve, false_labels) in D_tc_loss, or change D_tc_loss to a version that uses .detach(), everything goes alright. But I'm not sure whether using .detach() here is fine, and I'm wondering what the exact problem is. Waiting for your reply, thanks a lot.
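For reference, here is a minimal sketch of how the two updates can be separated so the error disappears (module names like `encoder`, `D`, and `permute_dims` are illustrative, not the repo's exact API). The key point: the discriminator step recomputes D on detached latents instead of reusing a cached self.D_z_reserve whose graph predates the VAE optimizer step. Detaching here is standard practice for GAN-style discriminator updates, since you don't want the discriminator loss to backpropagate into the encoder anyway.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
latent_dim, batch = 6, 8

# Tiny stand-ins for the real networks (illustrative only).
encoder = torch.nn.Linear(10, latent_dim)
decoder = torch.nn.Linear(latent_dim, 10)
D = torch.nn.Linear(latent_dim, 2)  # stand-in discriminator

opt_vae = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

x = torch.randn(batch, 10)
true_labels = torch.zeros(batch, dtype=torch.long)
false_labels = torch.ones(batch, dtype=torch.long)

def permute_dims(z):
    # Shuffle each latent dimension independently across the batch.
    return torch.stack(
        [z[torch.randperm(z.size(0)), j] for j in range(z.size(1))], dim=1)

# --- 1) VAE update: the TC term uses D, but D's weights are not stepped ---
z = encoder(x)
recon_loss = F.mse_loss(decoder(z), x)
d_z = D(z)
tc_loss = (d_z[:, 0] - d_z[:, 1]).mean()  # density-ratio TC estimate
vae_loss = recon_loss + tc_loss
opt_vae.zero_grad()
vae_loss.backward()
opt_vae.step()  # in-place update of encoder/decoder weights

# --- 2) Discriminator update: recompute D on *detached* latents ---
z_detached = z.detach()          # cut the (now stale) encoder graph
z_perm = permute_dims(z_detached)
d_tc_loss = 0.5 * (F.cross_entropy(D(z_detached), true_labels)
                   + F.cross_entropy(D(z_perm), false_labels))
opt_D.zero_grad()
d_tc_loss.backward()             # no error: this graph only spans D
opt_D.step()
print(float(d_tc_loss))
```

Under this pattern the in-place optimizer step on the encoder never conflicts with the discriminator's backward pass, because the second backward never touches the encoder's graph.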