heykeetae / Self-Attention-GAN

Pytorch implementation of Self-Attention Generative Adversarial Networks (SAGAN)

self.gamma*out considered as "in place" operation #62

Open NGluna03 opened 2 years ago

NGluna03 commented 2 years ago

Hi, during the backward pass I'm facing the following error:

> RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1]] is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

Specifically, the error occurs at the line `out = self.gamma*out + x`, and it seems to be caused by `self.gamma*out`. I also tried changing it to `torch.mul(self.gamma, out)`, but that didn't solve the issue. Do you have any idea of alternative solutions I could try?
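
For context, below is a minimal, self-contained sketch of the kind of self-attention layer this repo uses (simplified; the layer names here are assumptions, not a copy of the repo's code), with a comment marking the line the error points at. Note that `self.gamma * out + x` is itself an out-of-place operation; the `[torch.cuda.FloatTensor [1]]` in the message matches the shape of `gamma`, so the tensor being modified in place is most likely `gamma` itself, changed somewhere between the forward pass and a later `backward()` on the same graph (for example by an `optimizer.step()` or a `.data` / `clamp_()` update).

```python
import torch
import torch.nn as nn


class SelfAttnSketch(nn.Module):
    """Simplified SAGAN-style self-attention block (illustrative only)."""

    def __init__(self, in_dim):
        super().__init__()
        self.query_conv = nn.Conv2d(in_dim, in_dim // 8, kernel_size=1)
        self.key_conv = nn.Conv2d(in_dim, in_dim // 8, kernel_size=1)
        self.value_conv = nn.Conv2d(in_dim, in_dim, kernel_size=1)
        # Learnable scalar; its shape [1] matches the tensor in the error message.
        self.gamma = nn.Parameter(torch.zeros(1))
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x):
        B, C, W, H = x.size()
        q = self.query_conv(x).view(B, -1, W * H).permute(0, 2, 1)  # B x N x C'
        k = self.key_conv(x).view(B, -1, W * H)                     # B x C' x N
        attn = self.softmax(torch.bmm(q, k))                        # B x N x N
        v = self.value_conv(x).view(B, -1, W * H)                   # B x C x N
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(B, C, W, H)
        # The line the error points at. It is not in-place by itself; the error
        # usually means gamma was updated in place (e.g. by optimizer.step())
        # after this forward pass but before backward() ran on the same graph.
        out = self.gamma * out + x
        return out
```

This is only a guess from the shape reported in the error, but if it applies here, a first thing to try would be reordering the training loop so that every `backward()` finishes before any `optimizer.step()` touches the attention parameters, or re-running the forward pass instead of reusing a retained graph across optimizer updates.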