Open Lvhhhh opened 3 years ago
Hey @Lvhhhh, did you find a fix for this problem? I'm facing a similar issue.
@Lvhhhh please move the self.enc_c_opt.step(), self.enc_a_opt.step() and self.gen_opt.step() calls in update_EG(self) to after self.backward_G_alone()
Traceback (most recent call last):
File "E:\Python\Unsupervised-Domain-Specific-Deblurring-master\src\train.py", line 86, in <module>
@saber131421
def update_EG(self):
    # update G, Ec, Ea
    self.enc_c_opt.zero_grad()
    self.enc_a_opt.zero_grad()
    self.gen_opt.zero_grad()
    self.backward_EG()
    self.enc_c_opt.step()
    self.enc_a_opt.step()
    self.gen_opt.step()
    # update G, Ec
    self.enc_c_opt.zero_grad()
    self.gen_opt.zero_grad()
    self.backward_G_alone()
    self.enc_c_opt.step()
    self.enc_a_opt.step()
    self.gen_opt.step()
I made the changes above as you suggested, but it still doesn't work.
@Lvhhhh please move the self.enc_c_opt.step(), self.enc_a_opt.step() and self.gen_opt.step() calls in update_EG(self) to after self.backward_G_alone()
Thank you very much! It's working.
Hi @HelloWorldYYYYY and @saber131421, did you fix this issue? Although I followed the suggestion above, I still get the error.
Hi @Devil-Ideal, could you share your updated code?
Like this:

def update_EG(self):
    self.enc_c_opt.zero_grad()
    self.enc_a_opt.zero_grad()
    self.gen_opt.zero_grad()
    self.backward_EG()
    # self.enc_c_opt.step()
    # self.enc_a_opt.step()
    # self.gen_opt.step()
    # update G, Ec
    self.enc_c_opt.zero_grad()
    self.gen_opt.zero_grad()
    self.backward_G_alone()
    self.enc_a_opt.step()
    self.enc_c_opt.step()
    self.gen_opt.step()
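For anyone still hitting this, here is a minimal stand-alone sketch of the failure mode (a simplified stand-in, not the repo's actual networks): calling optimizer.step() between two backward passes that share a graph modifies the weights in place, which invalidates the tensors saved for the second backward. Deferring every step() until after the last backward, as suggested above, avoids it:

```python
import torch

# Simplified stand-in for update_EG: one shared forward pass,
# two losses, two backward passes over the same graph.
net = torch.nn.Linear(3, 3)
opt = torch.optim.SGD(net.parameters(), lr=0.1)

x = torch.randn(4, 3, requires_grad=True)
out = net(x)
loss_eg = out.pow(2).mean()    # plays the role of backward_EG's loss
loss_alone = out.abs().mean()  # plays the role of backward_G_alone's loss

opt.zero_grad()
loss_eg.backward(retain_graph=True)
# Calling opt.step() here would modify net.weight in place (version bump),
# and the next backward would raise the "inplace operation" RuntimeError.
loss_alone.backward()  # fine: weights unchanged, saved tensors still valid
opt.step()             # step once, after all gradients have accumulated
```

The gradients of the two losses simply accumulate in .grad before the single step, which is why commenting out the first block of step() calls works.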
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [80, 3, 1, 1]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
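As the hint in the traceback says, anomaly detection can pinpoint which forward operation produced the tensor that was later modified in place. A minimal sketch of enabling it (it slows training, so use it only while debugging):

```python
import torch

# Enable globally before the training loop; the eventual RuntimeError will
# then include a second traceback pointing at the forward operation whose
# saved tensor was modified in place.
torch.autograd.set_detect_anomaly(True)

# Or scope it to a single iteration with the context manager:
with torch.autograd.detect_anomaly():
    x = torch.randn(2, 2, requires_grad=True)
    x.sum().backward()
```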