ustclby / Unsupervised-Domain-Specific-Deblurring

Implementation of "Unsupervised Domain-Specific Deblurring via Disentangled Representations"

This code has an in-place bug #12

Open Lvhhhh opened 3 years ago

Lvhhhh commented 3 years ago

    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [80, 3, 1, 1]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
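
For anyone hitting the same error, the hint at the end of the message can be acted on directly. A minimal sketch, assuming it is placed near the top of train.py before the training loop starts (the placement is an assumption, not part of the original code):

    import torch

    # Report which forward operation produced the tensor that the failing
    # backward pass complains about (slows training, so enable only for debugging).
    torch.autograd.set_detect_anomaly(True)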

JayKarhade commented 3 years ago

Hey @Lvhhhh, did you find a fix for this problem? I'm facing a similar issue.

saber131421 commented 2 years ago

@Lvhhhh Please move self.enc_c_opt.step(), self.enc_a_opt.step(), and self.gen_opt.step() in update_EG(self) so that they come after self.backward_G_alone().
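
For readers landing on this suggestion first, a minimal sketch of the reordering being proposed, assuming the update_EG structure from model.py (a complete variant is posted further down the thread). The key point is that optimizer.step() modifies parameters in place, so no step() call should run between the two backward passes:

    def update_EG(self):
        # update G, Ec, Ea: first backward pass, but no optimizer step yet
        self.enc_c_opt.zero_grad()
        self.enc_a_opt.zero_grad()
        self.gen_opt.zero_grad()
        self.backward_EG()

        # update G, Ec: second backward pass runs while the parameters
        # are still unmodified
        self.backward_G_alone()

        # all parameter updates are deferred until after both backward passes
        self.enc_c_opt.step()
        self.enc_a_opt.step()
        self.gen_opt.step()

Whether the zero_grad() calls that originally sat between the two backward passes should be kept (as in the version posted below) changes which gradients survive into the final step() calls, so it is worth checking against the intended update.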

HelloWorldYYYYY commented 2 years ago

    Traceback (most recent call last):
      File "E:\Python\Unsupervised-Domain-Specific-Deblurring-master\src\train.py", line 86, in <module>
        main()
      File "E:\Python\Unsupervised-Domain-Specific-Deblurring-master\src\train.py", line 60, in main
        model.update_EG()
      File "E:\Python\Unsupervised-Domain-Specific-Deblurring-master\src\model.py", line 226, in update_EG
        self.backward_G_alone()
      File "E:\Python\Unsupervised-Domain-Specific-Deblurring-master\src\model.py", line 303, in backward_G_alone
        loss_G2.backward()
      File "E:\anaconda3\envs\pytorch\lib\site-packages\torch\_tensor.py", line 396, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "E:\anaconda3\envs\pytorch\lib\site-packages\torch\autograd\__init__.py", line 173, in backward
        Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [256, 8]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

HelloWorldYYYYY commented 2 years ago

@saber131421

    def update_EG(self):
        # update G, Ec, Ea
        self.enc_c_opt.zero_grad()
        self.enc_a_opt.zero_grad()
        self.gen_opt.zero_grad()
        self.backward_EG()
        self.enc_c_opt.step()
        self.enc_a_opt.step()
        self.gen_opt.step()

        # update G, Ec
        self.enc_c_opt.zero_grad()
        self.gen_opt.zero_grad()
        self.backward_G_alone()
        self.enc_c_opt.step()
        self.enc_a_opt.step()
        self.gen_opt.step()

As you said, I made the changes above, but it still doesn't work.

Devil-Ideal commented 2 years ago

@Lvhhhh Please move self.enc_c_opt.step(), self.enc_a_opt.step(), and self.gen_opt.step() in update_EG(self) so that they come after self.backward_G_alone().

Thank you very much! It's working now.

rose-jinyang commented 1 year ago

Hi @HelloWorldYYYYY and @saber131421, did you fix this issue? Although I followed the above suggestion, the error still occurs.

rose-jinyang commented 1 year ago

Hi @Devil-Ideal, could you share your updated code?

zhoubin-zb commented 2 months ago

Like this:

    def update_EG(self):
        # update G, Ec, Ea
        self.enc_c_opt.zero_grad()
        self.enc_a_opt.zero_grad()
        self.gen_opt.zero_grad()
        self.backward_EG()
        # self.enc_c_opt.step()
        # self.enc_a_opt.step()
        # self.gen_opt.step()

        # update G, Ec
        self.enc_c_opt.zero_grad()
        self.gen_opt.zero_grad()
        self.backward_G_alone()
        self.enc_a_opt.step()
        self.enc_c_opt.step()
        self.gen_opt.step()
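
For what it's worth, the difference from the earlier attempt above is that the first group of step() calls is commented out rather than duplicated: optimizer.step() updates the weights in place, so calling it between backward_EG() and backward_G_alone() mutates tensors that the second backward pass still needs, which is exactly what the RuntimeError reports.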