Closed: huangzhikun1995 closed this issue 4 years ago
Thank you for your great work, but I ran into a problem while training a new model:
```
Traceback (most recent call last):
  File "run.py", line 152, in <module>
    main()
  File "run.py", line 126, in main
    model.update_EG()
  File "/data1/LADN-master/src/model.py", line 697, in update_EG
    self.backward_G_alone()
  File "/data1/LADN-master/src/model.py", line 677, in backward_G_alone
    loss_z_L1.backward()
  File "/usr/local/miniconda3/lib/python3.6/site-packages/torch/tensor.py", line 198, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/miniconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 100, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [128]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
```
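As the hint in the error suggests, anomaly detection is the quickest way to locate the offending operation. A minimal sketch of enabling it (the placement in run.py is an assumption, not part of the LADN code):

```python
# Sketch, not LADN code: enable autograd anomaly detection before training
# starts (e.g. near the top of main() in run.py). The next backward() error
# will then also print the forward-pass stack trace of the operation whose
# output was later modified in place.
import torch

torch.autograd.set_detect_anomaly(True)
```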
Didn't you have this problem when you were training your models? I set inplace=False in nn.ReLU and nn.LeakyReLU, but it still didn't work.

Which version of PyTorch are you using?

We have tested it with pytorch=1.1.0, but we haven't tested it with the newest versions of PyTorch.

Thank you. I had this problem with pytorch=1.5.0, but it disappeared with pytorch=1.1.0.
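For later readers, a plausible explanation for the version dependence (an educated guess, not confirmed against the LADN code): PyTorch 1.5 tightened the in-place version-counter checks so that running an optimizer step between two backward passes over a shared graph now raises exactly this error, while 1.1.0 silently accepted it. The [128]-sized tensor would then simply be a parameter updated in place by that step, which also explains why setting inplace=False on the activations did not help. A self-contained sketch of the pattern and the usual fix, with illustrative names:

```python
# Illustrative sketch (not LADN code) of the pattern that PyTorch >= 1.5
# rejects and PyTorch 1.1 silently allowed.
import torch
import torch.nn as nn

net = nn.Linear(10, 128)  # its parameters include a size-[128] bias,
                          # the same shape as the tensor named in the error
opt = torch.optim.Adam(net.parameters())

x = torch.randn(4, 10)
out = net(x)
loss_a = out.mean()
loss_b = out.pow(2).mean()

# Failing order: step() updates the parameters in place and bumps their
# version counters, invalidating the graph retained for the second backward.
#   loss_a.backward(retain_graph=True)
#   opt.step()
#   loss_b.backward()  # RuntimeError: ... is at version 2; expected version 1

# Working order: finish every backward() before any optimizer step.
opt.zero_grad()
loss_a.backward(retain_graph=True)
loss_b.backward()
opt.step()
```

If both losses genuinely need the pre-step parameter values, the alternative is to recompute the forward pass after step() instead of retaining the graph.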