Hi, thanks for your great work!
But I ran into a problem when running watermodel.py. It has confused me for a long time, and I don't know how to solve it.
```
Traceback (most recent call last):
  File "UnderWaterZeroShot.py", line 168, in <module>
    test(args)
  File "UnderWaterZeroShot.py", line 155, in test
    loss.backward(retain_graph=True)
  File "/mnt/petrelfs/zhangyiting/anaconda3/envs/nerf-pytorch/lib/python3.8/site-packages/torch/_tensor.py", line 363, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/mnt/petrelfs/zhangyiting/anaconda3/envs/nerf-pytorch/lib/python3.8/site-packages/torch/autograd/__init__.py", line 173, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 64, 64, 64]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
```
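For context, here is a minimal sketch of what typically causes this `ReluBackward0` version error (this is an assumed reproduction, not the actual watermodel.py code): autograd saves a ReLU's output to compute its gradient, so any in-place operation on that output (e.g. `+=`, `relu_`, `nn.ReLU(inplace=True)` followed by an in-place edit) bumps its version counter and makes `backward()` fail. Replacing the in-place op with an out-of-place one (or cloning first) fixes it:

```python
import torch
import torch.nn as nn

# Reproduce the error: modify a ReLU output in place before backward().
x = torch.randn(1, 3, requires_grad=True)
relu = nn.ReLU()
y = relu(x)
y += 1  # in-place op on a tensor that ReluBackward0 needs

failed = False
try:
    y.sum().backward()
except RuntimeError:
    # "one of the variables needed for gradient computation has been
    # modified by an inplace operation ... output 0 of ReluBackward0"
    failed = True

# Fix: use an out-of-place op (creates a new tensor, leaving the
# saved ReLU output at version 0), or `y2.clone()` before editing.
x2 = torch.randn(1, 3, requires_grad=True)
y2 = relu(x2)
y2 = y2 + 1  # out-of-place
y2.sum().backward()  # succeeds; x2.grad is now populated
```

If your model uses `nn.ReLU(inplace=True)` or in-place tensor edits between the forward pass and `loss.backward(retain_graph=True)`, switching those to their out-of-place forms is usually the fix; `torch.autograd.set_detect_anomaly(True)` will point at the exact offending operation.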