ozanciga / dirt-t

A DIRT-T Approach to Unsupervised Domain Adaptation for Pytorch

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation #3

Closed PotatoThanh closed 3 years ago

PotatoThanh commented 3 years ago

Hi,

When I run the code with PyTorch 1.8, I get this error:

    File "/home/thanhndv/phd/dirt-t/vada_train.py", line 109, in <module>
      loss_main.backward()
    File "/opt/miniconda3/envs/torch/lib/python3.8/site-packages/torch/tensor.py", line 245, in backward
      torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
    File "/opt/miniconda3/envs/torch/lib/python3.8/site-packages/torch/autograd/__init__.py", line 145, in backward
      Variable._execution_engine.run_backward(
    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [100, 1]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
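For context: PyTorch's autograd keeps a version counter on every tensor it saves for the backward pass. `optimizer_disc.step()` updates the discriminator weights in-place, so if `loss_main.backward()` still needs those weights, the version check fails with exactly this message. A minimal sketch of that failure pattern, using hypothetical stand-in modules rather than the actual `vada_train.py` code:

    # Minimal sketch of the failure pattern (hypothetical stand-ins,
    # not the actual vada_train.py code).
    import torch
    import torch.nn as nn

    feat = nn.Linear(10, 100)   # stand-in feature extractor
    disc = nn.Linear(100, 1)    # stand-in domain discriminator
    optimizer = torch.optim.SGD(feat.parameters(), lr=0.1)
    optimizer_disc = torch.optim.SGD(disc.parameters(), lr=0.1)

    x = torch.randn(100, 10)
    features = feat(x)
    d_out = disc(features)                      # loss_main backprops through disc
    loss_main = (d_out ** 2).mean()
    loss_disc = disc(features.detach()).mean()  # discriminator's own loss

    # Stepping the discriminator first modifies disc.weight in-place ...
    optimizer_disc.zero_grad()
    loss_disc.backward()
    optimizer_disc.step()

    # ... so the weight tensor saved for loss_main's backward pass is now at a
    # newer version, and autograd raises a RuntimeError like the one above.
    loss_main.backward()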

Newbeeer commented 3 years ago

Hi,

I found that moving the code block

    # Update discriminator.
    optimizer_disc.zero_grad()
    loss_disc.backward()
    optimizer_disc.step()

below line 72 solves the problem.
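For anyone hitting the same thing: the reordering amounts to running the main backward pass and optimizer step before the discriminator update, so `optimizer_disc.step()` no longer invalidates tensors saved for `loss_main`. A hedged sketch of the resulting ordering, with the same hypothetical stand-ins as above (not the exact repo code):

    # Sketch of the reordered updates, using hypothetical stand-in modules.
    import torch
    import torch.nn as nn

    feat = nn.Linear(10, 100)
    disc = nn.Linear(100, 1)
    optimizer = torch.optim.SGD(feat.parameters(), lr=0.1)
    optimizer_disc = torch.optim.SGD(disc.parameters(), lr=0.1)

    x = torch.randn(100, 10)
    features = feat(x)
    d_out = disc(features)
    loss_main = (d_out ** 2).mean()
    loss_disc = disc(features.detach()).mean()

    # Main update first: its backward pass reads the discriminator weights
    # before they are modified, so the version check passes.
    optimizer.zero_grad()
    loss_main.backward()
    optimizer.step()

    # Update discriminator.
    optimizer_disc.zero_grad()
    loss_disc.backward()
    optimizer_disc.step()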