hiyoung123 / SoftMaskedBert

Soft-Masked BERT, a reproduction of the paper: https://arxiv.org/pdf/2005.07421.pdf

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [768]] is at version 3; expected version 2 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. #23

Open cjjjy opened 3 years ago

cjjjy commented 3 years ago

Hi, I get the following error when running this code. My PyTorch version is 1.8.1. Is this error related to the PyTorch version?

EP_train:0:   0%| | 1/900 [00:00<04:19, 3.46it/s]
Traceback (most recent call last):
  File "/Soft-mask/train.py", line 209, in <module>
    trainer.train(train, e)
  File "/Soft-mask/train.py", line 39, in train
    return self.iteration(epoch, train_data)
  File "/Soft-mask/train.py", line 98, in iteration
    loss.backward(retain_graph=True)
  File "/miniconda3/envs/py_36/lib/python3.6/site-packages/torch/tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/miniconda3/envs/py_36/lib/python3.6/site-packages/torch/autograd/__init__.py", line 147, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [768]] is at version 3; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
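As the hint in the message says, enabling anomaly detection makes the error traceback point at the forward-pass operation whose output was later modified in place. A minimal, self-contained sketch of turning it on, using a stand-in linear layer since the repo's training loop isn't reproduced here:

```python
import torch
import torch.nn as nn

# With anomaly detection on, a failing backward also prints the forward-pass
# stack trace of the operation that produced the offending tensor.
torch.autograd.set_detect_anomaly(True)

model = nn.Linear(768, 2)          # stand-in for the actual model (hypothetical)
x = torch.randn(4, 768)
loss = model(x).sum()              # stand-in loss
loss.backward(retain_graph=True)   # the call that raises in the traceback above
```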

lmw0320 commented 3 years ago

It seems to be a version issue; PyTorch 1.4.0 works for me.

baojunshan commented 3 years ago

https://github.com/hiyoung123/SoftMaskedBert/issues/16#issuecomment-889605572

Following that should fix it; it is the in-place operation problem.
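For context, the general shape of such a fix is to replace in-place updates of autograd-tracked tensors with out-of-place equivalents. A sketch on a hypothetical hidden-state tensor, not the exact lines changed in the comment linked above:

```python
import torch

h = torch.randn(4, 768, requires_grad=True)
mask = torch.rand(4, 1)

# In-place: mutates a tensor autograd may still need for backward and bumps
# its version counter ("is at version 3; expected version 2 instead").
# h *= mask

# Out-of-place: builds a new tensor; the saved one is left untouched.
h = h * mask
```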

currenttime commented 3 years ago

It has nothing to do with inplace. Downgrading PyTorch fixes it; I had the same problem on 1.8, and downgrading to 1.2.0 solved it.
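One plausible reading that reconciles both reports: starting around PyTorch 1.5, the in-place parameter updates done by optimizer.step() are tracked by autograd's version counters, so backpropagating a second time through a graph retained from before the step fails the version check on newer versions while passing silently on 1.2/1.4. A sketch of the suspect pattern and a rebuild-the-graph alternative, with stand-in modules rather than the repo's trainer:

```python
import torch
import torch.nn as nn

model = nn.Linear(768, 2)                          # stand-in model (hypothetical)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(4, 768)

# Suspect pattern: step() updates the weights in place, then a second backward
# reuses the retained graph that saved the pre-step weights.
loss = model(x).sum()
loss.backward(retain_graph=True)
opt.step()          # in-place update bumps the weights' version counter
# loss.backward()   # typically raises the version-mismatch RuntimeError on newer PyTorch

# Safer pattern: run a fresh forward pass before each backward, so the graph
# always matches the current parameter versions.
opt.zero_grad()
loss = model(x).sum()
loss.backward()
opt.step()
```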