cmhcbb / Seq2Sick

Adversarial examples for Seq2Seq model in NLP

RuntimeError: cudnn RNN backward can only be called in training mode #2

Open reshmajindal opened 3 years ago

reshmajindal commented 3 years ago
Loading model parameters.
/usr/local/lib/python3.7/dist-packages/torchtext/data/field.py:197: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  return Variable(arr, volatile=not train), lengths
/usr/local/lib/python3.7/dist-packages/torchtext/data/field.py:198: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  return Variable(arr, volatile=not train)
/content/Seq2Sick/onmt/translate/Translator.py:48: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  def var(a): return Variable(a, volatile=True)
/content/Seq2Sick/onmt/modules/GlobalAttention.py:179: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  align_vectors = self.sm(align.view(batch*targetL, sourceL))
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py:119: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
  input = module(input)
/content/Seq2Sick/onmt/translate/Translator.py:191: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  src.volatile = False
attack.py:64: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
  output_a, attn, output_i= translator.getOutput(new_embedding, src, batch)
tensor(18.6335, device='cuda:0')     tensor(0., device='cuda:0')
tensor(999., device='cuda:0')    tensor(0., device='cuda:0')
Traceback (most recent call last):
  File "attack.py", line 312, in <module>
    main()
  File "attack.py", line 272, in main
    modifier, output_a, attn, new_word, output_i, CFLAG = attack(all_word_embedding, label_onehot, translator, src, batch, new_embedding, input_embedding, modifier, const, GROUP_LASSO, TARGETED, GRAD_REG, NN)
  File "attack.py", line 138, in attack
    loss.backward(retain_graph=True)
  File "/usr/local/lib/python3.7/dist-packages/torch/tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py", line 147, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
RuntimeError: cudnn RNN backward can only be called in training mode
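
For context, this error is not specific to Seq2Sick: cuDNN's fused RNN kernels only implement the backward pass in training mode, and the translator loads the model in eval() mode before attack.py differentiates through it. A minimal standalone repro, with arbitrary layer sizes:

```python
import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=8, hidden_size=8).cuda()
rnn.eval()  # inference mode: cuDNN selects kernels with no backward support

x = torch.randn(5, 2, 8, device='cuda', requires_grad=True)
out, _ = rnn(x)
out.sum().backward()  # RuntimeError: cudnn RNN backward can only be called in training mode
```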
reshmajindal commented 3 years ago

@cmhcbb

reshmajindal commented 3 years ago

@cmhcbb Please resolve this.

ptrblck commented 3 years ago

Cross-post from here, with proposed workarounds here and here.
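
The workarounds usually proposed for this error are (a) keeping the model in training mode while zeroing dropout, or (b) disabling cuDNN so PyTorch's native RNN implementation, which does support eval-mode backward, builds the graph. A sketch of both, assuming the underlying nn.Module is reachable as `translator.model` (the attribute name in Seq2Sick's copy of OpenNMT may differ):

```python
import torch

model = translator.model  # assumed handle to the seq2seq nn.Module

# (a) train() mode makes cuDNN's RNN backward available; zeroing dropout
# keeps the forward pass deterministic, as it would be under eval().
model.train()
for m in model.modules():
    if isinstance(m, torch.nn.Dropout):
        m.p = 0.0          # standalone dropout layers
    elif isinstance(m, torch.nn.RNNBase):
        m.dropout = 0.0    # inter-layer dropout inside LSTM/GRU

# (b) run the forward *and* the backward with cuDNN disabled, so the graph
# is recorded with native RNN kernels instead of the fused cuDNN ones.
with torch.backends.cudnn.flags(enabled=False):
    output_a, attn, output_i = translator.getOutput(new_embedding, src, batch)
    # ... compute `loss` from the outputs as attack.py does ...
    loss.backward(retain_graph=True)
```

Option (a) keeps cuDNN's speed but silently changes behavior if the model contains other mode-dependent layers such as batch norm; option (b) is slower but leaves the model untouched.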

Seham-Nasr commented 11 months ago

Setting torch.backends.cudnn.enabled = False solved it.
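
For completeness, this is the global form of the fix: run it before the model's first forward pass (e.g. near the top of attack.py) so every RNN uses PyTorch's native kernels, which support backward in eval mode, at some cost in GPU speed.

```python
import torch

# Must run before the first forward pass so the autograd graph is never
# built with the fused cuDNN RNN kernels.
torch.backends.cudnn.enabled = False
```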