johnzlli opened this issue 10 months ago (status: Open)
@narendasan Hi, is there any update?
Hello - as an update on this issue, a workaround to try is to compile with `ir="torch_compile"` and specify `torch._dynamo.config.allow_rnn = True` at the top of the script.
Regarding the `ir="dynamo"` path, there is a workaround as specified here: https://github.com/pytorch/pytorch/issues/121761#issuecomment-2021696208, which can then be used with Torch-TensorRT by passing the `gm` object into the `.compile` call.
A more robust fix is pending resolution of these related issues: pytorch/pytorch#120626 and pytorch/pytorch#121761.
Bug Description
Encountered the following error when using Torch-TensorRT to convert `torch.nn.LSTM` in the docker image `nvcr.io/nvidia/pytorch:23.12-py3`:

```
NotImplementedError: aten::_cudnn_rnn_flatten_weight: attempted to run this operator with Meta tensors, but there was no abstract impl or Meta kernel registered. You may have run into this message while using an operator with PT2 compilation APIs (torch.compile/torch.export); in order to use this operator with those APIs you'll need to add an abstract impl.
```
To Reproduce
example code:
Expected behavior
Environment
How you installed PyTorch (conda, pip, libtorch, source):

Additional context