mfoglio closed this issue 2 years ago.
I don't think this is a bug; the system is operating as expected, but these ops are not currently supported:
ERROR: [Torch-TensorRT] - Method requested cannot be compiled by Torch-TensorRT.TorchScript.
Unsupported operators listed below:
- torchvision::nms(Tensor dets, Tensor scores, float iou_threshold) -> (Tensor)
- prim::device(Tensor a) -> (Device)
- aten::empty.memory_format(int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor)
- aten::Int.Tensor(Tensor a) -> (int)
- aten::unbind.int(Tensor(a) self, int dim=0) -> (Tensor[])
- aten::where(Tensor condition) -> (Tensor[])
- aten::meshgrid(Tensor[] tensors) -> (Tensor[])
- aten::index.Tensor(Tensor self, Tensor?[] indices) -> (Tensor)
You can either implement converters for these ops in your application or request implementation here: https://www.github.com/nvidia/Torch-TensorRT/issues
We can add these ops to our backlog.
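Before requesting converters, it can help to confirm up front which methods are fully convertible. A minimal sketch, assuming the ~1.x Python API exposes this as torch_tensorrt.ts.check_method_op_support (verify the exact name and signature against your installed version); the Tiny module is a placeholder:

```python
import torch
import torch.nn as nn
import torch_tensorrt

class Tiny(nn.Module):
    def forward(self, x):
        return torch.relu(x)

scripted = torch.jit.script(Tiny())
# Returns True only if every op in "forward" has a TensorRT converter.
print(torch_tensorrt.ts.check_method_op_support(scripted, "forward"))
```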
@narendasan thank you for your answer. Shouldn't unsupported operations fall back to TorchScript?
FYI @mfoglio @narendasan
These operators seem to be in the AnchorGenerator (like aten::meshgrid) and PostProcess (like torchvision::nms and aten::where) parts.
I guess we should be able to use torch_tensorrt without problems if we use the official TorchScript exported by YOLOv5 (with python export.py --train to avoid aten::meshgrid), and we would need to implement the post-processing parts ourselves; a sketch follows below.
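A minimal sketch of what that eager-mode post-processing could look like, calling torchvision.ops.nms directly in PyTorch; the postprocess helper, the thresholds, and the dummy tensors are illustrative assumptions, not yolort or YOLOv5 API:

```python
import torch
from torchvision.ops import nms

@torch.no_grad()
def postprocess(boxes, scores, score_thresh=0.25, iou_thresh=0.45):
    # Drop low-confidence candidates, then suppress overlaps with NMS.
    keep = scores > score_thresh
    boxes, scores = boxes[keep], scores[keep]
    idx = nms(boxes, scores, iou_thresh)  # the torchvision::nms op, run in PyTorch
    return boxes[idx], scores[idx]

# Dummy data; in practice these come from the TensorRT-compiled forward pass.
boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0], [1.0, 1.0, 11.0, 11.0]])
scores = torch.tensor([0.9, 0.8])
print(postprocess(boxes, scores))
```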
Actually, yolort adapts the design of torchvision's SSD, and the operators we adopted in yolort are a subset of those in torchvision's detection models. torchvision also embeds the pre-processing and post-processing parts into the TorchScript graph, so I guess we would face the same problems if we used the official TorchScript exported by torchvision.
Regarding "We can add these ops to our backlog": I think it would be great if these operators could be implemented.
Unsupported ops should fall back if you have require_full_compilation set to False. I didn't look closely at your settings. Can you turn on debug logging and provide a full log? Also, to work around these operators explicitly right now, you can use torch_executed_ops / torch_executed_modules to tell torch_tensorrt to always run those ops in PyTorch; see the sketch below. I would try that out to start.
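A minimal sketch of that workaround, assuming Torch-TensorRT ~1.x; scripted_model and the 1x3x640x640 input shape are placeholders, and the op list would be adapted to whatever the error reports:

```python
import torch
import torch_tensorrt

trt_model = torch_tensorrt.compile(
    scripted_model,  # placeholder: your torch.jit.ScriptModule
    inputs=[torch_tensorrt.Input((1, 3, 640, 640))],
    enabled_precisions={torch.float},
    require_full_compilation=False,  # let unsupported ops fall back to TorchScript
    # Force these ops to always run in PyTorch rather than TensorRT:
    torch_executed_ops=["torchvision::nms", "aten::meshgrid"],
)
```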
Hi @narendasan, I had "require_full_compilation": False. Did I set it incorrectly?
No, that should work. Getting the full log should help us figure out what is going on.
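For reference, a sketch of turning on that debug logging via the torch_tensorrt.logging module (names as in the ~1.x docs; verify against your version):

```python
import torch_tensorrt

# Raise the reportable level so compilation emits full debug output.
torch_tensorrt.logging.set_reportable_log_level(
    torch_tensorrt.logging.Level.Debug
)
# ...then re-run torch_tensorrt.compile(...) and capture the output.
```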
I met the same problem. Setting truncate_long_and_double: True does not make the runtime error go away.
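For anyone comparing settings, a sketch of where truncate_long_and_double sits in the compile call (scripted_model and the input shape are placeholders); note it only downcasts int64/float64 weights and constants to int32/float32, so it would not by itself resolve unsupported ops:

```python
import torch
import torch_tensorrt

trt_model = torch_tensorrt.compile(
    scripted_model,  # placeholder ScriptModule
    inputs=[torch_tensorrt.Input((1, 3, 640, 640))],
    truncate_long_and_double=True,  # downcast 64-bit weights/constants
    require_full_compilation=False,
)
```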
This issue has not seen activity for 90 days. Remove the stale label or comment, or this will be closed in 10 days.
Bug Description
I cannot convert a TorchScript module because of the error:
To Reproduce
Steps to reproduce the behavior:
Output:
Expected behavior
The model is compiled to TensorRT. I am also not sure whether the setting truncate_long_and_double is properly received, as the error suggests enabling it.
Environment
How you installed PyTorch (conda, pip, libtorch, source):
Additional context
Ubuntu 18.04, Tesla T4, Python 3.6, TensorRT 8.0.1.6
Requirements: