I have a YOLOv8 detection model deployed from .NET (ONNX Runtime) with TensorRT as the execution provider. It ran without issues under TensorRT version 8, but after upgrading to TensorRT 10, execution fails with the following error:
[ErrorCode:ShapeInferenceNotRegistered] Non-zero status code returned while running TRTKernel_graph_torch_jit_4528351051880633562_0 node. Name:'TensorrtExecutionProvider_TRTKernel_graph_torch_jit_4528351051880633562_0_0' Status Message: TensorRT EP failed to create engine from network.
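For context, the session is created roughly as follows. This is a minimal sketch, not the exact deployment code: the model path and device index are illustrative, using the `Microsoft.ML.OnnxRuntime` C# API with the TensorRT execution provider appended first and CUDA as fallback.

```csharp
using Microsoft.ML.OnnxRuntime;

// Minimal sketch of the setup that triggers the error above.
// "yolov8n.onnx" and device id 0 are illustrative values.
var options = new SessionOptions();
options.AppendExecutionProvider_Tensorrt(0); // TensorRT EP on GPU 0
options.AppendExecutionProvider_CUDA(0);     // CUDA EP as fallback

// Engine build happens during session creation / first run;
// this is where "TensorRT EP failed to create engine" is raised.
using var session = new InferenceSession("yolov8n.onnx", options);
```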
Environment
TensorRT Version: 10.2.0.19
ONNX-TensorRT Version / Branch: https://github.com/onnx/onnx-tensorrt/archive/06adf4461ac84035bee658c6cf5df39f7ab6071d.zip
GPU Type: NVIDIA Quadro P6000 (Pascal)
Nvidia Driver Version: 550.100
CUDA Version: 12.5.1
CUDNN Version: 9.2.1
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.9
TensorFlow + TF2ONNX Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Related ONNX Runtime issue: https://github.com/microsoft/onnxruntime/issues/21415
GPU
NVIDIA Quadro P6000 (Pascal)
Relevant Files
Steps To Reproduce