NVIDIA / TensorRT

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
https://developer.nvidia.com/tensorrt
Apache License 2.0

[graphShapeAnalyzer.cpp::checkCalculationStatusSanity::1916] Error Code 2: Internal Error (Assertion !isInFlight(p.second.symbolicRep) failed. ) #4126

Open · jingyanwangms opened 1 month ago

jingyanwangms commented 1 month ago

Description

Environment

TensorRT Version: 10.4.0.26-1+cuda12.6 (upgrading from 10.3)

NVIDIA GPU: V100

NVIDIA Driver Version:

CUDA Version: Cuda compilation tools, release 12.5, V12.5.82

CUDNN Version: 9

Operating System:

Python Version (if applicable):

Tensorflow Version (if applicable):

PyTorch Version (if applicable):

Baremetal or Container (if so, version): nvidia/cuda:12.5.1-cudnn-devel-ubuntu20.04

Relevant Files

Model link:

Steps To Reproduce

Build onnxruntime

Commands or scripts: In the build directory, run python onnxruntime_test_python_nested_control_flow_op.py

Have you tried the latest release?:

Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt):
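The polygraphy cross-check asked about above comes down to an elementwise absolute/relative tolerance comparison between the two frameworks' outputs. A minimal pure-Python sketch of that check (the `outputs_match` helper name and the tolerance defaults are illustrative assumptions, mirroring numpy's `allclose` formula, not polygraphy's actual code):

```python
def outputs_match(a, b, atol=1e-5, rtol=1e-5):
    """Return True if paired outputs agree within tolerance, using the
    |a - b| <= atol + rtol * |b| criterion (numpy allclose-style)."""
    return all(abs(x - y) <= atol + rtol * abs(y) for x, y in zip(a, b))

# Illustrative flattened outputs from a TensorRT run and an ONNX Runtime run.
trt_out = [0.100001, -2.5, 3.14159]
ort_out = [0.100002, -2.5, 3.14159]
print(outputs_match(trt_out, ort_out))  # True
```

If the outputs diverge only under the TensorRT EP, that points at the engine build (as the graphShapeAnalyzer assertion here suggests) rather than the model itself.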

lix19937 commented 1 month ago

I think you can export the ONNX model, then use:

import onnxruntime as ort

# Set providers to ['TensorrtExecutionProvider', 'CUDAExecutionProvider'],
# with TensorrtExecutionProvider having the higher priority.
sess = ort.InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'])

lix19937 commented 1 month ago

See the Execution Providers documentation for more: https://onnxruntime.ai/docs/execution-providers/
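The docs linked above describe how ONNX Runtime falls back through the provider list in priority order. A minimal sketch of selecting providers that way (the provider names are the documented ORT identifiers; the import guard is only so the sketch runs even without onnxruntime installed):

```python
# Preferred order: TensorRT first, then CUDA, then CPU as the last resort.
preferred = ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]

try:
    import onnxruntime as ort
    available = ort.get_available_providers()
except ImportError:
    # Fallback for illustration when onnxruntime is not installed.
    available = ["CPUExecutionProvider"]

# Keep only providers this build actually supports, preserving priority order.
providers = [p for p in preferred if p in available]
print(providers)
```

The resulting list can be passed directly as the `providers` argument to `ort.InferenceSession`, as in the snippet above.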