NVIDIA / TensorRT

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
https://developer.nvidia.com/tensorrt
Apache License 2.0

Different versions of TensorRT get different model inference results #4209

Open demuxin opened 1 week ago

demuxin commented 1 week ago

Description

I run inference on the GroundingDINO model using the C++ TensorRT API.

For the same model and the same image, TensorRT 8.6 produces the correct detection boxes.

But after updating to TensorRT 10.4, no detection boxes are produced.

The wrong results may be caused by TensorRT 10.4. How can I analyze this issue?

By the way, I've tried multiple versions other than 8.6 (e.g. 9.3, 10.0, 10.1); none of them produce detection boxes.
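A common way to localize this kind of divergence is to dump intermediate tensors from both TensorRT builds and find the first tensor where they disagree (the `polygraphy run model.onnx --trt --onnxrt` tool that ships with TensorRT automates a similar comparison against ONNX Runtime). A minimal sketch of the comparison step, with hypothetical tensor names and synthetic data standing in for real dumps:

```python
import math

def first_divergence(outputs_a, outputs_b, rel_tol=1e-3, abs_tol=1e-3):
    """Return the name of the first tensor where the two runs disagree,
    or None if every tensor matches within tolerance.

    Each argument maps tensor name -> flattened list of floats,
    e.g. values dumped from the TRT 8.6 and TRT 10.4 engines.
    """
    for name, a in outputs_a.items():
        b = outputs_b.get(name)
        if b is None or len(a) != len(b) or any(
            not math.isclose(x, y, rel_tol=rel_tol, abs_tol=abs_tol)
            for x, y in zip(a, b)
        ):
            return name
    return None

# Synthetic stand-ins for per-layer dumps from the two TensorRT versions
# (in practice these would be loaded from files written by each build):
run86 = {"backbone.out": [1.0, 1.0, 1.0], "logits": [0.9, 0.1]}
run104 = {"backbone.out": [1.0, 1.0, 1.0], "logits": [0.1, 0.9]}
print(first_divergence(run86, run104))  # -> logits
```

Starting from the first diverging tensor usually narrows the problem down to a single layer or fusion rather than the whole 20k-layer graph.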

Additional information:

I load the same ONNX model via the C++ TensorRT API and print information for each layer.

TensorRT 8.6 loads the model with 21060 layers, while TensorRT 10.4 loads it with 37921 layers. Why is the difference in layer count so large?

rt104_layers.txt rt86_layers.txt
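To see where the extra layers come from, one rough check is to tally op types in each attached dump and compare the counts. This is a sketch only; it assumes each line of `rt86_layers.txt` / `rt104_layers.txt` begins with the layer's op type, which may not match the actual dump format:

```python
from collections import Counter

def op_histogram(lines):
    """Count layer op types, taking the first whitespace-separated token per line."""
    return Counter(line.split()[0] for line in lines if line.strip())

def histogram_delta(hist_a, hist_b):
    """Op types whose counts differ between two dumps, sorted by absolute change."""
    ops = set(hist_a) | set(hist_b)
    delta = {op: hist_b.get(op, 0) - hist_a.get(op, 0) for op in ops}
    return sorted(((op, d) for op, d in delta.items() if d),
                  key=lambda item: -abs(item[1]))

# Synthetic stand-ins for the two attached layer dumps:
rt86 = ["Conv conv1", "Relu act1", "Conv conv2"]
rt104 = ["Conv conv1", "Relu act1", "Conv conv2", "Cast c0", "Cast c1"]
print(histogram_delta(op_histogram(rt86), op_histogram(rt104)))  # -> [('Cast', 2)]
```

If one or two op types account for most of the ~17k extra layers, that points at a specific change in how the newer parser or builder decomposes the graph.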

Environment

TensorRT Version: 8.6.1.6 / 10.4.0.26

NVIDIA GPU: GeForce RTX 3090

NVIDIA Driver Version: 535.183.06

CUDA Version: 12.2

Relevant Files

Model link: https://drive.google.com/file/d/1VRHKT7cswtDVXNUUmebbPmBSAOyd-fJN/view?usp=drive_link

yuanyao-nv commented 1 week ago

Can you please try TRT 10.5? There was a known accuracy bug that was fixed in 10.5. Thanks!

demuxin commented 3 days ago

Hi @yuanyao-nv, I tried TRT 10.5, but the model still produces no output boxes.

How can this problem be solved?