I have a ViT model for object detection. In a TensorRT 8.5 environment, inference runs at 190 ms per frame. However, after updating to TensorRT 9.3, inference slowed to 250 ms per frame.
I obtained the C++ dynamic library by compiling the latest Torch-TensorRT source code.
What might be causing this slowdown?
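To rule out measurement differences between the two environments, it helps to time inference the same way in both: warm-up iterations first (engine/CUDA context initialization can dominate early calls), then many timed iterations. A minimal, framework-agnostic sketch — `infer` is a hypothetical stand-in for your actual model call, not part of any Torch-TensorRT API:

```python
import time
import statistics

def benchmark(infer, n_warmup=10, n_iters=100):
    """Time a per-frame inference callable.

    Returns (mean, p99) latency in milliseconds. `infer` is assumed to be
    a zero-argument callable wrapping one full inference (including any
    device synchronization, so GPU work is not measured asynchronously).
    """
    for _ in range(n_warmup):
        infer()  # warm up: lazy init, caches, autotuning
    times_ms = []
    for _ in range(n_iters):
        t0 = time.perf_counter()
        infer()
        times_ms.append((time.perf_counter() - t0) * 1e3)
    times_ms.sort()
    p99 = times_ms[min(len(times_ms) - 1, int(0.99 * len(times_ms)))]
    return statistics.mean(times_ms), p99

# Dummy CPU workload standing in for model(frame):
mean_ms, p99_ms = benchmark(lambda: sum(i * i for i in range(10_000)))
```

Comparing mean and p99 from identical runs under TensorRT 8.5 and 9.3 makes it clear whether the regression is uniform or dominated by tail latency (e.g., re-compilation or fallback to slower kernels).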
Environment
Build information about Torch-TensorRT can be found by turning on debug messages
Libtorch Version (e.g., 1.0): 2.2.1
CPU Architecture:
OS (e.g., Linux): Ubuntu 22.04
How you installed PyTorch (conda, pip, libtorch, source):
Build command you used (if compiling from source):
Are you using local sources or building from archives: Yes, local sources