NVIDIA / TensorRT

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
https://developer.nvidia.com/tensorrt
Apache License 2.0

I have a quick question about a method of unloading shared library #3917

Open Wooho-Moon opened 4 months ago

Wooho-Moon commented 4 months ago

Description



It is not an error, but I don't want TensorRT to load a shared library. Specifically, when I execute a C++ program that uses a TRT engine for object detection, I see the following log:

[V] Trying to load shared library libcudnn.so.8
[V] Loaded shared library libcudnn.so.8
[V] Using cuDNN as plugin tactic source
[V] Using cuDNN as core library tactic source
[I] [MemUsageChange] Init cuDNN: CPU +617, GPU +586, now: CPU 927, GPU 4940 (MiB)

I don't want to load cuDNN. Could you give me some advice? Note that I cannot rebuild TensorRT without cuDNN; all I can do is modify the source code of my object-detection inference program.

Environment

TensorRT Version: 8.5.1.2

NVIDIA GPU: Jetson NX

NVIDIA Driver Version:

CUDA Version: 11.4

CUDNN Version: 8.6

Operating System:

Python Version (if applicable): 3.8

PyTorch Version (if applicable): 1.11

Relevant Files

Model link:

Steps To Reproduce

Commands or scripts:

Have you tried the latest release?:

Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt):

lix19937 commented 4 months ago

Pass --tacticSources=-CUDNN to disable cuDNN as a tactic source.
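For context, a full trtexec invocation with this flag might look like the following sketch; the model and engine paths are placeholders, not files from this issue:

```shell
# Build an engine while excluding cuDNN from the tactic sources.
# model.onnx and model.engine are placeholder paths.
trtexec --onnx=model.onnx \
        --saveEngine=model.engine \
        --tacticSources=-CUDNN
```

The `-CUDNN` syntax removes cuDNN from the default set of tactic sources rather than listing sources explicitly.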

Wooho-Moon commented 4 months ago

Pass --tacticSources=-CUDNN to disable cuDNN as a tactic source.

Thanks for the answer. But I don't know exactly what the option you mentioned is. Could you explain in detail?

lix19937 commented 4 months ago

If you use trtexec, you can add this parameter. @Wooho-Moon