NVIDIA / TensorRT

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
https://developer.nvidia.com/tensorrt
Apache License 2.0

ONNX to TensorRT conversion for input layer: cast uint8 to fp32 #4131

Open maxlacourchristensen opened 2 weeks ago

maxlacourchristensen commented 2 weeks ago

My PyTorch and ONNX model has a uint8-to-fp32 cast layer followed by a divide-by-255 normalization, applied directly to the input tensor. When I convert the ONNX model to TensorRT with INT8 I get the following warning:

“Missing scale and zero-point for tensor input, expect fall back to non-int8 implementation for any layer consuming or producing given tensor”

For INT8, should I remove the cast layer before exporting the ONNX model, or does TensorRT handle it itself? What is the recommended approach for best INT8 performance?

Platforms are Jetson AGX Orin, Jetson Xavier NX, and Jetson Orin NX.

lix19937 commented 2 weeks ago

Similar case https://github.com/NVIDIA/TensorRT/issues/3959