Closed netuserjun closed 2 years ago
@netuserjun could you elaborate on why sampleInt8 cannot produce a proper calibration file? The sample uses the standard interface to dump the calibration cache, see https://github.com/NVIDIA/TensorRT/blob/main/samples/common/EntropyCalibrator.h#L81
Also, from your attached code I see you use a library called odtk::Engine; maybe this library requires a calibration cache format that is not the TRT standard. Could you check that as well? Thanks!
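For reference, the "standard interface" mentioned above boils down to two cache hooks that TensorRT calls on a calibrator. The sketch below mimics their shape as seen from the TensorRT Python API; it is an illustration, not the issue's attached code. In real use, `MyCalibrator` would subclass `trt.IInt8EntropyCalibrator2` and also implement `get_batch()`/`get_batch_size()`; the class name, cache path, and dummy cache bytes here are all placeholders.

```python
import os

# Minimal sketch of the calibration-cache handling TensorRT expects.
# In real code this class subclasses trt.IInt8EntropyCalibrator2 and
# additionally implements get_batch()/get_batch_size() to feed real
# calibration images. Names below are illustrative placeholders.
class MyCalibrator:
    def __init__(self, cache_file="calibration.cache"):
        self.cache_file = cache_file

    def read_calibration_cache(self):
        # TensorRT calls this first; returning None forces recalibration.
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()
        return None

    def write_calibration_cache(self, cache):
        # TensorRT hands back the finished cache as raw bytes;
        # dump it to disk unmodified.
        with open(self.cache_file, "wb") as f:
            f.write(cache)

if __name__ == "__main__":
    calib = MyCalibrator("/tmp/demo_calibration.cache")
    calib.write_calibration_cache(b"dummy cache bytes\n")
    assert calib.read_calibration_cache() == b"dummy cache bytes\n"
```

If odtk::Engine expects a different on-disk layout than what these hooks write, a cache produced this way would load but yield a broken engine, which matches the symptom described.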
Closing due to >14 days without activity. Please feel free to reopen if the issue still exists. Thanks!
Description
I only have the FP16 ONNX file stanford_resnext50.onnx from NVIDIA's DeepStream SDK. I'm trying to build an int8 calibration cache for this model to increase FPS. Neither trtexec nor sampleInt8 produces a proper calibration file: I do get a calibration cache, but the resulting model does not work. Below is the code I use to build the int8 engine with TensorRT.
Any suggestions?
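For comparison, a typical trtexec invocation that builds an int8 engine from the ONNX model while loading a previously generated calibration cache might look like the following (flag spellings per the trtexec tool shipped with TensorRT 7.x; all file names are placeholders, not from this issue). Note that trtexec does not run real-data calibration itself, which is one reason a cache produced this way alone may not yield a working model.

```shell
# Build an int8 engine, loading an existing calibration cache.
# File names are placeholders.
trtexec --onnx=stanford_resnext50.onnx \
        --int8 \
        --calib=calibration.cache \
        --saveEngine=stanford_resnext50_int8.engine
```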
Environment
TensorRT Version: 7.1 (JetPack 4.5.1)
NVIDIA GPU: Jetson AGX Xavier
NVIDIA Driver Version:
CUDA Version: 10.2
CUDNN Version:
Operating System:
Python Version (if applicable):
Tensorflow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if so, version):