NannilaJagadees opened this issue 1 month ago
@NannilaJagadees You can upload the full log from when you use the calib table to generate the engine.
It is highly likely that the problem is in your calib program.
You can also try removing `--explicit-batch`.
Furthermore, you can use Polygraphy for calibration: https://github.com/NVIDIA/TensorRT/tree/release/10.3/tools/Polygraphy/examples/api/04_int8_calibration_in_tensorrt
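Following the linked Polygraphy example, calibration boils down to a data loader that yields feed dicts plus a `Calibrator` wrapped into the build config. Below is a minimal sketch; the tensor name `"input"`, the `(1, 3, 512, 512)` shape, and the random data are assumptions for illustration (check the real input name with `polygraphy inspect model segformer.onnx`, and feed real preprocessed images for a meaningful cache):

```python
import numpy as np

def calib_data(num_batches=8, batch_shape=(1, 3, 512, 512)):
    """Yield feed_dicts consumable by Polygraphy's Calibrator.

    Random data is a stand-in here; for a usable calibration cache,
    yield batches from your real dataset, preprocessed exactly as at
    inference time.
    """
    for _ in range(num_batches):
        # "input" is an assumed tensor name -- replace with your
        # model's actual input binding name.
        yield {"input": np.random.rand(*batch_shape).astype(np.float32)}

# Hooking it up (requires tensorrt + polygraphy installed on the device):
# from polygraphy.backend.trt import (Calibrator, CreateConfig,
#                                     EngineFromNetwork, NetworkFromOnnxPath)
# calibrator = Calibrator(data_loader=calib_data(),
#                         cache="segformer_calib.cache")
# build_engine = EngineFromNetwork(
#     NetworkFromOnnxPath("segformer.onnx"),
#     config=CreateConfig(int8=True, calibrator=calibrator))
# engine = build_engine()
```

The data loader is plain NumPy, so it can be sanity-checked on any machine before moving to the Jetson.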
Hi @lix19937
I tried using Polygraphy but faced the same issue.
Description
I am using this calibration script to generate the calib cache file for a Segformer ONNX model, but I run into this issue while generating the calib cache.
But I am able to generate an engine file with trtexec (without the calib file) using this command, with no issues:
```
trtexec --onnx=segformer.onnx --saveEngine=segformer.engine --int8 --useCudaGraph --dumpLayerInfo --profilingVerbosity=detailed
```
How can we run INT8 calibration for this model?
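For context, once a calibration cache has been produced (by a script or by Polygraphy), trtexec can consume it via `--calib` instead of running with implicit dynamic ranges. A sketch, assuming the cache file is named `segformer_calib.cache`:

```shell
# Build an INT8 engine using a pre-generated calibration cache
# (file names are assumptions for illustration).
trtexec --onnx=segformer.onnx \
        --saveEngine=segformer_int8.engine \
        --int8 \
        --calib=segformer_calib.cache
```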
Environment
TensorRT Version: 8.6.2.3
NVIDIA GPU: Orin Nano 8 GB
CUDA Version: 12.2
CUDNN Version: 8.9.4
Operating System: Jetpack 6.0
Python Version: 3.10
PyTorch Version: 2.3.0