Bananaspirit opened 10 months ago
💡 Your Question
I tuned the quantization weights during training and got an ONNX model with Q/DQ layers as output. However, when I use TensorRT to convert the model to an engine with INT8 precision, I get the following message:
[W] [TRT] Calibrator won't be used in explicit precision mode. Use quantization aware training to generate network with Quantize/Dequantize nodes.
The command I used for the conversion:
trtexec --onnx=./yolo_pt_model.onnx --int8 --saveEngine=./res.trt
Questions:
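A quick sanity check in this situation is to confirm that the exported ONNX graph really contains QuantizeLinear/DequantizeLinear nodes, since TensorRT switches to explicit-precision mode (and skips the calibrator) when it sees them. A minimal sketch, assuming the `onnx` package is installed; the path `./yolo_pt_model.onnx` is the file from the question, and `has_qdq` is a helper name chosen here for illustration:

```python
def has_qdq(op_types):
    """Return True if the given op types include both Q and DQ nodes,
    i.e. the graph looks like an explicitly quantized (QAT) export."""
    ops = set(op_types)
    return "QuantizeLinear" in ops and "DequantizeLinear" in ops

# Usage with the onnx package (commented out so the sketch stays
# self-contained; onnx.load and graph.node are standard onnx APIs):
#   import onnx
#   model = onnx.load("./yolo_pt_model.onnx")
#   print(has_qdq(node.op_type for node in model.graph.node))
```

If the check returns True, the warning is expected and harmless: with Q/DQ nodes present, the scales come from the model itself and no INT8 calibrator is needed.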
Versions
No response