Closed Egorundel closed 1 month ago
`trtexec` can load `calibration_data.cache` via `--calib=<file>`. If you want `trtexec` to generate it, you can modify the source to support this feature.
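As a sketch of the loading side (the flags below exist in TensorRT 8.6's `trtexec`; the file paths are placeholders):

```shell
# Build an INT8 engine from an ONNX model, reusing an already-existing
# calibration cache. model.onnx and calibration_data.cache are placeholders.
trtexec --onnx=model.onnx \
        --int8 \
        --calib=calibration_data.cache \
        --saveEngine=model_int8.engine
```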
@lix19937 `trtexec` can load `calibration_data.cache`, but it cannot generate one yet, right?
It does not support reading data for calibration; the user needs to develop this feature.
`trtexec` can load `calibration_data.cache`, but if you want to generate `calibration_data.cache`, you have to develop it yourself or modify the open-source `trtexec` code.
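For reference, the calibration cache that `trtexec` consumes is a small text file: a header line identifying the TensorRT version and calibration algorithm, followed by one `tensor_name: hex` line per tensor, where the hex digits are the big-endian bytes of the float32 scale. A minimal sketch of parsing such a file (the sample contents and tensor names are made up for illustration):

```python
import struct

def parse_calibration_cache(text):
    """Parse a TensorRT calibration cache into (header, {tensor_name: scale}).

    The first line is a header such as 'TRT-8601-EntropyCalibration2';
    each following line is 'tensor_name: <8 hex digits>' where the hex
    encodes the IEEE-754 float32 scale in big-endian byte order.
    """
    lines = [ln for ln in text.splitlines() if ln.strip()]
    header, scales = lines[0], {}
    for line in lines[1:]:
        name, hexval = line.rsplit(":", 1)
        # Decode the 4 hex-encoded bytes as a big-endian float32.
        scales[name.strip()] = struct.unpack(">f", bytes.fromhex(hexval.strip()))[0]
    return header, scales

# Hypothetical cache contents: 3f800000 is the float32 1.0 in hex.
sample = "TRT-8601-EntropyCalibration2\ninput: 3f800000\noutput: 3c23d70a\n"
header, scales = parse_calibration_cache(sample)
print(header, scales["input"])  # scale 3f800000 decodes to 1.0
```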
@lix19937 I started writing calibration code in C++.
I have a question: what exactly needs to be calibrated?
1. The PyTorch model after training? — `model.pt`
2. The ONNX model? — `model.onnx`
3. Or the TensorRT engine itself? — `model.(trt/engine)`?
For 1 and 2: if you do PTQ, use the ONNX file.
For 3: the ONNX model is converted to a plan (engine) through TensorRT calibration.
This is only possible if I already have the `calibration_data.cache` calibration file, right?
If I do not have a calibration file, then do I need to create it by calibrating the ONNX model?
@lix19937 Can you help me solve the problem in my efforts? I will be very grateful to you.
Most likely the calibration program was written incorrectly. You can refer to https://github.com/lix19937/trt-samples-for-hackathon-cn/tree/master/cookbook/03-BuildEngineByTensorRTAPI/MNISTExample-pyTorch/C%2B%2B
@Egorundel, FYI: Polygraphy supports dumping the calibration cache using `run --calibration-cache <file_path>`.
https://github.com/NVIDIA/TensorRT/tree/release/10.2/tools/Polygraphy#command-line-toolkit
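A sketch of such an invocation (worth checking the exact flags against your Polygraphy version; the file paths are placeholders):

```shell
# Run the ONNX model under TensorRT in INT8 mode; the calibration cache
# is written to calib.cache on the first run and reused afterwards.
polygraphy run model.onnx --trt --int8 --calibration-cache calib.cache
```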
@lix19937 Thanks for your help!!!
Can you explain what the `npy` and `npz` arrays do? Can I replace them with simply reading images from the image paths listed in a txt file?
They save the calibration data to a NumPy-format file.
So modify https://github.com/lix19937/trt-samples-for-hackathon-cn/blob/master/cookbook/03-BuildEngineByTensorRTAPI/MNISTExample-pyTorch/C%2B%2B/createCalibrationAndInferenceData.py#L25-L27 to use your own data path and create the npz data.
`cnpy` just reads the npz data: https://github.com/lix19937/trt-samples-for-hackathon-cn/blob/master/cookbook/03-BuildEngineByTensorRTAPI/MNISTExample-pyTorch/C%2B%2B/cnpy.cpp
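A minimal sketch of that pattern, assuming NumPy is available: read image paths from a txt file and pack one array per image into a single `.npz` file. The file names are hypothetical, and random float32 data stands in for real image decoding and preprocessing (swap in your own loader):

```python
import os
import tempfile
import numpy as np

def build_calib_npz(list_txt, out_npz, shape=(3, 28, 28)):
    """Read image paths from a txt file (one per line) and save one
    array per image into a single .npz file, keyed by index.

    Stand-in: real code would decode and preprocess each listed image;
    here we generate random float32 data of the right shape instead.
    """
    with open(list_txt) as f:
        paths = [ln.strip() for ln in f if ln.strip()]
    arrays = {str(i): np.random.rand(*shape).astype(np.float32)
              for i, _ in enumerate(paths)}
    np.savez(out_npz, **arrays)
    return len(paths)

# Demo with a temporary txt file listing two (hypothetical) image paths.
tmpdir = tempfile.mkdtemp()
list_path = os.path.join(tmpdir, "calib_list.txt")
with open(list_path, "w") as f:
    f.write("img_0001.png\nimg_0002.png\n")
npz_path = os.path.join(tmpdir, "calib.npz")
n = build_calib_npz(list_path, npz_path)
data = np.load(npz_path)
print(n, data["0"].shape)  # 2 images, each (3, 28, 28)
```

On the C++ side, `cnpy` can then load each array from this npz file by the same string key.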
@lix19937 I reworked my code (C++) and now it works correctly. I used `nvinfer1::IInt8EntropyCalibrator2`.
https://github.com/Egorundel/int8_calibrator_cpp
Feel free to take it and use it, or integrate my solution into your own C++ code.
Good.
Description
Hello!
Is there any way to use `trtexec` to create a `calibration_data.cache` calibration file and build an engine? For example, by somehow passing a folder of images to the `trtexec` command.

Environment
TensorRT Version: 8.6.1
NVIDIA GPU: RTX3060
NVIDIA Driver Version: 555
CUDA Version: 11.1
CUDNN Version: 8.0.6
Operating System:
Python Version (if applicable): 3.8