enazoe / yolo-tensorrt

TensorRT8. Supports Yolov5n, s, m, l, x. darknet -> tensorrt. Yolov4 and Yolov3 use raw darknet *.weights and *.cfg files. If the wrapper is useful to you, please star it.
MIT License

How to use INT8 inference? #120

Closed lchop closed 3 years ago

lchop commented 3 years ago

Hello everyone and thank you for your work,

I am trying to use the INT8 inference program, but I can't find a proper tutorial on how to use it. Can someone please explain how to make it work? I understand that you need calibration data and a calibration txt file, but I don't know how to obtain these files. Thank you for your help.

enazoe commented 3 years ago

Make the config like this:

Config config_v4;
config_v4.net_type = YOLOV4;
config_v4.file_model_cfg = "../configs/yolov4.cfg";
config_v4.file_model_weights = "../configs/yolov4.weights";
config_v4.calibration_image_list_file_txt = "../configs/calibration_images.txt";
config_v4.inference_precison = INT8;
config_v4.detect_thresh = 0.5;

If the program cannot find a file, please use absolute paths, and put the calibration image paths into calibration_images.txt.
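
For reference, here is a minimal sketch of how that config is consumed, modelled on the sample_detector demo in this repo (Detector, BatchResult, and the Config fields come from class_detector.h; the image path here is only an example, so check the names against your checkout):

#include <iostream>
#include <memory>
#include <vector>
#include <opencv2/opencv.hpp>
#include "class_detector.h"

int main()
{
    // INT8 config as shown above
    Config config_v4;
    config_v4.net_type = YOLOV4;
    config_v4.file_model_cfg = "../configs/yolov4.cfg";
    config_v4.file_model_weights = "../configs/yolov4.weights";
    config_v4.calibration_image_list_file_txt = "../configs/calibration_images.txt";
    config_v4.inference_precison = INT8;
    config_v4.detect_thresh = 0.5;

    // building the engine runs the INT8 calibration, so the first init is slow
    std::unique_ptr<Detector> detector(new Detector());
    detector->init(config_v4);

    // run one image through the detector
    std::vector<cv::Mat> batch_img{ cv::imread("../configs/dog.jpg") };
    std::vector<BatchResult> batch_res;
    detector->detect(batch_img, batch_res);

    // print class id and confidence for each detection in the first image
    for (const auto &r : batch_res[0])
        std::cout << "class " << r.id << "  prob " << r.prob << std::endl;
    return 0;
}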

lchop commented 3 years ago

Ok yes, thanks, it is working now!

For those who, like me, don't know how to use INT8:

1) Create a directory that contains a data set of images (at least 500). I extracted mine from the COCO dataset.
2) Create a *.txt file that lists all the image paths (absolute); see the sketch below for one way to generate it.
3) As @enazoe wrote above, modify the config.
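
A small, hypothetical helper for step 2 (C++17, not part of this repo): it writes the absolute path of every .jpg/.jpeg/.png in a directory to the list file, one path per line, which is the format the calibrator expects:

#include <filesystem>
#include <fstream>
#include <iostream>
#include <string>

namespace fs = std::filesystem;

// usage: make_calib_list <image_dir> <output_txt>
int main(int argc, char **argv)
{
    if (argc < 3) {
        std::cerr << "usage: make_calib_list <image_dir> <output_txt>\n";
        return 1;
    }
    std::ofstream list(argv[2]);
    for (const auto &entry : fs::directory_iterator(argv[1])) {
        const std::string ext = entry.path().extension().string();
        if (ext == ".jpg" || ext == ".jpeg" || ext == ".png")
            list << fs::absolute(entry.path()).string() << "\n";  // one absolute path per line
    }
    return 0;
}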

Hope this helps!

Lenan22 commented 1 year ago

Please refer to our open-source quantization tool ppq; its quantization results are better than TensorRT's built-in calibration. If you encounter issues, we can help you solve them. https://github.com/openppl-public/ppq/blob/master/md_doc/deploy_trt_by_OnnxParser.md