triple-Mu / YOLOv8-TensorRT

YOLOv8 accelerated with TensorRT!
MIT License

deepstream python app on jetson #86

Closed · sergii-matiukha closed this issue 1 year ago

sergii-matiukha commented 1 year ago

How can I build an INT8 engine for a YOLOv8 model with imgsz=3040 for use in a DeepStream Python app on Jetson? And how much does the accuracy drop when switching from FP32 to INT8?

triple-Mu commented 1 year ago

> How can I build an INT8 engine for a YOLOv8 model with imgsz=3040 for use in a DeepStream Python app on Jetson? And how much does the accuracy drop when switching from FP32 to INT8?

I haven't tested INT8 inference. This repo provides a DeepStream C++ demo in csrc/deepstream.
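
For reference, here is a rough sketch of how one might build an INT8 engine from an exported ONNX model using the TensorRT Python API with an entropy calibrator. This is not part of this repo and is untested here; names such as `ImageCalibrator`, `calib.cache`, and `yolov8_int8.engine` are placeholders, and it assumes the ONNX model was exported with a static input shape (e.g. 1x3x3040x3040):

```python
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda


class ImageCalibrator(trt.IInt8EntropyCalibrator2):
    """Feeds preprocessed image batches to TensorRT for INT8 calibration."""

    def __init__(self, image_batches, cache_file="calib.cache"):
        super().__init__()
        # image_batches: list of (N, 3, H, W) float32 arrays, already
        # preprocessed exactly like at inference time (resize, normalize, ...)
        self.batches = iter(image_batches)
        self.cache_file = cache_file
        first = image_batches[0]
        self.batch_size = first.shape[0]
        self.device_input = cuda.mem_alloc(first.nbytes)

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        try:
            batch = next(self.batches)
        except StopIteration:
            return None  # no more calibration data
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        return [int(self.device_input)]

    def read_calibration_cache(self):
        try:
            with open(self.cache_file, "rb") as f:
                return f.read()
        except FileNotFoundError:
            return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)


def build_int8_engine(onnx_path, calibrator, engine_path="yolov8_int8.engine"):
    logger = trt.Logger(trt.Logger.INFO)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.INT8)
    config.int8_calibrator = calibrator
    serialized = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized)
```

A few hundred representative images at the same imgsz are usually fed through the calibrator. On the DeepStream side, the standard Gst-nvinfer config keys `model-engine-file` (pointing at the generated engine) and `network-mode=1` (INT8) would then be used; that is generic DeepStream configuration, not something specific to this repo.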

triple-Mu commented 1 year ago

Closed. Feel free to reopen it if you have further questions. Thanks!