NVIDIA-AI-IOT / yolo_deepstream

YOLO model QAT and deployment with DeepStream & TensorRT
Apache License 2.0

Yolov7-QAT: Different graph exported in PTQ INT8 compared with the guide #42

Open Jackforward opened 1 year ago

Jackforward commented 1 year ago

I downloaded the yolov7 ONNX file as described in https://github.com/NVIDIA-AI-IOT/yolo_deepstream and then converted it into a TensorRT INT8 engine in PTQ mode. The platform is a Drive AGX Orin iGPU. However, the resulting graph is different from the one shown in the guide at https://github.com/NVIDIA-AI-IOT/yolo_deepstream/blob/main/yolov7_qat/doc/Guidance_of_QAT_performance_optimization.md (my conversion step is roughly sketched after the platform details below).

  1. Platform: Drive AGX Orin
  2. TensorRT: 8.4.11
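
For reference, below is a minimal sketch of how such a PTQ INT8 engine build typically looks with the TensorRT 8.x Python API. This is only an illustration of the conversion step described above, not the exact commands I ran; the file names and the calibrator are placeholder assumptions, and `trtexec --onnx=yolov7.onnx --int8 --saveEngine=yolov7_ptq_int8.engine` would be an equivalent command-line route.

```python
# Hypothetical PTQ INT8 build sketch (TensorRT 8.x Python API).
# File names and the calibrator below are placeholders, not the original repro steps.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

def build_int8_engine(onnx_path: str, engine_path: str) -> None:
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    # Parse the exported yolov7 ONNX graph.
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("failed to parse ONNX file")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.INT8)
    # For meaningful PTQ scales a calibrator (or explicit per-tensor dynamic
    # ranges) is normally attached here, e.g.:
    # config.int8_calibrator = MyEntropyCalibrator(...)  # user-provided, not shown
    # Without it, TensorRT typically warns and uses placeholder scales.

    serialized = builder.build_serialized_network(network, config)
    if serialized is None:
        raise RuntimeError("engine build failed")
    with open(engine_path, "wb") as f:
        f.write(serialized)

if __name__ == "__main__":
    build_int8_engine("yolov7.onnx", "yolov7_ptq_int8.engine")
```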
wanghr323 commented 1 year ago

Would you mind sharing your reproduction steps, and attaching the logs and exported graph files here?