Closed Egorundel closed 8 months ago
Why does an error occur when the ONNX file generated by export.py is converted to a TensorRT engine, while the one from https://github.com/WongKinYiu/yolov9/issues/79#issue-2153547004 converts normally?
Which TensorRT version are you using?
> Which TensorRT version are you using?
tensorrt==7.2.1.6
Hello everyone!
I would like to introduce my open-source project, TensorRT-YOLO, a tool for deploying the YOLO series (including YOLOv9) with Efficient NMS in TensorRT.
Performance test using an RTX 2080Ti 22GB GPU on an AMD Ryzen 7 5700X 8-core CPU with 128GB RAM.
Model performance was evaluated with TensorRT engines built using TensorRT-YOLO.
All models were deployed with FP16, batch size 4, and input size 640.
This includes YOLOv9-C, YOLOv9-E, YOLOv9-C-Converted, YOLOv9-E-Converted, GELAN-C, and GELAN-E.
| Model | YOLOv9-C | YOLOv9-E | YOLOv9-C-Converted | YOLOv9-E-Converted | GELAN-C | GELAN-E |
|---|---|---|---|---|---|---|
| Average Latency | 36.615 ms | 59.736 ms | 19.689 ms | 53.144 ms | 19.557 ms | 53.575 ms |
This includes YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and YOLOv8x.
| Model | YOLOv8n | YOLOv8s | YOLOv8m | YOLOv8l | YOLOv8x |
|---|---|---|---|---|---|
| Average Latency | 10.289 ms | 12.459 ms | 18.514 ms | 24.926 ms | 34.587 ms |
Does the export to ONNX work with the NMS module and a dynamic batch size? If so, how do I do it? As I understand it, the NMS module only works for TF models?
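For the dynamic batch size part of the question: when exporting a PyTorch model to ONNX, the batch dimension is usually made dynamic via the `dynamic_axes` argument of `torch.onnx.export`. Below is a minimal sketch of that mapping; the tensor names `"images"` and `"output"` are assumptions for illustration, not necessarily the names used by this repo's export.py.

```python
# Hedged sketch: declaring a dynamic batch dimension for ONNX export.
# dynamic_axes maps each named tensor to the axes that may vary at runtime;
# here only axis 0 (the batch dimension) is marked dynamic.
dynamic_axes = {
    "images": {0: "batch"},   # model input: batch size is dynamic
    "output": {0: "batch"},   # model output: batch size is dynamic
}

# In a real export (requires torch and a loaded model), this dict would be
# passed to torch.onnx.export, e.g.:
# torch.onnx.export(model, dummy_input, "model.onnx",
#                   input_names=["images"], output_names=["output"],
#                   dynamic_axes=dynamic_axes)
print(dynamic_axes["images"][0])
```

When the engine is later built, TensorRT then needs an optimization profile covering the batch range you intend to run; a fixed-shape ONNX cannot be made dynamic at engine-build time.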