laugh12321 / TensorRT-YOLO

🚀 Your YOLO Deployment Powerhouse. With the synergy of TensorRT Plugins, CUDA Kernels, and CUDA Graphs, experience lightning-fast inference speeds.
https://github.com/laugh12321/TensorRT-YOLO
GNU General Public License v3.0

[Help]: How to export YOLOv10 to ONNX? #28

Closed — zhongqiu1245 closed this issue 4 months ago

zhongqiu1245 commented 4 months ago

I ran this command:

trtyolo export -w yolov10m.pt -v yolov10 -o models

However, I got:

[I] Starting export with Pytorch.
[E] YOLO version 'yolov10' not supported

Could you tell me the right command to export YOLOv10 to ONNX? Thanks!

laugh12321 commented 4 months ago

@zhongqiu1245 The export method is detailed in commit 62071cb. Here's a step-by-step explanation:

  1. Download the fork laugh12321/yolov10/tree/nms to your local machine and set up the environment as per the instructions.
  2. Refer to the export method described in THU-MIG/yolov10#29-2316587365.

YOLOv10 is NMS-free by design; TensorRT EfficientNMS is added here to make the model easier to use within this project. Practical tests show that processing speed with EfficientNMS is comparable to v10postprocess. For specific benchmarks, please refer to THU-MIG/yolov10#29-2131005872.

zhongqiu1245 commented 4 months ago

Thank you!