levipereira / deepstream-yolov9

Implementation of Nvidia DeepStream 7 with YOLOv9 Models.
Apache License 2.0

TensorRT Plugin #1

Open johnnynunez opened 4 months ago

johnnynunez commented 4 months ago

I've seen your https://github.com/NVIDIA/TensorRT/pull/3859. Is it possible to have it on TRT 10? I'm working on a Jetson AGX Orin, which is now compatible with CUDA 12.5, cuDNN 9.1.1, and TensorRT 10.0.1.6. Also, is it compatible with YOLOv8?

levipereira commented 4 months ago

Yes, it can easily be implemented on TRT 10 and for any YOLO version since v4, because it is the same implementation as the End2End EfficientNMS plugin but adds a new output layer, det_indices. I will try to find some free time and implement it on 8.5 and 10.0.
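The role of the extra det_indices output can be illustrated with a toy greedy NMS in NumPy: alongside the usual EfficientNMS-style outputs (kept boxes, scores, classes), det_indices records which of the original candidate boxes each kept detection came from. This is only an illustrative sketch of the idea, not the plugin's actual code:

```python
import numpy as np

def iou(a, b):
    # a: (4,), b: (N, 4), boxes as [x1, y1, x2, y2]
    x1 = np.maximum(a[0], b[:, 0]); y1 = np.maximum(a[1], b[:, 1])
    x2 = np.minimum(a[2], b[:, 2]); y2 = np.minimum(a[3], b[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a + area_b - inter)

def nms_with_indices(boxes, scores, iou_thr=0.5):
    """Greedy NMS that also returns each kept box's index in the
    ORIGINAL candidate array -- the role det_indices plays."""
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # drop candidates that overlap the kept box too much
        order = rest[iou(boxes[i], boxes[rest]) < iou_thr]
    return np.array(keep)

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
det_indices = nms_with_indices(boxes, scores)
print(det_indices)  # candidates 0 and 2 survive; 1 overlaps 0
```

Having these indices lets downstream code map kept detections back to per-candidate data (e.g. mask coefficients for instance segmentation) without recomputing the match.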

levipereira commented 4 months ago

https://github.com/NVIDIA/TensorRT/pull/3859#issuecomment-2125899520

levipereira commented 4 months ago

@johnnynunez Check this out. https://github.com/levipereira/ultralytics -- Added Support for TRT Plugin YoloNMS on Yolov8 for Instance Segmentation and Object Detection

I have tested/validated on deepstream with yolov8n -- https://github.com/levipereira/deepstream-yolov9

```python
from ultralytics import YOLO

# model = YOLO("yolov8n-seg.pt")  # instance segmentation
model = YOLO("yolov8n.pt")        # object detection
model.export(format="onnx_trt")
```
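After the export above, the ONNX model can be built into a TensorRT engine with trtexec. The file names here are illustrative, and the plugin library path is an assumption about your local build; if the NMS plugin is already registered in your TensorRT install, the plugin flag can be dropped:

```shell
# Build a TensorRT engine from the exported ONNX (file names are illustrative).
# On TRT 8.x the flag is --plugins; TRT 10 uses --staticPlugins instead.
trtexec --onnx=yolov8n.onnx \
        --saveEngine=yolov8n.engine \
        --fp16 \
        --plugins=libyolo_nms_plugin.so
```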
johnnynunez commented 4 months ago

@levipereira awesome! But maybe I still have to make predict compatible. These guys did it: https://github.com/nkb-tech/ultralytics

> @johnnynunez Check this out. https://github.com/levipereira/ultralytics -- Added Support for TRT Plugin YoloNMS on Yolov8 for Instance Segmentation and Object Detection
>
> I have tested/validated on deepstream with yolov8n -- https://github.com/levipereira/deepstream-yolov9
>
> ```python
> from ultralytics import YOLO
> # model = YOLO("yolov8n-seg.pt")
> model = YOLO("yolov8n.pt")
> model.export(format="onnx_trt")
> ```
johnnynunez commented 4 months ago

@levipereira also can you create a PR to ultralytics?

levipereira commented 4 months ago

@johnnynunez

With Triton Server and Triton Client, we can easily perform inference and evaluation on any YOLO Series model. Check out the evaluation results of YOLOv8 models using YOLO_NMS_TRT at the link below:

YOLOv8 Evaluation Results

Implementing inference using the TensorRT API and Custom Plugin within the Ultralytics project involves a significant amount of work. I may consider implementing it in the future.

Using Triton Server, we can build and test any model without additional effort.

For more information, visit:

levipereira commented 4 months ago

> @levipereira also can you create a PR to ultralytics?

I will implement end2end with EfficientNMS or YOLO_NMS_TRT and open a PR.

johnnynunez commented 4 months ago

@levipereira do you get a lower mAP with efficient_nms in COCO eval?

levipereira commented 3 months ago

> @levipereira do you have lower mAP with efficient_nms in COCO eval?

No, I did not get a lower mAP. The results were consistent with the baseline evaluation.

levipereira commented 3 months ago

@johnnynunez I got the same result, even with FP16: https://github.com/levipereira/triton-server-yolo?tab=readme-ov-file#evaluation-test-on-tensorrt