marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

Detections not correctly parsed after exporting to ONNX #362

Open SIGLOST opened 1 year ago

SIGLOST commented 1 year ago

Hi Marcos,

Thank you for the great work!

I've been testing the latest iteration, and there appears to be a mismatch between the export scripts and the bounding box parser.

I followed the instructions and used the Ultralytics container with the latest available export script (which defaults to ONNX opset 17):

```
root@a42479e971ed:/usr/src/app# python3 export_yoloV5.py -w yolov5s.pt --simplify --dynamic

Starting: yolov5s.pt
Opening YOLOv5 model

YOLOv5 🚀 v7.0-175-g5f11555 Python-3.10.9 torch-2.0.0 CPU

Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients

Creating labels.txt file

Exporting the model to ONNX
================ Diagnostic Run torch.onnx.export version 2.0.0 ================
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

Simplifying the ONNX model
Done: yolov5s.onnx
```
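
Before moving to DeepStream, it can help to sanity-check what the export actually produced. The snippet below is only a sketch (it assumes onnx and onnxruntime are installed in the same environment and a fixed 1x3x640x640 input); if the output names or shapes don't match what the repo's bounding box parser expects, that points to the export script and parser being out of sync:

```python
# Minimal ONNX sanity check (illustrative only, not part of the repo).
import numpy as np
import onnx
import onnxruntime as ort

model = onnx.load("yolov5s.onnx")
onnx.checker.check_model(model)
print("graph outputs:", [o.name for o in model.graph.output])

sess = ort.InferenceSession("yolov5s.onnx", providers=["CPUExecutionProvider"])
# Assumed input shape; the exported YOLOv5s model takes NCHW float32.
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
results = sess.run(None, {sess.get_inputs()[0].name: dummy})
for out, arr in zip(sess.get_outputs(), results):
    print(out.name, arr.shape, arr.dtype)
```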

This is what I get when deploying the resulting model on a dGPU with the DeepStream 6.1.1 container (NMS clustering is enabled). Attached screenshot: yolov5s_issue

I also tried other YOLO variants such as YOLOv8 and YOLO-NAS; in those cases the bounding box positions are correct, but every detection gets the same class, person (the first of the 80 classes in the COCO labels.txt). Attached screenshot: yolo8_yolonas_issue
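
For what it's worth, that symptom is consistent with a layout mismatch between the exported output tensor and the parser: if the parser reads the class-id field from the wrong offset, or never receives a separate class tensor it expects, every detection can collapse to class index 0, which is person in COCO. A toy numpy illustration with a hypothetical [x1, y1, x2, y2, score, class_id] layout (not the repo's actual C++ parser):

```python
import numpy as np

# Hypothetical flat output: one row per detection,
# fields = [x1, y1, x2, y2, score, class_id]
dets = np.array([
    [ 10.0,  20.0, 110.0, 220.0, 0.91, 16.0],  # dog
    [300.0,  40.0, 420.0, 300.0, 0.88,  2.0],  # car
], dtype=np.float32)

# Parser and export agree on the layout: correct class ids.
print(dets[:, 5].astype(int))   # [16  2]

# Parser expects a layout where class ids live elsewhere and falls back
# to a default of 0 for every detection -> everything becomes "person".
fallback_ids = np.zeros(len(dets), dtype=int)
print(fallback_ids)             # [0 0]
```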

marcoslucianops commented 1 year ago

I'm not sure, but the problem may be with PyTorch >= 2.0. I will test it today.
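
One quick way to check which toolchain produced a given ONNX file (useful when comparing exports done with PyTorch 1.x vs 2.x) is to read its metadata. A sketch, assuming torch and onnx are importable:

```python
# Print the versions and opset recorded in the exported model (illustrative).
import torch
import onnx

print("torch:", torch.__version__)
model = onnx.load("yolov5s.onnx")
print("producer:", model.producer_name, model.producer_version)
print("opset:", [op.version for op in model.opset_import])
```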

marcoslucianops commented 1 year ago

I can reproduce this issue in DeepStream 6.1.1 using the Docker container.

marcoslucianops commented 1 year ago

Try again with the updated files I just uploaded to the repo. Export the ONNX model with the new export file, regenerate the TensorRT engine with the updated files, and use the new config_infer_primary file that corresponds to your model.
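
For readers following along, the fields that have to line up after re-exporting live in the model's config_infer_primary file. The fragment below is a trimmed, illustrative sketch in the standard nvinfer config format (the actual files in the repo contain more properties, and paths/values depend on your setup); deleting any stale .engine file makes DeepStream rebuild it from the new ONNX:

```ini
[property]
onnx-file=yolov5s.onnx
# Remove any old model_b*.engine so TensorRT regenerates it from the new ONNX
model-engine-file=model_b1_gpu0_fp32.engine
labelfile-path=labels.txt
num-detected-classes=80
# Custom bounding box parser from this repo
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
# cluster-mode=2 enables NMS clustering
cluster-mode=2

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
```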

M1ke002 commented 1 year ago

> Try again with the updated files I just uploaded to the repo. Export the ONNX model with the new export file, regenerate the TensorRT engine with the updated files, and use the new config_infer_primary file that corresponds to your model.

Thanks, it's working now.

SIGLOST commented 1 year ago

@marcoslucianops I've successfully tested exporting to ONNX and deploying the following variants with the DS 6.1.1 (dGPU) container: yolov5 / yolov8 / yolonas

Thanks a lot!