marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

Ultralytics inference vs DeepStream: same model does not yield same results #531

Open mgabell opened 2 months ago

mgabell commented 2 months ago

Hi,

I have a YOLOv8 model that I converted to ONNX and run on a Jetson AGX Orin using DeepStream and your library. I get completely different results when I run the same source with the same model through Ultralytics versus through GStreamer/DeepStream.

Why can this be? Perhaps the ONNX conversion? If so, can I run the model without conversion, using cfg and weights/wts files as described at https://wiki.seeedstudio.com/YOLOv8-DeepStream-TRT-Jetson/
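One frequent source of divergence (an assumption on my part, not something confirmed in this issue) is preprocessing rather than the ONNX conversion itself: Ultralytics letterboxes the frame (aspect ratio preserved, symmetric padding), while a DeepStream nvinfer config stretches the frame to the network input size unless `maintain-aspect-ratio=1` and `symmetric-padding=1` are set. A minimal sketch of the letterbox geometry Ultralytics applies:

```python
# Sketch (assumption, not from the issue): the scale and padding an
# Ultralytics-style letterbox applies before inference. If DeepStream
# instead stretches the frame, boxes and scores can differ noticeably,
# especially for small objects.

def letterbox_params(src_w, src_h, dst=640):
    """Return (scale, pad_x, pad_y) for an aspect-preserving resize
    into a dst x dst input with symmetric padding."""
    scale = min(dst / src_w, dst / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst - new_w) / 2
    pad_y = (dst - new_h) / 2
    return scale, pad_x, pad_y

scale, pad_x, pad_y = letterbox_params(1920, 1080)
# a 1920x1080 frame scales by 1/3 to 640x360, with 140 px of padding
# above and below; a stretch-resize would instead distort the frame
```

Comparing a few frames preprocessed both ways is a quick way to rule this in or out before blaming the conversion.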

NOTE: You can use your custom model, but it is important to keep the YOLO model reference (yolov8_) in your cfg and weights/wts filenames to generate the engine correctly.

    Step 5. Generate the cfg, wts and labels.txt (if available) files (example for YOLOv8s)

python3 gen_wts_yoloV8.py -w yolov8s.pt

The number of objects found is MUCH lower than with the Ultralytics approach.
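When DeepStream finds far fewer objects than Ultralytics with the same weights, the nvinfer config thresholds are a common suspect (again an assumption, since the config is not shown in this issue). A sketch of the keys worth checking in the `config_infer_primary` file, with the kinds of values the DeepStream-Yolo samples use:

```ini
# Hedged sketch of relevant nvinfer settings; exact file contents
# depend on your setup.
[property]
# 1/255 - must match Ultralytics' 0-1 input normalization
net-scale-factor=0.0039215697906911373
# letterbox like Ultralytics instead of stretching
maintain-aspect-ratio=1
symmetric-padding=1

[class-attrs-all]
# detections below this confidence are dropped before clustering;
# compare against the conf= you use (or default) in `yolo predict`
pre-cluster-threshold=0.25
nms-iou-threshold=0.45
topk=300
```

A `pre-cluster-threshold` higher than the Ultralytics `conf` value, or a missing `maintain-aspect-ratio`, can each account for a large drop in detection counts on its own.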

This is how I run Ultralytics:

    yolo predict model=/mnt/M2Disk/Assets/YoloV8_Model/weights/best.pt source='/mnt/M2Disk/Assets/TestRun/jpg/Images' imgsz=640 save_txt=true save=false save_conf=true

For DeepStream I combine the deepstream-test1 and deepstream-test3 Python apps from NVIDIA to allow multiple video sources, but I only run one for evaluation.
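To make the gap concrete, the `save_txt=true` output above can be tallied per image and compared against what the DeepStream pipeline reports for the same frames. A hypothetical helper (not part of either pipeline; the directory layout is the standard Ultralytics `labels/` output, one `.txt` per image):

```python
# Hypothetical helper: count detections per image from Ultralytics'
# save_txt output (one "class x y w h [conf]" line per detection).
# A uniform drop on the DeepStream side usually points at thresholds;
# a drop concentrated on small objects points at preprocessing.
from pathlib import Path

def count_labels(labels_dir):
    """Map image stem -> number of detection lines in its YOLO .txt file."""
    counts = {}
    for txt in Path(labels_dir).glob("*.txt"):
        lines = txt.read_text().splitlines()
        counts[txt.stem] = sum(1 for line in lines if line.strip())
    return counts
```

Logging the per-frame object count from the DeepStream probe callback and diffing the two dictionaries would show whether the loss is uniform or class/size dependent.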