marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

Issue with the implicit batch-size #412

Open Samjith888 opened 11 months ago

Samjith888 commented 11 months ago

I have created the ONNX model file with an implicit batch size:

python3 export_yoloV8.py -w yolov8s.pt --simplify

This uses the default batch-size=1. The ONNX file is then used to build the engine file, and the deepstream-app runs with 1 stream. However, whenever I add more streams, I have to re-create the ONNX file with a specific batch size (batch-size = number of streams).

ONNX model generation command for two streams:

python3 export_yoloV8.py -w yolov8s.pt --simplify --batch 2

Dynamic batch size is not supported with DeepStream 6.0. Please add support for this.
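
For reference, the static-batch workflow I use for N streams looks roughly like this (the config file name below is only an example; the point is that the nvinfer batch-size has to match the batch size baked into the exported ONNX):

python3 export_yoloV8.py -w yolov8s.pt --simplify --batch 2

# config_infer_primary_yoloV8.txt (example name)
[property]
onnx-file=yolov8s.onnx
batch-size=2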

marcoslucianops commented 11 months ago

What is the error you get in the deepstream-app when you export the ONNX model with the --dynamic arg?
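
For comparison, a dynamic-batch export would be run roughly as below (a sketch, assuming --dynamic can be combined with --simplify as in the commands above; with a dynamic ONNX the batch size comes from the nvinfer config rather than being fixed at export time):

python3 export_yoloV8.py -w yolov8s.pt --simplify --dynamic

# Example nvinfer settings; the engine typically needs to be rebuilt
# when batch-size changes, since it is baked in at engine build time.
[property]
onnx-file=yolov8s.onnx
batch-size=2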