NVIDIA-AI-IOT / yolo_deepstream

yolo model qat and deploy with deepstream&tensorrt
Apache License 2.0
534 stars · 135 forks

Why is batch-size=1 recommended in the config files? #9

Open vilmara opened 3 years ago

vilmara commented 3 years ago

Hi all, I need to deploy the model with dynamic batching on DS-Triton, but the YOLOv4 example in DeepStream says: "Following properties are always recommended: # batch-size (default=1)".

I ran a test with YOLOv3 comparing BS=8 against BS=1 (TensorRT engine, INT8, DeepStream 5.1), and BS=8 showed poor performance, roughly 0.24x the BS=1 throughput:

| Batch size | Throughput FPS, INT8, avg in parentheses |
|-----------|------------------------------------------|
| BS=1      | PERF: 246.29 (245.98)                    |
| BS=8      | PERF: 60.31 (60.63)                      |

What do you recommend for working with BS>1?
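For reference, this is roughly the setup I understand is needed for BS>1: the TensorRT engine must be built with a dynamic-batch optimization profile covering the batch size that nvinfer will request. The file names, input tensor name (`input`), and shapes below are placeholders for my model, not a verified configuration:

```shell
# Build an INT8 engine with a dynamic-batch optimization profile
# ("input" and the 3x416x416 shape are placeholders for the actual
#  ONNX input tensor name and resolution of the model)
trtexec --onnx=yolov3.onnx \
        --minShapes=input:1x3x416x416 \
        --optShapes=input:8x3x416x416 \
        --maxShapes=input:8x3x416x416 \
        --int8 \
        --saveEngine=yolov3_bs8.engine

# Then point the nvinfer config at that engine and raise batch-size,
# e.g. in the [property] group of the DeepStream config:
#
#   [property]
#   batch-size=8
#   model-engine-file=yolov3_bs8.engine
```

If the engine was serialized with a max (or implicit) batch of 1, setting batch-size=8 in the config forces a rebuild or falls back to running smaller batches, which could explain the throughput drop I'm seeing.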