marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

Dynamic batch size for Yolov4 #487

Open aidevmin opened 10 months ago

aidevmin commented 10 months ago

Until now, the code that converts Darknet YOLOv4 weights to an engine does not support dynamic batch size. I tried to measure inference time with a dynamic batch size using trtexec, but it still sets the input size to 1x3x508x608, which is not correct. Please add this part for flexible usage. Thanks.
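For reference, a dynamic-shape TensorRT engine can be profiled at a chosen batch size with trtexec's shape flags. A minimal sketch; the engine filename, the input tensor name `input`, and the 608x608 resolution are assumptions and must match your actual model:

```shell
# Build an engine with a dynamic batch dimension (min 1, opt/max 4).
# ONNX filename and tensor name "input" are assumptions for illustration.
trtexec --onnx=yolov4.onnx \
        --minShapes=input:1x3x608x608 \
        --optShapes=input:4x3x608x608 \
        --maxShapes=input:4x3x608x608 \
        --saveEngine=yolov4_dynamic.engine

# Benchmark an existing dynamic engine at batch size 4 instead of the default 1.
trtexec --loadEngine=yolov4_dynamic.engine --shapes=input:4x3x608x608
```

Without `--shapes`, trtexec benchmarks a dynamic engine at the minimum profile shape, which is why the reported input can appear fixed at batch 1.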

marcoslucianops commented 10 months ago

It creates the engine with a dynamic batch size, but you need to set the batch-size in the config-infer file to the OPT/MAX batch-size value to create the correct engine.
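Based on the comment above, this would mean editing the `[property]` group of the config-infer file. A sketch, assuming a target OPT/MAX batch size of 4 (the value 4 is illustrative):

```
[property]
# Set to the OPT/MAX batch-size you want the dynamic engine built for.
batch-size=4
```

DeepStream then rebuilds the engine (when no matching serialized engine is found) with this value as the optimal/maximum batch dimension of the optimization profile.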

aidevmin commented 10 months ago

@marcoslucianops Thanks. Do you mean changing this line? https://github.com/marcoslucianops/DeepStream-Yolo/blob/9bda315ee0834ca0fb2d7f6b5f34c0a69ddc24e0/config_infer_primary.txt#L10 I will try it and report back.

marcoslucianops commented 10 months ago

Yes, it needs to be the OPT/MAX batch-size.