enazoe / yolo-tensorrt

TensorRT 8. Supports YOLOv5 n/s/m/l/x, darknet -> TensorRT. YOLOv4 and YOLOv3 use raw darknet *.weights and *.cfg files. If the wrapper is useful to you, please star it.
MIT License
1.19k stars · 316 forks

Inference does not speed up when batch size is not 1 #99

Open Egozjuer opened 3 years ago

Egozjuer commented 3 years ago

It's strange: when I set the batch size to more than 1, such as 7, the inference time increases seven times. It seems to be executed serially. The model is YOLOv3.

enazoe commented 3 years ago

Because the GPU's compute capacity is already saturated at batch size 1, larger batches cannot run in parallel, so latency grows roughly linearly with batch size.
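One quick way to check whether this is what's happening is to time the workload at several batch sizes and compare throughput. If the device is already saturated at batch 1, throughput (samples/sec) stays roughly flat as the batch grows; if batching helps, throughput rises. The sketch below is purely illustrative: it uses a compute-bound dummy "model" (a matmul per sample) rather than the repo's actual TensorRT engine, and the function name `time_batched_workload` is an assumption, not part of this project.

```python
import time
import numpy as np

def time_batched_workload(batch_sizes, dim=256, repeats=5):
    """Time a compute-bound dummy 'model' at several batch sizes.

    The matmul stands in for one forward pass. On a saturated device,
    latency scales ~linearly with batch size, so throughput is flat.
    """
    # Hypothetical stand-in for model weights; not the repo's engine.
    weights = np.random.rand(dim, dim).astype(np.float32)
    results = {}
    for b in batch_sizes:
        x = np.random.rand(b, dim, dim).astype(np.float32)
        start = time.perf_counter()
        for _ in range(repeats):
            _ = x @ weights  # one "inference" over the whole batch
        # Average latency per batch over the repeats
        results[b] = (time.perf_counter() - start) / repeats
    return results

latencies = time_batched_workload([1, 2, 4])
for b, t in latencies.items():
    print(f"batch {b}: {t * 1e3:.2f} ms/batch, {b / t:.0f} samples/s")
```

The same comparison on the real engine (timing `doInference` or the equivalent call at different batch sizes) would show whether the 7x slowdown reported above is saturation or genuinely serial execution per image.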