TensorRT 8. Supports Yolov5n/s/m/l/x and darknet -> TensorRT conversion. Yolov4 and Yolov3 use raw darknet *.weights and *.cfg files. If the wrapper is useful to you, please star it.
MIT License · 1.19k stars · 316 forks
Inference gives no speedup when batch size is greater than 1 #99
It's strange: when I set the batch size to more than 1, such as 7, the inference time increases seven-fold. The batch seems to be executed serially. The model is Yolov3.