Fschoeller opened 3 years ago
What TRT version are you using? Are you able to provide the models you are benchmarking?
I'm using TensorRT 7.1.3 on the Jetson Xavier AGX. Would you like the models as ONNX files?
Yes, providing the models in ONNX form will be useful.
Are you seeing the same performance difference with the latest version of TRT?
I have two YOLOv5 models of different sizes: one has 35.9M parameters, the other 12.7M. When I convert the models to TensorRT with
trtexec --onnx=model.onnx --batch=5 --fp16
the resulting engines have roughly the same inference speed (21 fps), even though the two models should differ substantially. What am I doing wrong?
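One thing worth checking: when trtexec builds an engine from an ONNX model, the network is parsed in explicit-batch mode, and in that mode the `--batch` flag has no effect; the batch size has to come from the input shape instead. A sketch of what that might look like, assuming the ONNX input tensor is named `images` with a 3x640x640 image shape (verify the actual name and dimensions of your model, e.g. by inspecting it in Netron):

```
# Hypothetical invocation: adjust "images" and 5x3x640x640 to match
# the real input tensor name and shape of your exported model.
trtexec --onnx=model.onnx --fp16 \
        --shapes=images:5x3x640x640 \
        --saveEngine=model_b5.engine
```

If both of your engines ended up running at an effective batch size of 1, the gap between the two model sizes could easily be masked by fixed per-inference overhead on the Xavier.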