mrpositron opened this issue 2 years ago
"When I increase batch size, the inference time on TensorRT does not change."

-> With ONNX, TensorRT uses explicit batch, which means that if you want to use a dynamic batch size, the batch dimension in your ONNX model must be unknown, and you need to set an optimization profile for the inputs. Before calling Forward(), you need to set the profile for enqueue.
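For reference, here is a minimal sketch of what an optimization profile can look like with the TensorRT Python API (the input name "input", the 224x224x3 image shape, the 1/8/16 batch range, and the model.onnx path are assumptions for illustration, not details from this issue):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)
with open("model.onnx", "rb") as f:
    assert parser.parse(f.read()), parser.get_error(0)

config = builder.create_builder_config()

# The batch dimension must be dynamic (-1) in the ONNX model; the profile
# declares the min/opt/max shapes TensorRT should optimize for.
profile = builder.create_optimization_profile()
profile.set_shape("input", (1, 224, 224, 3), (8, 224, 224, 3), (16, 224, 224, 3))
config.add_optimization_profile(profile)

serialized_engine = builder.build_serialized_network(network, config)

# At inference time, pick the concrete batch size before calling enqueue/execute:
runtime = trt.Runtime(TRT_LOGGER)
engine = runtime.deserialize_cuda_engine(serialized_engine)
context = engine.create_execution_context()
context.set_binding_shape(0, (16, 224, 224, 3))
```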
Thanks for your reply!
But I am not using a dynamic batch size. I specify the batch size when I convert the model.
Setting the batch size won't work for an ONNX model; it only applies to Caffe and UFF models.
When I increase the batch size, the per-sample inference time on TensorRT does not change. Basically, if inference on a batch of size 8 takes 20ms, inference on a batch of size 16 takes 40ms. I am not sure why this is happening...
I have converted an EfficientNet backbone from TF to ONNX, and then to TensorRT. In TF I specified the batch size as follows:
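(The snippet itself did not survive in this thread. A minimal sketch of how a fixed batch size can be baked into a Keras EfficientNet backbone; the batch size 8, the 224x224x3 input shape, the EfficientNetB0 variant, and the SavedModel path are assumptions for illustration.)

```python
import tensorflow as tf

BATCH_SIZE = 8  # assumed value, not taken from the issue

# EfficientNet backbone with a static batch dimension in the input signature
backbone = tf.keras.applications.EfficientNetB0(include_top=False, weights="imagenet")
inputs = tf.keras.Input(shape=(224, 224, 3), batch_size=BATCH_SIZE)
outputs = backbone(inputs)
model = tf.keras.Model(inputs, outputs)

model.save("efficientnet_backbone")  # SavedModel reused for the ONNX export below
```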
Converting TensorFlow model to ONNX
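(Again, the original snippet is missing. A minimal sketch with tf2onnx, assuming the Keras model and shapes from the previous block and an input name "input".)

```python
import tensorflow as tf
import tf2onnx

# Input signature with the static batch size baked in
spec = (tf.TensorSpec((8, 224, 224, 3), tf.float32, name="input"),)

# Converts the Keras model and writes model.onnx to disk
onnx_model, _ = tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=13, output_path="model.onnx"
)
```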
Converting ONNX model to TensorRT and saving it.
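(The conversion script is also missing here. A minimal static-batch sketch that parses model.onnx and serializes the engine to a hypothetical model.trt; the FP16 flag is an assumption, not something stated in the issue.)

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)
with open("model.onnx", "rb") as f:
    assert parser.parse(f.read()), parser.get_error(0)

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # assumed; drop if the GPU lacks fast FP16

# With a static batch size in the ONNX model, no optimization profile is needed
serialized_engine = builder.build_serialized_network(network, config)
with open("model.trt", "wb") as f:
    f.write(serialized_engine)
```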
Here is the inference code for TensorRT.
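(The inference snippet was also not captured. A minimal sketch with pycuda, assuming the model.trt engine from above, a batch of 8 images at 224x224x3, and a made-up output shape.)

```python
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates the CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("model.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

batch = np.random.rand(8, 224, 224, 3).astype(np.float32)
output = np.empty((8, 7, 7, 1280), dtype=np.float32)  # assumed output shape

d_input = cuda.mem_alloc(batch.nbytes)
d_output = cuda.mem_alloc(output.nbytes)
stream = cuda.Stream()

# Host -> device, run the engine, device -> host
cuda.memcpy_htod_async(d_input, batch, stream)
context.execute_async_v2(bindings=[int(d_input), int(d_output)], stream_handle=stream.handle)
cuda.memcpy_dtoh_async(output, d_output, stream)
stream.synchronize()
```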
Everything works properly. The problem is the speed. Basically, if I double the batch size, the inference time also doubles, so batching gives no improvement in total inference time.