Open frandoLin opened 2 years ago
Has anyone run into the same problem? With a batch size of 1, the conversion works fine. Does that mean nanodet does not support dynamic-batch inference with TensorRT?
Whatever I try, the batch size of the exported ONNX model stays fixed at one.
You need to export the PyTorch model to ONNX with the `dynamic_axes` parameter so the batch dimension is not baked in as 1.