Closed: AnhPC03 closed this issue 2 months ago
Hey @AnhPC03, sorry for the late reply. Did you try `--minShapes=images:1x3x672x672 --maxShapes=images:1x3x672x672 --optShapes=images:1x3x672x672 --shapes=images:1x3x672x672` to specify the input shape range?
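For reference, a full `trtexec` invocation using those shape flags might look like the sketch below. The file names `model.onnx` and `model_gpu.engine` are placeholders (your paths will differ), and the command is only assembled and printed here, not executed:

```shell
# Hypothetical file names; sketch only, the command is printed rather than run.
ONNX=model.onnx
ENGINE=model_gpu.engine

# All three shape profile flags are pinned to the same 1x3x672x672 value,
# so the engine is effectively built for a single fixed input shape.
CMD="trtexec --onnx=${ONNX} --saveEngine=${ENGINE} \
--minShapes=images:1x3x672x672 \
--optShapes=images:1x3x672x672 \
--maxShapes=images:1x3x672x672"

echo "$CMD"
```

Pinning min/opt/max to the same value is the usual way to force a fixed shape when the ONNX model was exported with dynamic input dimensions.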
Closing since there has been no activity for several months, thanks!
Hello, I was able to run your repo, and when I printed the input and output tensor sizes, I got the following values:
But the converted ONNX model has these values; I saw the same values in Netron:
When I converted the ONNX model to a .engine for GPU-only inference, the engine had the same tensor sizes as Netron shows, but I couldn't run inference with this .engine on the GPU.
How can I build an engine for GPU-only inference that has the same tensor sizes as your DLA loadable? Thank you very much.
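One way to check what I/O tensor shapes actually ended up in a built engine (to compare against what Netron reports for the ONNX model) is to reload it with `trtexec` in verbose mode, which logs the engine's bindings. A sketch, with `model_gpu.engine` as a placeholder name; again the command is only printed, not executed:

```shell
# Placeholder engine path; sketch only, the command is printed rather than run.
ENGINE=model_gpu.engine

# --loadEngine deserializes an already-built engine instead of rebuilding it;
# --verbose makes trtexec log details such as the I/O tensor names and shapes.
CMD="trtexec --loadEngine=${ENGINE} --verbose"

echo "$CMD"
```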