Closed berkantay closed 3 years ago
@berkantay can you help clarify a few things?
- does `trtexec` give similar numbers?
- what `input_data` are you passing in?
I face the same problem: the input is a tensor from another layer. How can I reshape and feed it without detaching to the CPU?
Hello @berkantay, I face the same issue: after the model is exported to ONNX for use in TensorRT, the engine requires numpy as input, not a tensor. The input to my model comes from the output of a previous layer as a GPU tensor, which I have to transfer from GPU to CPU and cast to numpy for `engine.run(input_data)`. Moving from GPU to CPU takes 40 ms, which is too slow. Did you solve your problem? May I ask how?
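One way to avoid the GPU-to-CPU copy is to skip numpy entirely and hand TensorRT the raw device pointers of the torch tensors via `execute_async_v2`. The sketch below assumes TensorRT's standard Python API and a pre-built engine file; the file name, tensor shapes, and binding order are illustrative and must match your actual engine:

```python
# Sketch: feeding a GPU-resident torch.Tensor to a TensorRT engine without a
# CPU round trip, by passing device pointers as bindings.
# Assumes a serialized engine at "yolov5.engine"; shapes are illustrative.
import tensorrt as trt
import torch

logger = trt.Logger(trt.Logger.WARNING)
with open("yolov5.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Input already on the GPU, e.g. the output of a previous torch layer.
input_tensor = torch.rand(1, 3, 640, 640, device="cuda", dtype=torch.float32)
# Allocate the output on the GPU too; shape must match the engine's output.
output_tensor = torch.empty(1, 25200, 85, device="cuda", dtype=torch.float32)

# Bindings are plain device addresses: torch's data_ptr() supplies them
# directly, so nothing is copied to the CPU or converted to numpy.
bindings = [int(input_tensor.data_ptr()), int(output_tensor.data_ptr())]
stream = torch.cuda.current_stream()
context.execute_async_v2(bindings=bindings, stream_handle=stream.cuda_stream)
stream.synchronize()
# output_tensor now holds the detections, still on the GPU.
```

Since both tensors stay resident on the device, the 40 ms transfer disappears; only the final post-processed results (if any) need to come back to the CPU.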
I did not actually solve it; instead I used the TensorRT repository recommended on the official yolov5 GitHub page.
Hello, I have trained my custom object detection weights on yolov5 and exported them to `.onnx` format. Now I want to use a TensorRT engine for speed, but when I feed my model to the engine it takes numpy as input, which means the frames are stored on the CPU. How should I pass my images to the engine in torch.Tensor format? Best regards.
Before TensorRT: 12.5 FPS
After TensorRT: 6 FPS