Hey Guys!
I ran the MXNet-to-ONNX conversion script and then ran predictions with TensorRT.
My work involves video processing, so I measure the processing time for each frame. Without ONNX, running pure MXNet, I got 7~9 ms per frame. Predicting with TensorRT, I got the same result!
I'm running on a laptop with a GTX 1060, TensorRT 5.1.5, and CUDA 10.1 (the same setup reported in this repo).
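One thing worth checking when two backends report identical latency is the timing loop itself: GPU inference calls are usually asynchronous, so a timer that doesn't synchronize before stopping mostly measures launch overhead. Here is a minimal sketch of a per-frame timing loop; the `predict` callable is a hypothetical stand-in for the real inference call (e.g. an MXNet forward pass or a TensorRT execution context), not something from this repo:

```python
import time

def time_per_frame(predict, frames):
    """Return average latency per frame in milliseconds.

    `predict` is a hypothetical callable wrapping one inference
    call; `frames` is a list of model inputs.
    """
    # Warm up once so one-off initialization cost is excluded.
    predict(frames[0])

    start = time.perf_counter()
    for frame in frames:
        predict(frame)
        # NOTE (assumption): for GPU backends, force the async call
        # to finish before the clock stops, e.g. mx.nd.waitall() for
        # MXNet or stream.synchronize() for TensorRT. Without this,
        # the loop only times kernel launches, and different
        # backends can look misleadingly similar.
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return elapsed_ms / len(frames)

# Usage with a dummy "model" standing in for real inference:
avg_ms = time_per_frame(lambda x: sum(x), [[1, 2, 3]] * 100)
```

If both measurements already synchronize properly, the next suspect would be that the model is small enough to be launch-bound rather than compute-bound on a GTX 1060, in which case TensorRT's kernel optimizations have little to gain.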
Did someone experience something like that?