caikw0602 closed this issue 3 years ago
You can adapt yolov3-spp.cpp to read video with OpenCV and run inference on each frame.
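For reference, a minimal Python sketch of that per-frame loop, assuming OpenCV is installed and the engine expects 608x608 input (yolov3-spp's input size in this repo); `do_inference` is a placeholder for your actual TensorRT call, and the capture/display part is wrapped in a function so nothing runs on import:

```python
import numpy as np


def to_tensor(frame_bgr):
    """Convert a BGR uint8 frame (already resized to the engine's input
    resolution) into a 1x3xHxW float32 tensor in [0, 1], matching the
    normalization this repo's C++ preprocessing applies."""
    rgb = frame_bgr[:, :, ::-1]  # BGR -> RGB
    chw = np.ascontiguousarray(rgb.transpose(2, 0, 1), dtype=np.float32) / 255.0
    return chw[np.newaxis, ...]  # add batch dimension


def run_camera_demo(source=0, input_size=608):
    """Read frames from a camera (or a video file path) and run
    inference on each one. Call this function to start the loop."""
    import cv2  # OpenCV for capture; imported here so the helper above stays pure

    cap = cv2.VideoCapture(source)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        resized = cv2.resize(frame, (input_size, input_size))
        tensor = to_tensor(resized)
        # detections = do_inference(tensor)  # hypothetical: your TensorRT call
        cv2.imshow("frame", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

Note that for correct box coordinates you would letterbox-resize (pad to preserve aspect ratio) as the C++ code does, rather than a plain resize.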
Hi, @wang-xinyu
I am pretty new to TensorRT, so my question may be silly.
I followed your steps and successfully built yolov5.engine and the yolov5 binary, and it works.
Could you tell me how to load the TensorRT model and run inference using Python 3? Thank you so much.
@jubrowon The .engine file can be loaded with the TensorRT Python API, but this repo currently uses only C++, not Python. You can try it yourself.
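A minimal sketch of what that could look like, assuming TensorRT 7/8-era Python bindings and pycuda are installed; the engine path is hypothetical, and the flat output layout (a count followed by 6 floats per detection) is an assumption based on this repo's C++ post-processing, so verify it against your build:

```python
import numpy as np


def load_engine(runtime, path):
    """Deserialize a TensorRT engine that was serialized to disk."""
    with open(path, "rb") as f:
        return runtime.deserialize_cuda_engine(f.read())


def parse_detections(output, conf_thres=0.5):
    """Parse a flat float buffer laid out as: output[0] = detection count,
    then 6 floats per detection (cx, cy, w, h, confidence, class_id).
    This layout mirrors the repo's C++ code but is an assumption here."""
    dets = []
    for i in range(int(output[0])):
        cx, cy, w, h, conf, cls = output[1 + 6 * i : 7 + 6 * i]
        if conf >= conf_thres:
            dets.append((float(cx), float(cy), float(w), float(h),
                         float(conf), int(cls)))
    return dets


def run_inference(engine_path="yolov5s.engine"):
    """Load the engine and run one inference. Call this to execute;
    it needs a CUDA-capable GPU and an engine built on the same machine."""
    import tensorrt as trt
    import pycuda.autoinit  # noqa: F401 - creates a CUDA context
    import pycuda.driver as cuda

    logger = trt.Logger(trt.Logger.WARNING)
    with trt.Runtime(logger) as runtime:
        engine = load_engine(runtime, engine_path)
        context = engine.create_execution_context()

        # one pinned host buffer and one device buffer per binding
        host_bufs, dev_bufs = [], []
        for binding in engine:
            size = trt.volume(engine.get_binding_shape(binding))
            host = cuda.pagelocked_empty(size, np.float32)
            host_bufs.append(host)
            dev_bufs.append(cuda.mem_alloc(host.nbytes))

        # host_bufs[0][:] = ...preprocessed image goes here...
        cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
        context.execute_v2([int(d) for d in dev_bufs])
        cuda.memcpy_dtoh(host_bufs[1], dev_bufs[1])
        return parse_detections(host_bufs[1])
```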
@wang-xinyu thanks for your quick reply!
Hi @jubrowon
You can use Triton Inference Server to deploy your model and use its Python SDK, which comes with many examples, to write code that runs your model in a distributed setup. It supports shared memory, so serving the model should add no overhead, and you get multi-GPU support and many other benefits.
If you already have an engine file, the steps should be very similar to my repo tutorial only with your engine file: https://github.com/isarsoft/yolov4-triton-tensorrt
Hope it helps.
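For illustration, a minimal Triton client sketch using the `tritonclient` Python package; the model name, tensor names, shapes, and server URL below are all hypothetical, so check them against your model's config.pbtxt:

```python
import numpy as np


def batch_images(tensors):
    """Stack per-image CHW float32 tensors into one NCHW batch, so a
    single Triton request can carry several frames at once."""
    return np.stack(tensors).astype(np.float32)


def run_triton_demo():
    """Send one dummy batch to a local Triton server. Call this to run;
    it assumes a server is listening on localhost:8000 with a model
    named "yolov5" exposing input "input" and output "detections"."""
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")
    batch = batch_images([np.zeros((3, 608, 608), dtype=np.float32)])

    inp = httpclient.InferInput("input", list(batch.shape), "FP32")
    inp.set_data_from_numpy(batch)
    out = httpclient.InferRequestedOutput("detections")

    result = client.infer("yolov5", inputs=[inp], outputs=[out])
    return result.as_numpy("detections")
```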
Hi @philipp-schmidt ,
Thank you so much for your help; I will check it out. Actually, I would like to deploy my customized yolov5s TensorRT model on a Jetson Nano. If you have any information about that, please let me know :)
thanks,
I will be adding a Python client script that loads the engine and does inference very soon. It should not be hard to support yolov5 as well, since the models are very similar.
@philipp-schmidt Thank you so much for your help!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Thank you for your amazing work! I have produced a yolov3-spp.engine file and can do object detection on images successfully. Now, how can I use it to detect persons from a camera feed in real time? Please give me some help. Thank you very much.