Open tasyoooo opened 1 month ago
Hello there! 👋
Great to hear you're leveraging YOLOv5 with TensorRT for improved performance on your RTX 3050 GPU! With TensorRT you can significantly speed up inference by optimizing the network for your specific hardware.
Here's a general overview of the steps involved:
1. **Export to ONNX**: use the `export.py` script in the YOLOv5 repository:
```
python export.py --weights yolov5s.pt --img 640 --batch 1 --device 0 --opset 12 --include onnx
```
2. **Build the TensorRT engine**: use the `trtexec` command or the TensorRT Python API to convert the ONNX model into an engine optimized for your GPU:
```
trtexec --onnx=yolov5s.onnx --saveEngine=yolov5s.engine
```
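Once the `.engine` file exists, YOLOv5's `detect.py` can consume it directly. A webcam run on the GPU might look like this (a sketch assuming a standard YOLOv5 repository checkout and that webcam index 0 is your device):

```shell
# Run the TensorRT engine on webcam 0 using the first CUDA device
python detect.py --weights yolov5s.engine --source 0 --device 0
```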
While the above steps provide a high-level overview, specific implementation details can vary. For further guidance, checking documentation and examples specific to TensorRT and YOLOv5 is recommended. Feel free to explore our official documentation for more insights: https://docs.ultralytics.com/yolov5/
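If you drive the engine through the TensorRT Python API rather than the YOLOv5 scripts, each webcam frame must first be letterboxed to the engine's fixed input shape (640×640 for the export flags above). Here is a minimal NumPy-only sketch of that preprocessing step — it uses nearest-neighbour resampling for illustration, whereas a real pipeline would typically use `cv2.resize`:

```python
import numpy as np

def letterbox(frame, new_size=640, pad_value=114):
    """Resize an HxWx3 frame to new_size x new_size, preserving aspect
    ratio and padding the remainder with a constant gray value."""
    h, w = frame.shape[:2]
    scale = new_size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize via index sampling (use cv2.resize in practice)
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = frame[ys[:, None], xs[None, :]]
    # Center the resized image on a padded square canvas
    out = np.full((new_size, new_size, 3), pad_value, dtype=frame.dtype)
    top = (new_size - nh) // 2
    left = (new_size - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    # HWC uint8 -> NCHW float32 in [0, 1], matching YOLOv5 preprocessing
    return out.transpose(2, 0, 1)[None].astype(np.float32) / 255.0

# Example: a 480x640 webcam-like frame becomes a (1, 3, 640, 640) blob
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
blob = letterbox(frame)
print(blob.shape)  # (1, 3, 640, 640)
```

The resulting blob is what you would copy into the engine's input binding before executing inference.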
Wishing you success in your project! If you have any more questions, feel free to ask. 🚀
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
Question
I'm using a local machine with an RTX 3050 GPU. I would like to utilize my GPU during the detection process. I am using a webcam as the source and TensorRT as the framework.
Additional
No response