tools/YOLOv8-with-TensorRT-Nvidia-Triton-Server/ #4

Open utterances-bot opened 1 month ago

utterances-bot commented 1 month ago

YOLOv8 with TensorRT & Nvidia Triton Server | VISION HONG

Intro

https://visionhong.github.io/tools/YOLOv8-with-TensorRT-Nvidia-Triton-Server/

tofulim commented 1 month ago

Thanks for the great post!

tofulim commented 1 month ago

Also, Ultralytics supports this directly: you can simply pass the Triton Inference Server URL in place of the model name and it works.

```python
from ultralytics import YOLO

# Load the model served by Triton (HTTP endpoint, model name "yolo")
model = YOLO("http://localhost:8000/yolo", task="detect")

# Run inference on the server
results = model("path/to/image.jpg")
```
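For completeness, the server side of that one-liner is an ordinary Triton model repository. A minimal sketch of what the `yolo` endpoint above would point to (the model name comes from the URL; the file layout follows Triton's conventions, and the config values are assumptions, not taken from the post):

```text
# Triton model repository layout (sketch)
model_repository/
└── yolo/
    ├── 1/
    │   └── model.plan        # serialized TensorRT engine
    └── config.pbtxt

# config.pbtxt (minimal, assumed values)
name: "yolo"
platform: "tensorrt_plan"
max_batch_size: 0
```

The `model.plan` file is just the TensorRT engine (e.g. produced by `yolo export format=engine`) renamed to Triton's default filename for the `tensorrt_plan` backend.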