isarsoft / yolov4-triton-tensorrt

This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server
http://www.isarsoft.com

tritonclient InferenceServerException #47

Closed PhanDuc closed 3 years ago

PhanDuc commented 3 years ago

Hi @philipp-schmidt, I successfully converted a YOLOv5 model to a TensorRT engine using this repo: https://github.com/wang-xinyu/tensorrtx/tree/master/yolov5

Following your guide on how to run the Triton inference client, I got this error: tritonclient.utils.InferenceServerException: [StatusCode.INVALID_ARGUMENT] unexpected inference output 'detections' for model 'yolov5'

Your grpcclient.InferRequestedOutput call takes the argument 'detections', but YOLOv5 needs a different output name.

How can I find the correct argument for grpcclient.InferRequestedOutput?

Thank you so much!

philipp-schmidt commented 3 years ago

You need to find the name of the network's output tensor. It is set when the network is created. Check the Triton server log at startup; it usually prints information about the loaded models.
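Besides the server log, a running Triton server can also be asked for a model's metadata directly; the `outputs` field of the response lists exactly the names that `InferRequestedOutput` accepts. Below is a minimal sketch (not from this thread) using Triton's HTTP metadata endpoint `GET /v2/models/<model-name>`; the URL and model name are assumptions to adjust for your deployment.

```python
# Sketch: discover a Triton model's output tensor names via the HTTP
# metadata endpoint. URL and model name below are illustrative assumptions.
import json
from urllib.request import urlopen


def output_names(metadata_json: str) -> list:
    """Extract the output tensor names from a Triton model-metadata JSON document."""
    return [out["name"] for out in json.loads(metadata_json).get("outputs", [])]


def fetch_output_names(base_url: str, model: str) -> list:
    """Query a running Triton server for a model's output names."""
    with urlopen(f"{base_url}/v2/models/{model}") as resp:
        return output_names(resp.read().decode())


# Example (requires a running Triton server):
# fetch_output_names("http://localhost:8000", "yolov5")
```

Whatever name this returns is the one to pass to `grpcclient.InferRequestedOutput` in the client script.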

PhanDuc commented 3 years ago

Thanks @philipp-schmidt, I should pass prob to get the result from the Triton server. Thank you!
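For reference, the output name comes from the model's config.pbtxt in the Triton model repository. A hypothetical sketch of the output section for an engine whose output tensor is named prob (the data type and dims here are illustrative assumptions, not taken from the thread):

```
output [
  {
    name: "prob"
    data_type: TYPE_FP32
    dims: [ 6001, 1, 1 ]
  }
]
```

The `name` field here must match the string passed to `grpcclient.InferRequestedOutput` on the client side.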