isarsoft / yolov4-triton-tensorrt

This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server
http://www.isarsoft.com

Unexpected inference output 'detections' for model 'yolov4' #56

Closed AlgirdasKartavicius closed 2 years ago

AlgirdasKartavicius commented 2 years ago

After model conversion to TensorRT I get this auto generated config file:

```
platform: "tensorrt_plan"
max_batch_size: 1
input {
  name: "input"
  data_type: TYPE_FP32
  dims: 3
  dims: 608
  dims: 608
}
output {
  name: "boxes"
  data_type: TYPE_FP32
  dims: 10647
  dims: 1
  dims: 4
}
output {
  name: "confs"
  data_type: TYPE_FP32
  dims: 10647
  dims: 13
}
```

Expected config file is like this:

```
name: "yolov4"
platform: "tensorrt_plan"
max_batch_size: 1
input [
  {
    name: "input"
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 608, 608 ]
  }
]
output [
  {
    name: "detections"
    data_type: TYPE_FP32
    dims: [ 159201, 1, 1 ]
  }
]
```

How can I convert my output to the expected format?

AlgirdasKartavicius commented 2 years ago

I changed the gRPC client to request both outputs (`boxes` and `confs`) instead of a single `detections` output. It works now. Closing this issue.
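
For readers hitting the same error: a minimal sketch of what the two-output gRPC request can look like, using the `tritonclient` Python package. The model/tensor names (`yolov4`, `input`, `boxes`, `confs`) and shapes come from the auto-generated config above; the server URL `localhost:8001` is an assumption, and this is an illustrative sketch rather than the exact client from this thread.

```python
# Sketch: query the "yolov4" TensorRT plan via Triton's gRPC endpoint,
# requesting both outputs from the auto-generated config ("boxes", "confs").
import numpy as np

MODEL_NAME = "yolov4"               # model name from the config above
INPUT_NAME = "input"                # FP32, [3, 608, 608]
OUTPUT_NAMES = ["boxes", "confs"]   # request BOTH outputs, not "detections"

def infer(image_chw: np.ndarray):
    """Run one inference. image_chw: float32 array of shape (3, 608, 608)."""
    # Imported lazily so the rest of the module loads without tritonclient.
    # Assumption: tritonclient[grpc] is installed and a Triton server is
    # serving the model on localhost:8001.
    import tritonclient.grpc as grpcclient

    client = grpcclient.InferenceServerClient(url="localhost:8001")

    # Batched input: max_batch_size is 1, so prepend a batch dimension.
    inp = grpcclient.InferInput(INPUT_NAME, [1, 3, 608, 608], "FP32")
    inp.set_data_from_numpy(image_chw[np.newaxis].astype(np.float32))

    # One InferRequestedOutput per output tensor in the config.
    outputs = [grpcclient.InferRequestedOutput(name) for name in OUTPUT_NAMES]

    result = client.infer(model_name=MODEL_NAME, inputs=[inp], outputs=outputs)
    # boxes: (1, 10647, 1, 4), confs: (1, 10647, 13) per the generated config
    return result.as_numpy("boxes"), result.as_numpy("confs")
```

Requesting only the outputs that the engine actually exposes avoids the "Unexpected inference output 'detections'" error; post-processing (NMS over `boxes` filtered by `confs`) then happens client-side.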