isarsoft / yolov4-triton-tensorrt

This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server
http://www.isarsoft.com

Customized YOLOv4 to TensorRT conversion problem #46

Closed dhirajpatnaik16297 closed 2 years ago

dhirajpatnaik16297 commented 3 years ago

Hi

I have a customized YOLOv4 Darknet model with 36 classes, and I followed https://github.com/Tianxiaomo/pytorch-YOLOv4 to create the engine file via the conversion chain .weights -> .onnx -> TensorRT engine. My concern is that the model file you provided to try out (the drive link) has a single output:

```
output {
  name: "detections"
  data_type: TYPE_FP32
  dims: 159201
  dims: 1
  dims: 1
}
```

but when I create the engine for my customized model, it has the following two outputs:

```
output {
  name: "boxes"
  data_type: TYPE_FP32
  dims: 22743
  dims: 1
  dims: 4
}
output {
  name: "confs"
  data_type: TYPE_FP32
  dims: 22743
  dims: 38
}
```

I am finding it difficult to make use of these two outputs. Could you have a look at it and let me know a workaround to get a single output node?
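For anyone hitting the same mismatch in the meantime, one client-side workaround (a sketch, not this repo's method) is to request both tensors and concatenate them after inference. The model name "yolov4", the input name "input", and the 608x608 input size below are assumptions based on the pytorch-YOLOv4 export and may differ in your deployment:

```python
import numpy as np
import tritonclient.http as httpclient

# Client-side merge of the two output heads into one array.
# Model name, input name, and input size are assumptions.
client = httpclient.InferenceServerClient(url="localhost:8000")

image = np.zeros((1, 3, 608, 608), dtype=np.float32)  # placeholder for a preprocessed NCHW batch
inp = httpclient.InferInput("input", list(image.shape), "FP32")
inp.set_data_from_numpy(image)

outputs = [httpclient.InferRequestedOutput("boxes"),
           httpclient.InferRequestedOutput("confs")]
result = client.infer(model_name="yolov4", inputs=[inp], outputs=outputs)

boxes = result.as_numpy("boxes")  # shape (1, 22743, 1, 4)
confs = result.as_numpy("confs")  # shape (1, 22743, 38)

# One (1, 22743, 42) tensor: 4 box coordinates followed by 38 class scores per candidate.
detections = np.concatenate([boxes.reshape(boxes.shape[0], -1, 4), confs], axis=-1)
```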

Thanks

dhirajpatnaik16297 commented 3 years ago

A quick update: I also tried following https://github.com/wang-xinyu/tensorrtx/tree/master/yolov4#excute, but when I try to create the engine file from the .wts weights it fails with `assertion scale_1 failed (core dumped)`. So kindly let me know which approach is best. Thanks

philipp-schmidt commented 2 years ago

Hey, there have been some changes in the repo since v1.3.0 on the current main branch. You have to use the converter under converter/convert.py, which will also name the layers correctly. Let me know if that works.
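After re-running the converter, a quick way to confirm that the deployed model now exposes a single output node is to query Triton's model metadata. A minimal sketch, assuming the model is served under the name "yolov4":

```python
import tritonclient.http as httpclient

# Check the output layout reported by Triton; the model name "yolov4" is an assumption.
client = httpclient.InferenceServerClient(url="localhost:8000")
meta = client.get_model_metadata("yolov4")
print([out["name"] for out in meta["outputs"]])  # expect a single "detections" entry
```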