NVIDIA-AI-IOT / yolo_deepstream

YOLO model QAT and deployment with DeepStream & TensorRT

Deepstream-app with YOLOv4 ONNX model + BatchedNMSPlugin #14

Closed · frenky-strasak closed this 3 years ago

frenky-strasak commented 3 years ago

Hi, I successfully created an engine file from YOLOv4 + BatchedNMSPlugin according to the instructions in this repository. The engine file works fine (I can successfully run the ../bin/yolov4 --demo command).

Now I want to deploy this engine file to deepstream-app. To do that, I need a parse function for the config_infer_primary.txt file. The default parse function for YOLOv4 in this repository includes NMS, but I do not need NMS at post-processing time: the BatchedNMSPlugin is part of the engine file, so NMS is already done and the output should contain only the final bboxes.

I tried to rewrite the YOLOv4 parse function for my case, but without success; my rough attempt is sketched below.
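What I have in mind is something like the following. This is only a sketch: I am assuming the plugin's conventional output tensor names (num_detections, nmsed_boxes, nmsed_scores, nmsed_classes) and normalized [x1, y1, x2, y2] box coordinates, both of which depend on how the ONNX graph was exported, and the function name NvDsInferParseCustomBatchedNMS is my own:

```cpp
/* Sketch of a custom DeepStream bbox parser for an engine whose NMS is
 * done by BatchedNMSPlugin. Output tensor names and the normalized
 * [x1, y1, x2, y2] box layout are assumptions; adjust to your graph. */
#include <cstring>
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomBatchedNMS(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    /* Locate the four BatchedNMSPlugin outputs by name. */
    const NvDsInferLayerInfo *numDet = nullptr, *boxes = nullptr,
                             *scores = nullptr, *classes = nullptr;
    for (auto const &l : outputLayersInfo) {
        if (!strcmp(l.layerName, "num_detections")) numDet = &l;
        else if (!strcmp(l.layerName, "nmsed_boxes")) boxes = &l;
        else if (!strcmp(l.layerName, "nmsed_scores")) scores = &l;
        else if (!strcmp(l.layerName, "nmsed_classes")) classes = &l;
    }
    if (!numDet || !boxes || !scores || !classes)
        return false;

    /* nvinfer hands the parser per-frame buffers, so index from 0. */
    int n = *reinterpret_cast<const int *>(numDet->buffer);
    const float *b = reinterpret_cast<const float *>(boxes->buffer);
    const float *s = reinterpret_cast<const float *>(scores->buffer);
    const float *c = reinterpret_cast<const float *>(classes->buffer);

    for (int i = 0; i < n; ++i) {
        NvDsInferObjectDetectionInfo obj{};
        obj.classId = static_cast<unsigned int>(c[i]);
        obj.detectionConfidence = s[i];
        /* Boxes assumed normalized corners; scale to network resolution. */
        obj.left   = b[i * 4 + 0] * networkInfo.width;
        obj.top    = b[i * 4 + 1] * networkInfo.height;
        obj.width  = (b[i * 4 + 2] - b[i * 4 + 0]) * networkInfo.width;
        obj.height = (b[i * 4 + 3] - b[i * 4 + 1]) * networkInfo.height;
        objectList.push_back(obj);
    }
    /* Thresholding/NMS already happened inside the engine, so
     * detectionParams is not needed here. */
    (void) detectionParams;
    return true;
}

/* Compile-time check that the signature matches what nvinfer expects. */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomBatchedNMS);
```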

Is there any example of a parse function for YOLOv4 + BatchedNMSPlugin? If not, where should I start? Is there any information on how to write one's own parse function for engine files?

frenky-strasak commented 3 years ago

OK, I found an example in /opt/nvidia/deepstream/deepstream-5.1/sources/libs/nvdsinfer_customparser.
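For anyone else who lands here: once a custom parser like the one sketched above is compiled into a shared library, it is wired into deepstream-app through the nvinfer config keys parse-bbox-func-name and custom-lib-path. A rough config_infer_primary.txt fragment (the engine path, library path, and class count are placeholders for your own setup):

```ini
[property]
# Engine built with BatchedNMSPlugin baked in (placeholder path)
model-engine-file=yolov4_batchednms.engine
# 0 = detector
network-type=0
num-detected-classes=80
# Custom bbox parser exported by the shared library below
parse-bbox-func-name=NvDsInferParseCustomBatchedNMS
custom-lib-path=/path/to/libnvdsinfer_custom_batchednms.so
```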