NVIDIA-AI-IOT / yolo_deepstream

yolo model qat and deploy with deepstream&tensorrt
Apache License 2.0

Cloud Message (Kafka) Configurations for YOLO models #57

Open tunahanertekin opened 4 months ago

tunahanertekin commented 4 months ago

Hi,

I have managed to run deepstream-test5 and YOLOv7 models with DeepStream separately. In deepstream-test5, I configured a sink component that publishes the detected objects to Kafka, and it works, but the same configuration didn't work with the YOLOv7 model. Here is the sink configuration I use:

[sink1]
enable=1
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
msg-conv-payload-type=1
msg-conv-msg2p-new-api=0
msg-conv-frame-interval=30
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
msg-broker-conn-str=172.16.44.101;32438;my-topic
topic=my-topic
msg-broker-config=/opt/nvidia/deepstream/deepstream/sources/libs/kafka_protocol_adaptor/cfg_kafka.txt

My aim is to generate Kafka messages when the YOLOv7 model detects a specific object. What are the requirements for this purpose? Should I write custom code using the SDK to achieve this, or is it configurable via the pipeline/inference configurations?
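For what it's worth, my understanding from the deepstream-test4 sample is that per-object messages require a pad probe that attaches NvDsEventMsgMeta to the frame upstream of nvmsgconv, which then serializes it and hands it to nvmsgbroker for Kafka. A rough sketch of what I have in mind is below; TARGET_CLASS_ID is a placeholder for the YOLOv7 class index I want to report, and a real probe would also set the base_meta copy/release functions as the sample does:

```c
/* Sketch of a per-object message probe, adapted from the deepstream-test4
 * sample. Assumptions: TARGET_CLASS_ID is the YOLOv7 class to report, and
 * the probe is attached on a pad upstream of nvmsgconv. */
#include <glib.h>
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "nvdsmeta_schema.h"

#define TARGET_CLASS_ID 0  /* placeholder: class index to report */

static GstPadProbeReturn
msg_meta_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj;
         l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      if (obj_meta->class_id != TARGET_CLASS_ID)
        continue;

      /* Build an event message for this detection; nvmsgconv turns it
       * into a payload and nvmsgbroker publishes it to Kafka. */
      NvDsEventMsgMeta *msg_meta = g_malloc0 (sizeof (NvDsEventMsgMeta));
      msg_meta->bbox.left = obj_meta->rect_params.left;
      msg_meta->bbox.top = obj_meta->rect_params.top;
      msg_meta->bbox.width = obj_meta->rect_params.width;
      msg_meta->bbox.height = obj_meta->rect_params.height;
      msg_meta->frameId = frame_meta->frame_num;
      msg_meta->confidence = obj_meta->confidence;
      msg_meta->objClassId = obj_meta->class_id;
      msg_meta->type = NVDS_EVENT_ENTRY;

      /* A complete probe would also assign base_meta.copy_func and
       * base_meta.release_func, as deepstream-test4 does. */
      NvDsUserMeta *user_meta = nvds_acquire_user_meta_from_pool (batch_meta);
      user_meta->user_meta_data = (void *) msg_meta;
      user_meta->base_meta.meta_type = NVDS_EVENT_MSG_META;
      nvds_add_user_meta_to_frame (frame_meta, user_meta);
    }
  }
  return GST_PAD_PROBE_OK;
}
```

Is something along these lines the expected approach, or can the same filtering be achieved purely through the deepstream-test5-style configuration?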

I am not sure if this is the right repository for this issue, but any help is appreciated! Thanks.