marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License
1.44k stars 354 forks

Extracting Metadata #564

Closed jvd-monteiro closed 1 week ago

jvd-monteiro commented 2 weeks ago

Hello!

I've successfully run my YOLOv8 model on DeepStream following the recommendations of this repository. Nevertheless, I'm now having trouble understanding how to extract metadata (https://github.com/marcoslucianops/DeepStream-Yolo?tab=readme-ov-file#extract-metadata) the way I want to.

I've installed the Python bindings and was able to run the examples from https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/, but I still couldn't simply swap in my YOLOv8 model for inference.

I wanted to try running my YOLOv8 model specifically on this example (https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps/deepstream-imagedata-multistream). My main doubts are about entries in the config file such as uff-input-blob-name and output-blob-name - I understand they won't be used and that I instead need the custom nvdsinfer_custom_impl_Yolo parser from this repository, is that right?
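For context, the replacement entries I think I need look roughly like this (a sketch based on this repo's config_infer_primary_yoloV8.txt; the model file names below are just placeholders for my own export):

```ini
[property]
# my exported YOLOv8 model (placeholder names)
onnx-file=yolov8s.onnx
model-engine-file=model_b1_gpu0_fp32.engine
labelfile-path=labels.txt
num-detected-classes=80
# these take the place of the UFF-specific uff-input-blob-name / output-blob-name entries
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```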

Could anyone please share some further guidance or tips on how to do this? Thanks in advance!

jvd-monteiro commented 2 weeks ago

So I successfully ran some new tests with the .engine I had already generated and the compiled libnvdsinfer_custom_impl_Yolo.so file. I believe that's the right path, correct? Any further comments are still welcome.

marcoslucianops commented 2 weeks ago

Hi,

Move config_infer_primary_yoloV8.txt, labels.txt and the nvdsinfer_custom_impl_Yolo folder to the apps/deepstream-imagedata-multistream folder, and in the deepstream_imagedata-multistream.py file change the pgie.set_property('config-file-path', ...) call from dstest_imagedata_config.txt to config_infer_primary_yoloV8.txt.
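Once the model is running in that app, extracting the metadata works the same way as in the stock Python examples. A minimal sketch of a pad probe reading the detected objects with pyds (standard deepstream_python_apps pattern; the probe name and the print are just illustrative):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

# In deepstream_imagedata-multistream.py, the change described above is a one-liner:
# pgie.set_property("config-file-path", "config_infer_primary_yoloV8.txt")

def osd_sink_pad_buffer_probe(pad, info, u_data):
    # Grab the batch metadata that the DeepStream pipeline attaches to each buffer
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))

    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            rect = obj_meta.rect_params
            # class id, confidence and bounding box produced by the YOLO bbox parser
            print(frame_meta.frame_num, obj_meta.class_id, obj_meta.confidence,
                  rect.left, rect.top, rect.width, rect.height)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

The deepstream-imagedata-multistream example already registers a similar buffer probe, so you can extend that existing callback with the same loop instead of adding a new one.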