Closed jvd-monteiro closed 1 week ago
So I successfully ran some new tests with the .engine file I had already generated and the compiled libnvdsinfer_custom_impl_Yolo.so. I believe that's the path, right? Any further comments are still welcome.
Hi,
Move the config_infer_primary_yoloV8.txt, labels.txt, and nvdsinfer_custom_impl_Yolo to the apps/deepstream-imagedata-multistream folder, and change the pgie.set_property('config-file-path', ...) argument from dstest_imagedata_config.txt to config_infer_primary_yoloV8.txt in the deepstream_imagedata-multistream.py file.
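The config-file-path change above can be sketched as a small patch helper (a sketch only: the helper name is made up, and the file names come from this thread, so adjust them to your layout):

```python
from pathlib import Path

def point_pgie_at_yolo_config(app_path: str) -> None:
    """Rewrite the pgie config path in the sample app.

    Sketch: the stock deepstream_imagedata-multistream.py sets
    pgie.set_property('config-file-path', "dstest_imagedata_config.txt"),
    so a plain filename substitution is enough here.
    """
    source = Path(app_path).read_text()
    patched = source.replace("dstest_imagedata_config.txt",
                             "config_infer_primary_yoloV8.txt")
    Path(app_path).write_text(patched)
```

Editing the one line by hand works just as well; the helper only saves you from re-doing it after a fresh checkout of deepstream_python_apps.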
Hello!
I've successfully run my YOLOv8 model on DeepStream following the recommendations of this repository. Nevertheless, I'm now having trouble understanding how to extract metadata (https://github.com/marcoslucianops/DeepStream-Yolo?tab=readme-ov-file#extract-metadata) the way I want to...
I've installed the Python bindings and was able to run the examples from https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/, but I still couldn't simply swap in my YOLOv8 model to run inference.
I wanted to try running my YOLOv8 model specifically on this example (https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps/deepstream-imagedata-multistream). My main doubts concern entries in the config file like uff-input-blob-name and output-blob-name: I understand they won't be used and that I need to use the custom nvdsinfer_custom_impl_Yolo parser from this repository instead, is that right?
Could anyone please share some further guidance or tips on how to do this? Thanks in advance!
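For reference, the YOLO-specific entries that take the place of uff-input-blob-name/output-blob-name in config_infer_primary_yoloV8.txt look roughly like this (a sketch based on the DeepStream-Yolo README; the engine file name is an assumption and depends on how you built it):

```ini
[property]
# Engine previously generated from the YOLOv8 export (name is an assumption)
model-engine-file=model_b1_gpu0_fp32.engine
labelfile-path=labels.txt
# The custom library decodes the YOLO output tensors, so the UFF-style
# blob-name keys are not needed with this setup
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```

With these keys in place, nvinfer calls the compiled libnvdsinfer_custom_impl_Yolo.so to parse detections, and the downstream metadata extraction in the Python app works the same as with the stock models.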