Closed flmello closed 4 months ago
Hi, it looks like you need to remove the comment from the model-engine-file line.
model-engine-file overrides the definition of onnx-file. When you remove the comment, you are ignoring the .onnx and using the engine file that already exists. When model-engine-file is not provided, a new engine file will be compiled based on the onnx-file you set.
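For reference, the relevant part of a nvinfer pgie config could look like the sketch below (the .onnx file name is illustrative; model_b1_gpu0_fp32.engine is the engine mentioned in this thread):

```ini
[property]
# Used only when model-engine-file is absent/commented out:
# nvinfer builds a new engine from this model on startup.
onnx-file=yolov5s.onnx
# Uncomment to reuse an engine that already exists (skips the rebuild
# and overrides onnx-file):
model-engine-file=model_b1_gpu0_fp32.engine
```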
But my model_b1_gpu0_fp32.engine is the one created with deepstream-app -c ./config/deepstream_app_config.txt
using the same config file. I removed the comment, but the error stays the same.
It turned out to be a typo: there was a missing "t" in 'NvDsInferYoloCudaEngineGe'.
The correct name is NvDsInferYoloCudaEngineGet.
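A quick way to rule out this kind of misspelling is to check whether the custom parser library actually exports the symbol nvinfer is looking for. This is a minimal sketch using ctypes; the library path is hypothetical and should point at wherever DeepStream-Yolo built libnvdsinfer_custom_impl_Yolo.so:

```python
import ctypes
import os


def exports_symbol(lib_path, symbol):
    """Return True/False if the shared library exports `symbol`,
    or None when the library cannot be loaded at all."""
    if not os.path.exists(lib_path):
        return None
    try:
        lib = ctypes.CDLL(lib_path)
    except OSError:
        return None
    # Accessing a missing symbol on a CDLL raises AttributeError,
    # so hasattr tells us whether the export is present.
    return hasattr(lib, symbol)


if __name__ == "__main__":
    # Hypothetical path -- adjust to your DeepStream-Yolo build output.
    print(exports_symbol("./libnvdsinfer_custom_impl_Yolo.so",
                         "NvDsInferYoloCudaEngineGet"))
```

If this prints False for the name spelled in your pgie config but True for the corrected spelling, the config file typo is the culprit.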
I am trying to use the Deepstream-Yolo setup in my .py script, but I get an error complaining that 'NvDsInferYoloCudaEngineGe' was not found.
To be sure Yolov5 runs with Deepstream, I followed all the setup instructions from this repository. Then I successfully managed to run
deepstream-app -c ./config/deepstream_app_config.txt
. It opens my RTSP stream, does the inference, draws boxes, and so on. Then I took a working example of a Python script (see attachment test4.py.txt) and changed the primary GPU inference engine (pgie) to the one that was working and being used by my
deepstream_app_config.txt
. I expected that replacing the pgie config file would be enough, but it is not able to find NvDsInferYoloCudaEngineGe, although it is there. This is the pgie config file: dstest4_pgie_nvinfer_yolov5_config.txt. And this is the complete log of the error: