marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

NVDSINFER_CONFIG_FAILED when using 2+ video sources simultaneously #509

flmello closed this issue 4 months ago

flmello commented 4 months ago

I'm running deepstream-app and a Python script correctly when I provide just 1 video source, but they crash when 2 or more streams are provided. It seems there is something I misunderstood in the config file. Note that when I swap the config file (currently for YOLOv5) for another one (for TrafficNet, for instance), the pipeline is created OK. This is the error I get:

ubuntu@ubuntu:~/edge$ python3 test4_yolov5.py -i rtsp://admin:hbyt12345@10.21.45.19:554 rtsp://admin:hbyt12345@10.21.45.19:554

(gst-plugin-scanner:31261): GStreamer-WARNING **: 10:42:13.426: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libcustom2d_preprocess.so': /usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libcustom2d_preprocess.so: undefined symbol: NvBufSurfTransformAsync

(gst-plugin-scanner:31261): GStreamer-WARNING **: 10:42:13.505: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory

(gst-plugin-scanner:31261): GStreamer-WARNING **: 10:42:13.515: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_preprocess.so': /usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_preprocess.so: undefined symbol: NvBufSurfTransformAsync

(gst-plugin-scanner:31261): GStreamer-WARNING **: 10:42:13.540: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory

Creating Pipeline
Creating stream-muxer
Creating stream-demuxer
Creating source bin source-bin-00 ( rtsp://admin:hbyt12345@10.21.45.19:554 )
Creating uri-decode-bin
Creating source bin source-bin-01 ( rtsp://admin:hbyt12345@10.21.45.19:554 )
Creating uri-decode-bin
Creating primary-gpu-inference-engine
PGIE batch size : 1
WARNING: Overriding infer-config batch-size 1 with number of sources 2

Creating nvtracker
Creating nvtee
Creating nvtiler
Creating convertor 0
Creating convertor 1
Creating convertor tile
Creating onscreendisplay 0
Creating onscreendisplay 1
Creating onscreendisplay tile
Creating convertor_postosd 0
Creating convertor_postosd 1
Creating convertor_postosd tile
Creating capsfilter 0
Creating capsfilter 1
Creating capsfilter tile
Creating h264-encoder 0
Creating h264-encoder 1
Creating h264-encoder tile
Creating rtp-h264-payload 0
Creating rtp-h264-payload 1
Creating rtp-h264-payload tile
Creating udp-sink 0
Creating udp-sink 1
Creating udp-sink tile
Adding elements to Pipeline
Linking elements in the Pipeline
demux source 0

demux source 1

Launched RTSP Streaming at rtsp://localhost:8554/stream0
Launched RTSP Streaming at rtsp://localhost:8554/stream1
Launched RTSP Streaming at rtsp://localhost:8554/tiled

Starting pipeline

Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
0:00:01.964180124 31259 0x1bc2da30 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: Trying to create engine from model files
onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
WARNING: [TRT]: Output type must be INT32 for shape outputs
WARNING: [TRT]: Output type must be INT32 for shape outputs
WARNING: [TRT]: Output type must be INT32 for shape outputs
WARNING: [TRT]: Output type must be INT32 for shape outputs
WARNING: [TRT]: Output type must be INT32 for shape outputs
WARNING: [TRT]: Output type must be INT32 for shape outputs

Building the TensorRT Engine

WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
Building complete

0:01:59.968817713 31259 0x1bc2da30 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: serialize cuda engine to file: /home/ubuntu/edge/model_b2_gpu0_fp32.engine successfully
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x640x640
1 OUTPUT kFLOAT boxes 25200x4
2 OUTPUT kFLOAT scores 25200x1
3 OUTPUT kFLOAT classes 25200x1

0:01:59.997884196 31259 0x1bc2da30 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:01:59.997928294 31259 0x1bc2da30 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() [UID = 1]: deserialized backend context :/home/ubuntu/edge/model_b2_gpu0_fp32.engine failed to match config params
0:02:00.032278914 31259 0x1bc2da30 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() [UID = 1]: build backend context failed
0:02:00.032343685 31259 0x1bc2da30 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() [UID = 1]: generate backend failed, check config file settings
0:02:00.032389767 31259 0x1bc2da30 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:02:00.032411369 31259 0x1bc2da30 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Config file path: /home/ubuntu/edge/config/dstest4_pgie_nvinfer_yolov5_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[NvMultiObjectTracker] De-initialized
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-gpu-inference-engine: Config file path: /home/ubuntu/edge/config/dstest4_pgie_nvinfer_yolov5_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app

--- 0.011171579360961914 seconds ---

Attached are the script test4_yolov5.py.txt and the config file dstest4_pgie_nvinfer_yolov5_config.txt
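The failure in the log above can be sketched in a few lines of plain Python: nvinfer raises the requested batch size to the number of sources, and engine loading fails when that exceeds the maxBatchSize the engine was built with. The function names below are purely illustrative and not part of the DeepStream API; the real check lives in NvDsInferContextImpl::checkBackendParams().

```python
# Illustrative sketch of the batch-size check that fails in the log above.
# These names are hypothetical; they mirror nvinfer's behavior, not its API.

def effective_batch_size(config_batch_size: int, num_sources: int) -> int:
    """nvinfer warns 'Overriding infer-config batch-size N with number of
    sources M' and uses the source count when it is larger."""
    return max(config_batch_size, num_sources)

def check_backend_params(engine_max_batch: int, requested: int) -> None:
    """Mirror of the error in the log: the deserialized engine was built
    with maxBatchSize 1, but a batch of 2 was requested."""
    if requested > engine_max_batch:
        raise RuntimeError(
            f"Backend has maxBatchSize {engine_max_batch} "
            f"whereas {requested} has been requested")

# Two RTSP sources with a config batch-size of 1 request batch 2,
# which an engine built for batch 1 cannot serve:
requested = effective_batch_size(config_batch_size=1, num_sources=2)
try:
    check_backend_params(engine_max_batch=1, requested=requested)
except RuntimeError as err:
    print(err)  # prints the same message as the log
```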

flmello commented 4 months ago

So, it turns out that my generated model supports just 1 image in the pipeline, unlike TrafficNet, which supports more than one. I was setting batch-size to the number of video inputs, and for my YOLO model I can't do that; it must be set to 1. batch-size=1 in the config file turns out to be the correct setting.
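For reference, the workaround described above corresponds to a single key in the nvinfer configuration file. This is a sketch of the [property] section only, with all other keys omitted:

```
[property]
# batch-size must not exceed the maxBatchSize the TensorRT engine
# was built with; for a model exported with a fixed batch of 1,
# keep it at 1 regardless of the number of sources.
batch-size=1
```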

maxgameone commented 4 months ago

I have encountered the same problem when using the Python API. May I know how to solve it, specifically?

flmello commented 4 months ago

It turns out that my YOLO model was exported with a batch size of 1, while I was setting batch-size in the config file to 2 or more. I had to keep it at 1. Changing this solved my error.

marcoslucianops commented 3 months ago

You exported the model with --batch 1. Export it with --dynamic instead, or set the desired batch size with --batch in the exporter.
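Assuming the exporter scripts shipped with this repo (the script name and flags below are for YOLOv5; check the repo docs for the exact script for your model version), the export would look something like:

```
# Dynamic batch axis (nvinfer can then build the engine for any source count):
python3 export_yoloV5.py -w yolov5s.pt --dynamic

# Or a fixed batch matching the number of sources:
python3 export_yoloV5.py -w yolov5s.pt --batch 2
```

After re-exporting, it can also be worth deleting any previously generated .engine file (here, model_b2_gpu0_fp32.engine) so that nvinfer rebuilds the engine from the new ONNX instead of picking up a stale one.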

flmello commented 2 months ago

Now I exported the model with --dynamic. I set the nvstreammux batch-size to 2, and set the nvinfer PGIE batch-size to 2. Just in case, I also set batch-size=2 in config_infer_primary.txt.

However, the pipeline doesn't run; it says "Backend has maxBatchSize 1 whereas 2 has been requested".

So either the exporting script is not exporting batch sizes greater than 1, or something is wrongly set in the pipeline script. Do you have any suggestions?