marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

gst-stream-error-quark: NvDsBatchMeta not found for input buffer #449

Open colinleng opened 10 months ago

colinleng commented 10 months ago

When I use your project with the DeepStream Python SDK, image recognition works fine when the pipeline includes the nvstreammux component. However, I'm using a single video source without the nvstreammux component. When I try to run inference this way, I get the following error:

0:00:08.984108832 29549 0x7f01ff4850 WARN nvinfer gstnvinfer.cpp:1428:gst_nvinfer_process_full_frame: error: NvDsBatchMeta not found for input buffer. Error: gst-stream-error-quark: NvDsBatchMeta not found for input buffer. (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1428): gst_nvinfer_process_full_frame (): /GstPipeline:pipeline0/GstNvInfer:daytimelamp_inference

Can you help me?

marcoslucianops commented 10 months ago

To run inference (nvinfer) and other NVIDIA plugins, you need the nvstreammux.
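In other words, even a single-source pipeline must route the stream through nvstreammux before nvinfer, with `batch-size=1`. Below is a minimal sketch that builds a gst-launch-style pipeline description; the URI, config path, and resolution are placeholder assumptions, not values from this repo:

```python
def single_stream_pipeline(uri: str, config: str) -> str:
    """Return a gst-launch-1.0 style pipeline description that places
    nvstreammux in front of nvinfer, even for a single stream.
    All paths and the 1920x1080 resolution are placeholders."""
    return (
        f"uridecodebin uri={uri} ! m.sink_0 "
        "nvstreammux name=m batch-size=1 width=1920 height=1080 ! "
        f"nvinfer config-file-path={config} ! "
        "nvvideoconvert ! nvdsosd ! nveglglessink"
    )

print(single_stream_pipeline("file:///tmp/video.mp4", "config_infer.txt"))
```

The key point is the ordering: nvstreammux attaches the NvDsBatchMeta that nvinfer looks for, which is exactly what the error above says is missing.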

colinleng commented 10 months ago

Can I use a tee before the nvstreammux, i.e. link the tee to the nvstreammux?

marcoslucianops commented 10 months ago

Yes

colinleng commented 10 months ago

Because if I use the nvstreammux component, I also need the nvmultistreamtiler component when saving video frames; otherwise, the saved frames are black. But when using nvmultistreamtiler, the frame numbers I get from frame_meta are all 0. So I want to split the video with a tee component: one branch goes through nvstreammux for recognition, while the other branch records the video without nvstreammux. Is this approach OK?

marcoslucianops commented 10 months ago

The nvstreammux optimizes inference for multiple cameras by putting the frames/buffers into batches, but it's required even when using only one stream. If you are doing inference on only one stream, you don't need the nvmultistreamtiler. If you put the tee before the nvstreammux and take the output elements from the other tee branch, you will not be able to get the video output with the bboxes (drawn by the nvdsosd).
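The trade-off above can be sketched as a pipeline description: the tee splits the decoded stream, one branch goes through nvstreammux → nvinfer → nvdsosd, and the other branch records the raw frames. Since nvdsosd sits only on the inference branch, the recorded file has no bboxes. The element names, encoder choice, and paths here are illustrative assumptions:

```python
def tee_pipeline(uri: str, config: str, out_path: str) -> str:
    """Sketch of a tee placed before nvstreammux.
    Branch 1 runs inference (nvstreammux is still required before nvinfer).
    Branch 2 records the raw stream; bboxes drawn by nvdsosd on branch 1
    never reach this branch, so the saved file has no overlays."""
    return (
        f"uridecodebin uri={uri} ! tee name=t "
        # Inference branch: batch the single stream, run nvinfer, draw overlays.
        "t. ! queue ! m.sink_0 "
        "nvstreammux name=m batch-size=1 width=1920 height=1080 ! "
        f"nvinfer config-file-path={config} ! "
        "nvvideoconvert ! nvdsosd ! fakesink "
        # Recording branch: encode the untouched frames to a file.
        "t. ! queue ! nvvideoconvert ! x264enc ! h264parse ! qtmux ! "
        f"filesink location={out_path}"
    )

print(tee_pipeline("file:///tmp/video.mp4", "config_infer.txt", "out.mp4"))
```

Each tee branch needs its own queue so the two branches can run independently without stalling each other.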