Open colinleng opened 10 months ago
To run inference (`nvinfer`) and the other NVIDIA plugins, you need the `nvstreammux`.
Can I use a `tee` before the `nvstreammux`, i.e. link the `tee` to the `nvstreammux`?
Yes
Because if I use the `nvstreammux` component, I also need the `nvmultistreamtiler` component when saving video frames; otherwise the saved frames come out black. But when I use `nvmultistreamtiler`, the frame numbers I get from `frame_meta` are all 0. So I want to split the video with a `tee`: one branch goes through `nvstreammux` for recognition, while the other branch skips `nvstreammux` and records the video. Is this approach OK?
The purpose of `nvstreammux` is to optimize inference over multiple cameras by putting the frames/buffers into batches, but it is required even when using only one stream. If you are running inference on only one stream, you don't need the `nvmultistreamtiler`. If you place the `tee` before the `nvstreammux` for the output elements, you will not be able to get the video output with the bboxes (from `nvdsosd`), because that branch never passes through the inference and OSD elements.
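The split pipeline described above can be sketched with `gst-launch-1.0`. This is a minimal sketch, not a tested command: the input URI, the `nvinfer` config file path, and the encoder choice (`x264enc`) are hypothetical placeholders; the DeepStream element names (`nvstreammux`, `nvinfer`, `nvdsosd`, `nvvideoconvert`) are real. Note that the recording branch below bypasses `nvstreammux`/`nvinfer`, so, as stated above, the saved file will not contain the bboxes.

```shell
# Sketch only: paths and config file are placeholders.
gst-launch-1.0 \
  uridecodebin uri=file:///opt/videos/input.mp4 ! nvvideoconvert ! tee name=t \
  t. ! queue ! mux.sink_0 \
    nvstreammux name=mux batch-size=1 width=1920 height=1080 ! \
    nvinfer config-file-path=daytimelamp_config.txt ! \
    nvvideoconvert ! nvdsosd ! nveglglessink \
  t. ! queue ! nvvideoconvert ! x264enc ! h264parse ! qtmux ! \
    filesink location=recorded.mp4
```

The `queue` after each `tee` branch is important: without it, the two branches share one streaming thread and the pipeline can stall.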
When I use your project with the DeepStream Python SDK and the `nvstreammux` component, image recognition works fine. However, I am using a single video source without the `nvstreammux` component, and when I try to run recognition that way I get the following error:

```
0:00:08.984108832 29549 0x7f01ff4850 WARN nvinfer gstnvinfer.cpp:1428:gst_nvinfer_process_full_frame: error: NvDsBatchMeta not found for input buffer.
Error: gst-stream-error-quark: NvDsBatchMeta not found for input buffer. (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1428): gst_nvinfer_process_full_frame (): /GstPipeline:pipeline0/GstNvInfer:daytimelamp_inference
```
Can you help me?
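This error is the symptom described at the top of the thread: `nvinfer` looks for `NvDsBatchMeta`, which only `nvstreammux` attaches to buffers, so the stream must pass through `nvstreammux` even when there is just one source. A minimal single-stream sketch (input URI and config file path are hypothetical placeholders) would be:

```shell
# Sketch only: even one stream must go through nvstreammux before nvinfer.
gst-launch-1.0 \
  uridecodebin uri=file:///opt/videos/input.mp4 ! nvvideoconvert ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1920 height=1080 ! \
  nvinfer config-file-path=daytimelamp_config.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
```

With a single stream, `batch-size=1` and linking the source to the `sink_0` request pad of the muxer is enough; no `nvmultistreamtiler` is needed.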