NVIDIA-AI-IOT / yolo_deepstream

YOLO model QAT and deployment with DeepStream & TensorRT
Apache License 2.0

Video Artifacts During Streaming with DeepStream #54

Open GDbbq opened 7 months ago

GDbbq commented 7 months ago

Hello DeepStream Team,

I'm experiencing video artifacts, such as glitching or "garbled" frames, when streaming video using the DeepStream SDK. Attached is a screenshot illustrating the issue I'm encountering.

Could you please help me identify the potential causes for this behavior? What steps can I take to resolve this issue?

Here is the relevant section of my application configuration file for reference:

################################################################################
# Copyright (c) 2018-2022, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source-list]
num-source-bins=16
list=rtsp://192.168.99.151:8554/mystream9;rtsp://192.168.99.151:8554/mystream10;rtsp://192.168.99.151:8554/mystream11;rtsp://192.168.99.151:8554/mystream12;rtsp://192.168.99.151:8554/mystream19;rtsp://192.168.99.151:8554/mystream20;rtsp://192.168.99.151:8554/mystream21;rtsp://192.168.99.151:8554/mystream22;rtsp://192.168.99.151:8554/mystream23;rtsp://192.168.99.151:8554/mystream24;rtsp://192.168.99.151:8554/mystream25;rtsp://192.168.99.151:8554/mystream26;rtsp://192.168.99.151:8554/mystream27;rtsp://192.168.99.151:8554/mystream28;rtsp://192.168.99.151:8554/mystream29;rtsp://192.168.99.151:8554/mystream30
sgie-batch-size=16

[source-attr-all]
enable=1
type=4
num-sources=1
gpu-id=0
cudadec-memtype=0
latency=1500
rtsp-reconnect-interval-sec=0

[sink0]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type=6

#   msg-conv-
msg-conv-config=/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream-test6/configs/dstest6_msgconv_sample_config.yml

#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=257
msg-conv-msg2p-new-api=1
msg-conv-frame-interval=25

msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
#  Provide your msg-broker-conn-str here
msg-broker-conn-str=192.168.11.224;9092;dstest
topic=dstest
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=nvdrmvideosink
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=1
bitrate=10000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
# set profile only for hw encoder, sw encoder selects profile based on sw-preset
profile=4
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=16
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=100000
## Set muxer output width and height
width=1280
height=720
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=1
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
batch-size=16
config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/DeepStream-Yolo/config_infer_primary_yoloV5lite-g-fire.txt

[tests]
file-loop=1
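
To help narrow this down, here is a minimal single-stream check I can run outside the application (a sketch only: uridecodebin, nvvideoconvert, and nveglglessink are standard GStreamer/DeepStream elements, and the URI is one of the sources from [source-list] above; on Jetson an nvegltransform may be needed before the sink). If a single stream already shows the same garbling here, the issue would appear to be upstream of batching and inference.

# Decode one RTSP source on its own to check whether the corruption is
# already present before it reaches the DeepStream pipeline.
gst-launch-1.0 uridecodebin uri=rtsp://192.168.99.151:8554/mystream9 ! \
    nvvideoconvert ! nveglglessink sync=false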


Thank you for your assistance.

[Screenshots of the garbled/glitched output frames are attached.]