marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License
1.45k stars · 357 forks

At first, high FPS then gradually approaches 0 #342

Closed LopezJER closed 1 year ago

LopezJER commented 1 year ago

Hello,

I'm using DeepStream-Yolo with INT8 calibration on my Jetson Xavier NX. I've tested inference on both a local video file and an RTSP stream. In both cases, the pipeline logs high FPS for the first few minutes, but the rate then gradually approaches 0. Here is a sample output:

** INFO: : Pipeline running

PERF: 45.99 (45.96)
PERF: 51.06 (48.61)
PERF: 51.19 (49.50)
PERF: 51.14 (49.94)
PERF: 50.19 (49.99)
PERF: 50.15 (49.99)
PERF: 49.91 (49.99)
PERF: 48.76 (49.84)
PERF: 46.18 (49.41)
PERF: 26.35 (47.08)
PERF: 26.20 (45.16)
PERF: 25.31 (43.50)
PERF: 24.17 (42.00)
PERF: 19.25 (40.36)
PERF: 16.39 (38.75)
PERF: 15.63 (37.30)
PERF: 12.17 (35.80)
PERF: 12.38 (34.49)
PERF: 12.19 (33.31)

PERF: FPS 0 (Avg)
PERF: 8.95 (32.09)
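For context, the two numbers on each PERF line are the FPS measured over the most recent perf-measurement interval and, in parentheses, the running average since startup. A minimal sketch of how such a "current (average)" pair could be produced (this assumes equal-length intervals, which is why the averages it computes differ slightly from the real log):

```python
# Sketch: reproduce the "PERF: current (average)" line format.
# Assumption: every measurement interval has the same length, so the
# running average is just the mean of all per-interval values so far.
def perf_lines(interval_fps):
    lines = []
    total = 0.0
    for i, fps in enumerate(interval_fps, start=1):
        total += fps
        lines.append(f"PERF: {fps:.2f} ({total / i:.2f})")
    return lines

for line in perf_lines([45.99, 51.06, 51.19]):
    print(line)
```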

Here is my deepstream configuration:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=0
rows=1
columns=1
width=640
height=640
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=2
type=4
uri=file:///home/roamercv/Videos/banana_vid_3_29fps.mp4
uri=rtsp://0.0.0.0:8554/front
num-sources=1
gpu-id=0
cudadec-memtype=0
select-rtp-protocol=0

[sink0]
enable=1
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=1
batch-size=1
batched-push-timeout=40000
width=640
height=640
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt

[tests]
file-loop=1
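A side note on the [source0] group above: it contains duplicate type= and uri= keys, presumably left over from switching between the file and the RTSP source. Which value takes effect depends on the parser, so it is safer to keep only one of each. As an illustration only (this uses Python's configparser, not DeepStream's actual key-file parser), a non-strict INI parser keeps the last duplicate:

```python
# Sketch (NOT the DeepStream parser): demonstrate that a duplicated key
# in an INI-style file can silently resolve to the last value, so type=4
# and the RTSP uri would win here. DeepStream's parser may differ, so
# removing or commenting out the unused lines is the safe fix.
import configparser

cfg_text = """
[source0]
enable=1
type=2
type=4
uri=file:///home/roamercv/Videos/banana_vid_3_29fps.mp4
uri=rtsp://0.0.0.0:8554/front
"""

cfg = configparser.ConfigParser(strict=False)  # strict=True would raise on duplicates
cfg.read_string(cfg_text)
print(cfg["source0"]["type"])  # the later duplicate wins in this parser
print(cfg["source0"]["uri"])
```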

What could be causing the drop in FPS, and how could I fix it? Thank you for your help.

LopezJER commented 1 year ago

It turned out to be a temperature issue. Testing in a well-ventilated area improved the inference speed.
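For anyone hitting the same symptom: a gradual FPS decay like this on a Jetson is consistent with thermal throttling. It can be checked with tegrastats, or by reading the standard Linux thermal-zone sysfs interface, as in the sketch below (zone names and throttle thresholds vary by Jetson model, so treat the specifics as assumptions):

```python
# Sketch: read the board's thermal sensors via the generic Linux
# /sys/class/thermal interface to see how hot the module is running.
# Assumption: standard sysfs layout; zone names differ per Jetson model.
from pathlib import Path

def millideg_to_c(raw: str) -> float:
    """Convert a sysfs temperature reading (millidegrees C) to degrees C."""
    return int(raw.strip()) / 1000.0

def read_thermal_zones(base="/sys/class/thermal"):
    zones = {}
    for zone in sorted(Path(base).glob("thermal_zone*")):
        try:
            name = (zone / "type").read_text().strip()
            temp_c = millideg_to_c((zone / "temp").read_text())
        except (OSError, ValueError):
            continue  # skip zones we cannot read
        zones[name] = temp_c
    return zones

if __name__ == "__main__":
    for name, temp_c in read_thermal_zones().items():
        print(f"{name}: {temp_c:.1f} C")
```

Watching these values while the pipeline runs should show the temperature climbing as the FPS falls if throttling is the cause.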