marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

[Inference Error] Segmentation Fault in Jetson Nano #408

Open anjaeju opened 12 months ago

anjaeju commented 12 months ago

Hello, and thanks for the great project for running the YOLO series with the DeepStream library. I'm experiencing an error when trying to run inference with DeepStream-Yolo on a Jetson Nano (4GB).

Problem Description

The main issues seem to be a DeepStream library build error and an error specific to the Jetson Nano device. When I attempt inference, the pipeline crashes with a segmentation fault.

Steps to Reproduce

The problem occurs when I perform the following steps:

Tried Solutions

I have tried several solutions to mitigate the problem:

Request

I would greatly appreciate any suggestions or guidance on how to fix these issues. I'm particularly interested in understanding why the system crashes with a segmentation fault when attempting inference.

Environment

TensorRT Version: 8.2.1.9 + CUDA 10.2
GPU Type: Jetson Nano (Tegra210)
JetPack: 4.6.1
CUDA Version: 10.2
cuDNN Version: 8.2.1 + CUDA 10.2 (arm64)
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.8 (conda virtual environment)
TensorFlow Version (if applicable): N/A
PyTorch Version (if applicable): 1.11.0
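As a sanity check, the versions above can be confirmed on the device itself. A hedged sketch: these file paths exist on standard JetPack 4.x images, but may differ on other setups.

```shell
# Print the L4T/JetPack release and CUDA version actually installed.
checked=0
for f in /etc/nv_tegra_release /usr/local/cuda/version.txt; do
  checked=$((checked + 1))
  if [ -r "$f" ]; then
    echo "== $f =="
    cat "$f"
  else
    echo "not found: $f"
  fi
done

# TensorRT / cuDNN package versions (run on the device):
#   dpkg -l | grep -E 'nvinfer|cudnn'
```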

Configuration file: deepstream_app_config.txt

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=3
#uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
uri=file:///home/bass/DeepStream-Yolo/test_video/10sec_1080p.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=0 #1
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV8.txt

[tests]
file-loop=0

config_infer_primary_yoloV8.txt

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
#onnx-file=yolov8n_best.onnx
model-engine-file=yolov8n_best.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#force-implicit-batch-dim=1
#workspace-size=1000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
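Segfaults at inference time are often caused by a missing engine/label file or by a custom parser library built against the wrong CUDA version. A pre-flight sketch for the files referenced in the config above (assuming they are resolved relative to the directory deepstream-app is launched from; adjust `CFG_DIR` as needed):

```shell
# Directory containing the config files (assumption: current directory).
CFG_DIR=${CFG_DIR:-.}

# The custom parser must be built for the Jetson's CUDA version (10.2 on
# JetPack 4.6.1). On the device, rebuild it with the matching CUDA_VER:
#   CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo

# Verify every file referenced by config_infer_primary_yoloV8.txt exists:
missing=0
for f in yolov8n_best.engine labels.txt \
         nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so; do
  if [ -e "$CFG_DIR/$f" ]; then
    echo "found:   $f"
  else
    echo "missing: $f"
    missing=1
  fi
done
```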
marcoslucianops commented 11 months ago

Can you send the log from the terminal?
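A minimal way to capture that log for sharing (a sketch; it assumes `deepstream-app` is on PATH and is launched from the directory containing the configs; the `echo` below is only a placeholder for the real command):

```shell
# Duplicate everything the pipeline prints to both the terminal and a file,
# so the full output up to the segfault can be attached to the issue.
run_and_log() {
  "$@" 2>&1 | tee deepstream.log
}

# Placeholder command; on the Jetson, run instead:
#   run_and_log deepstream-app -c deepstream_app_config.txt
run_and_log echo "pipeline output goes here"

# If it segfaults, a gdb backtrace also helps locate the crashing library:
#   gdb -ex run -ex bt --args deepstream-app -c deepstream_app_config.txt
```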