marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

Nvidia Deepstream 6.1 - Segmentation fault (core dumped) #543

Open fasih0001 opened 4 weeks ago

fasih0001 commented 4 weeks ago

I used the same YOLOv5 configuration files provided in https://github.com/marcoslucianops/DeepStream-Yolo.git with my own custom-trained YOLOv5 model. Initially, I exported the .pt weights file to .onnx using the export.py script in the official Ultralytics yolov5 GitHub repository with the following settings:

 python export.py --weights best.pt --include onnx --simplify --half --device

Then the created file best.onnx was converted to the .engine file format using the following command:

trtexec --onnx=best.onnx --saveEngine=simplifiedBest.engine
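For reference, trtexec also accepts an explicit FP16 flag; since the ONNX was exported with --half, the engine could alternatively be built with FP16 tactics enabled (same file names as above):

```shell
# Sketch only: --fp16 enables FP16 builder tactics in stock trtexec.
trtexec --onnx=best.onnx --saveEngine=simplifiedBest.engine --fp16
```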

I updated the configuration files accordingly, as follows: config_infer_primary_yoloV5.txt

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=/opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo/setup.cfg   
onnx-file=best.onnx 
model-engine-file=simplifiedBest.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=4
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
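For context, cluster-mode=2 selects NMS clustering, and the [class-attrs-all] values above map onto standard greedy NMS. A hedged Python illustration (not the repo's actual C++/CUDA post-processing; boxes are assumed to be (x1, y1, x2, y2, score) tuples):

```python
# Greedy NMS sketch using the config values: drop boxes below
# pre-cluster-threshold (0.25), then suppress any box whose IoU with an
# already-kept box is >= nms-iou-threshold (0.45), keeping at most topk.

def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2, ...) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, score_thr=0.25, iou_thr=0.45, topk=300):
    # Highest-scoring boxes first; greedily keep non-overlapping ones.
    boxes = sorted((b for b in boxes if b[4] >= score_thr),
                   key=lambda b: b[4], reverse=True)
    kept = []
    for b in boxes:
        if all(iou(b, k) < iou_thr for k in kept):
            kept.append(b)
        if len(kept) == topk:
            break
    return kept

dets = [(0, 0, 10, 10, 0.9),   # kept (highest score)
        (1, 1, 10, 10, 0.8),   # suppressed: IoU 0.81 with the box above
        (20, 20, 30, 30, 0.7), # kept: no overlap
        (0, 0, 5, 5, 0.1)]     # dropped: below pre-cluster-threshold
print(len(nms(dets)))  # 2
```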

deepstream_app_config.txt

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=3
uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/test.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV5.txt

[tests]
file-loop=0

When I ran it with the above configuration using the command:

deepstream-app -c deepstream_app_config.txt

The following segmentation fault occurred:

0:00:01.454182397 272428 0x55ee5582ff90 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/sources/DeepStream-Yolo/simplifiedBest.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kHALF  images          3x640x640       
1   OUTPUT kHALF  output0         25200x9         

0:00:01.454913311 272428 0x55ee5582ff90 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/sources/DeepStream-Yolo/simplifiedBest.engine
0:00:01.459020776 272428 0x55ee5582ff90 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.1/sources/DeepStream-Yolo/config_infer_primary_yoloV5.txt sucessfully

Runtime commands:
    h: Print this help
    q: Quit

    p: Pause
    r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:194>: Pipeline ready

** INFO: <bus_callback:180>: Pipeline running

Segmentation fault (core dumped)
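As an aside, the output0 shape in the log (25200x9) matches a stock 640x640 YOLOv5 head with 4 classes, so the engine deserialized as expected. A quick sanity check in plain Python (nothing DeepStream-specific):

```python
# YOLOv5 at 640x640 predicts on three strides (8, 16, 32) with 3 anchors
# per grid cell; each prediction is x, y, w, h, objectness + class scores.
strides = [8, 16, 32]
anchors_per_cell = 3
cells = sum((640 // s) ** 2 for s in strides)   # 6400 + 1600 + 400 = 8400
predictions = cells * anchors_per_cell          # 25200, matching output0
channels = 5 + 4                                # 4 classes per the config
print(predictions, channels)                    # 25200 9
```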

Can someone suggest a fix for this problem? I am unable to understand what is causing the segmentation fault.

marcoslucianops commented 4 weeks ago

You didn't follow the instructions on the docs/YOLOv5.md.

fasih0001 commented 4 weeks ago

I actually did follow the instructions, but the error still occurs.

marcoslucianops commented 4 weeks ago

You need to generate the ONNX file using the exporter script available in this repo, and then generate the engine from the ONNX file you exported.
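A rough sketch of that flow (script name and flags are assumptions here; verify against docs/YOLOv5.md in the repo):

```shell
# Copy the repo's YOLOv5 exporter (assumed name: utils/export_yoloV5.py)
# into your ultralytics yolov5 clone, then export the ONNX with it:
python3 export_yoloV5.py -w best.pt --dynamic
# Point onnx-file= in config_infer_primary_yoloV5.txt at the result and
# let DeepStream build the engine itself, instead of pre-building one
# with trtexec from an ONNX produced by ultralytics' export.py.
```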