marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

Segmentation fault (core dumped) #208

Closed formerlya closed 1 year ago

formerlya commented 2 years ago

yolov5 v5.0, tensorrt v5.0 (screenshots attached). The .wts, .engine, and .cfg files were generated with tensorrt, and they test fine in tensorrt. ① When I put them into DeepStream and run deepstream-app -c deepstream_app_config.txt (screenshot), and ② when I also load libmyplugins.so and run LD_PRELOAD=./libmyplugins.so deepstream-app -c deepstream_app_config.txt (screenshots), I get the segmentation fault shown in the screenshots …… I don't know why……

marcoslucianops commented 2 years ago

https://github.com/marcoslucianops/DeepStream-Yolo#requirements

https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/docs/YOLOv5.md

formerlya commented 2 years ago

So the files generated by tensorrt can't be used with DeepStream? But I have seen tutorials doing this with DeepStream 5.1 — is it only the 6.0 series that can't? There is very little information on jetson + yolov5 + tensorrt + deepstream 6.0 to learn from, and I'm a pure beginner. o(╥﹏╥)o

marcoslucianops commented 2 years ago

It is not supposed to work with the latest repo files. Please use the gen_wts_yoloV5.py file to convert the YOLOv5 models.
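For reference, a rough sketch of that conversion flow as described in the YOLOv5.md doc linked above; treat the script options as assumptions and verify them against the current docs, since they have changed between repo versions:

```sh
# Clone the YOLOv5 repo and copy the converter from DeepStream-Yolo/utils into it
git clone https://github.com/ultralytics/yolov5
cp DeepStream-Yolo/utils/gen_wts_yoloV5.py yolov5/
cd yolov5

# Generate the .cfg and .wts files from the .pt weights
# (-w selects the weights file; assumed from the repo docs, check --help)
python3 gen_wts_yoloV5.py -w yolov5s.pt

# Copy the generated files to the DeepStream-Yolo folder; deepstream-app
# builds the TensorRT engine itself on the first run
cp yolov5s.cfg yolov5s.wts ../DeepStream-Yolo/
```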

tuneshverma commented 1 year ago

Hi, I am getting a "segmentation fault" error. It only happens occasionally; the rest of the time the pipeline works fine. The error occurs when the pipeline reloads for processing after we send it a message.

[Ubuntu 20.04] [CUDA 11.6] [NVIDIA Driver 510.47.03] [NVIDIA DeepStream SDK 6.1] [GStreamer 1.16.2] [DeepStream-Yolo]

0:00:52.974549997 140 0x449bd00 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/instagng_ds_inventory/test2.onnx_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input:0   3x224x224
1   OUTPUT kFLOAT dropout_1 128
2   OUTPUT kFLOAT dense     29

0:00:52.975675261 140 0x449bd00 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/instagng_ds_inventory/test2.onnx_b1_gpu0_fp32.engine
0:00:52.976119747 140 0x449bd00 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 2]: Load new model:test_classifier_config_dli.txt successfully
Segmentation fault (core dumped)

Ideally, after "nvinference-engine> [UID 2]: Load new model:test_classifier_config_dli.txt successfully", the following should happen:

Deserialize yoloLayer plugin: yolo
0:00:41.326322620 140 0x449bd00 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/instagng_ds_inventory/model_b2_gpu0_fp32.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT data              3x608x608
1   OUTPUT kFLOAT num_detections    1
2   OUTPUT kFLOAT detection_boxes   22743x4
3   OUTPUT kFLOAT detection_scores  22743
4   OUTPUT kFLOAT detection_classes 22743

I believe the error is happening during deserialization. Please help!

marcoslucianops commented 1 year ago

Can you use gdb to debug the segmentation fault?
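For reference, a minimal way to capture a backtrace, assuming the crash can be reproduced from the command line (the config file name below is a placeholder; for a Python pipeline, run gdb against python3 and your app script instead):

```sh
# Launch the app under gdb so the crash point can be inspected
gdb --args deepstream-app -c deepstream_app_config.txt

# Inside gdb:
#   (gdb) run                  # reproduce the segmentation fault
#   (gdb) bt                   # backtrace of the crashing thread
#   (gdb) thread apply all bt  # backtraces for all threads, if needed
```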

lyj201644070230 commented 1 year ago

Hello, I have run into the same problem as you. Have you managed to solve it?

tuneshverma commented 1 year ago

Hi, no I was not able to solve it.

marcoslucianops commented 1 year ago

Try using the new ONNX export method.
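For reference, a rough sketch of the ONNX export flow from the repo's YOLO docs; the script name and flags below (export_yoloV5.py, -w, --dynamic) are assumptions taken from those docs and may differ for other model versions, so check the doc for your model:

```sh
# Copy the exporter from DeepStream-Yolo/utils into the YOLOv5 repo
cp DeepStream-Yolo/utils/export_yoloV5.py yolov5/
cd yolov5
pip3 install onnx onnxsim onnxruntime

# -w selects the .pt weights, --dynamic enables dynamic batch size
# (flags assumed from the repo docs; check python3 export_yoloV5.py --help)
python3 export_yoloV5.py -w yolov5s.pt --dynamic

# Then point the config_infer file at the generated .onnx (onnx-file=...) and
# delete the old .engine so deepstream-app rebuilds it on the next run
```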

Bo-Yu-Columbia commented 1 year ago

Hi! I am also having this problem, except that I get a segmentation fault 9 times out of 10. Is there any way to fix this? I'm using a custom YOLOv8 model and followed the steps in the project (running on DeepStream 6.2).

marcoslucianops commented 1 year ago

Can you send the output from the terminal?