marcoslucianops / DeepStream-Yolo-Seg

NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 implementation for YOLO-Segmentation models
MIT License

Segmentation fault (core dumped) #10

Open · avBuffer opened this issue 8 months ago

avBuffer commented 8 months ago

1> run: ~/work/yolo_deepstream/DeepStream-Yolo-Seg$ deepstream-app -c deepstream_app_config.txt

2> error logs:

WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /home/work/yolo_deepstream/DeepStream-Yolo-Seg/yolov8s-seg.onnx_b1_gpu0_fp32.engine open error
0:00:02.666741042 13265 0x55aeb942c120 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() [UID = 1]: deserialize engine from file :/home/work/yolo_deepstream/DeepStream-Yolo-Seg/yolov8s-seg.onnx_b1_gpu0_fp32.engine failed
0:00:02.667629067 13265 0x55aeb942c120 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() [UID = 1]: deserialize backend context from engine from file :/home/work/yolo_deepstream/DeepStream-Yolo-Seg/yolov8s-seg.onnx_b1_gpu0_fp32.engine failed, try rebuild
0:00:02.667649670 13265 0x55aeb942c120 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
0:00:58.957847518 13265 0x55aeb942c120 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: serialize cuda engine to file: /home/work/yolo_deepstream/DeepStream-Yolo-Seg/yolov8s-seg.onnx_b1_gpu0_fp32.engine successfully
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT  kFLOAT images  3x640x640
1 OUTPUT kFLOAT output1 32x160x160
2 OUTPUT kFLOAT output0 116x8400
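The reported output shapes match the standard YOLOv8-seg head: output0 packs 4 box parameters + 80 class scores + 32 mask coefficients per anchor, and output1 holds the 32 mask prototypes at 160x160. A minimal sketch (plain Python, no model file needed; assumes the stock 80-class COCO head and strides 8/16/32 on a 640x640 input) checking that arithmetic:

```python
# Hypothetical sanity check of the YOLOv8-seg output layout shown in the log.
# Assumes the standard head: 80 COCO classes, 32 mask coefficients,
# detection strides 8/16/32 on a 640x640 input.

NUM_CLASSES = 80      # COCO
NUM_MASK_COEFFS = 32  # matches output1's 32 prototype channels
BBOX_PARAMS = 4       # cx, cy, w, h

channels = BBOX_PARAMS + NUM_CLASSES + NUM_MASK_COEFFS

def anchor_count(input_size: int, strides=(8, 16, 32)) -> int:
    """Total grid cells across the three detection scales."""
    return sum((input_size // s) ** 2 for s in strides)

anchors = anchor_count(640)
print(f"output0 expected shape: {channels}x{anchors}")   # 116x8400
print(f"output1 expected shape: {NUM_MASK_COEFFS}x160x160")
```

If these numbers did not match the engine info above, the custom parser's assumptions about the tensor layout would be a likely segfault candidate.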

0:00:58.974206457 13265 0x55aeb942c120 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:/home/titanx/work/yolo_deepstream/DeepStream-Yolo-Seg/config_infer_primary_yoloV8_seg.txt sucessfully

Runtime commands:
  h: Print this help
  q: Quit
  p: Pause
  r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source. To go back to the tiled display, right-click anywhere on the window.

PERF: FPS 0 (Avg)
PERF: 0.00 (0.00)
** INFO: : Pipeline ready

** INFO: : Pipeline running

Segmentation fault (core dumped)
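A backtrace would show whether the crash happens inside the custom output parser or elsewhere in the pipeline. One way to get it (a sketch; assumes gdb is installed and uses the same config file as the command above):

```shell
# Allow core dumps so a crash leaves a core file behind
ulimit -c unlimited
echo "core dump limit: $(ulimit -c)"

# Run the app under gdb to catch the fault directly (commented out here,
# since it needs the DeepStream install and config from the logs above):
# gdb --args deepstream-app -c deepstream_app_config.txt
# At the (gdb) prompt: type "run", then "bt" after the segfault for a backtrace
```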

Abhijeet241093 commented 1 week ago

Experiencing the same issue on Jetson Nano (using DeepStream 7.0).

PERF: FPS 0 (Avg)
PERF: 0.00 (0.00)
** INFO: : Pipeline ready

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: : Pipeline running

Segmentation fault (core dumped)
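On Jetson, raising the GStreamer log level can narrow down which element dies right before the fault (a sketch; `GST_DEBUG` is GStreamer's standard log-level variable, not specific to DeepStream):

```shell
# Raise the GStreamer log level (1=ERROR ... 5=DEBUG); 3 adds FIXME/WARNING detail
export GST_DEBUG=3
echo "GST_DEBUG set to: $GST_DEBUG"

# Re-run and capture stderr (commented out: needs the DeepStream setup above)
# deepstream-app -c deepstream_app_config.txt 2> gst_debug.log
# The last elements logged before "Segmentation fault" are the prime suspects
```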