marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

I am trying to load multiple engines. Primary engine failing #572

Open Shehjad-Ishan opened 3 weeks ago

Shehjad-Ishan commented 3 weeks ago
deepstream-app -c deepstream_app_config.txt --gst-fatal-warnings
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /media/sigmind/URSTP_HDD1414/DeepStream-Yolo/gie1/model_b4_gpu0_fp32.engine open error
0:00:06.385676825 61470 0x5588b390b410 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 1]: deserialize engine from file :/media/sigmind/URSTP_HDD1414/DeepStream-Yolo/gie1/model_b4_gpu0_fp32.engine failed
0:00:06.533558860 61470 0x5588b390b410 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 1]: deserialize backend context from engine from file :/media/sigmind/URSTP_HDD1414/DeepStream-Yolo/gie1/model_b4_gpu0_fp32.engine failed, try rebuild
0:00:06.533591201 61470 0x5588b390b410 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:400: One or more weights outside the range of INT32 was clamped

Building the TensorRT Engine

Building complete

0:03:59.025932270 61470 0x5588b390b410 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2034> [UID = 1]: serialize cuda engine to file: /media/sigmind/URSTP_HDD1414/DeepStream-Yolo/model_b4_gpu0_fp32.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT input           3x608x608       min: 1x3x608x608     opt: 4x3x608x608     Max: 4x3x608x608     
1   OUTPUT kFLOAT boxes           22743x1x4       min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT confs           22743x2         min: 0               opt: 0               Max: 0               

0:03:59.264043486 61470 0x5588b390b410 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/media/sigmind/URSTP_HDD1414/DeepStream-Yolo/gie1/config_infer_primary.txt sucessfully

Runtime commands:
    h: Print this help
    q: Quit

    p: Pause
    r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.

**PERF:  FPS 0 (Avg)    
**PERF:  0.00 (0.00)    
** INFO: <bus_callback:239>: Pipeline ready

** INFO: <bus_callback:225>: Pipeline running

Segmentation fault (core dumped)
Primary engine config (`gie1/config_infer_primary.txt`):

```ini
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=gie1/yolo-obj.cfg
model-file=gie1/yolo-obj_v3.weights
onnx-file=yolov4_-1_3_608_608_dynamic.onnx
model-engine-file=model_b4_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels_cus.txt
batch-size=4
network-mode=0
num-detected-classes=2
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=0
symmetric-padding=1
force-implicit-batch-dim=0
#workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=/media/sigmind/URSTP_HDD1414/DeepStream-Yolo/gie1/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
threshold=1.2

[class-attrs-1]
nms-iou-threshold=0.3
pre-cluster-threshold=0.3
topk=100

```

`deepstream_app_config.txt` (GIE sections):

```ini
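Note also the warning at the top of the log: the `threshold` key is deprecated in favor of `pre-cluster-threshold`. Since detection confidences lie in [0, 1], a value of 1.2 would reject every detection. A sketch of the `[class-attrs-all]` section using the replacement key (the 0.25 value is an illustrative assumption, not taken from the original config):

```ini
[class-attrs-all]
# 'threshold' is deprecated; use 'pre-cluster-threshold' instead.
# 0.25 is an assumed example value; confidences range 0-1, so 1.2 filters everything out.
pre-cluster-threshold=0.25
```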
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=gie1/config_infer_primary.txt

[secondary-gie0]
enable=0
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=1
nvbuf-memory-type=0
config-file=FAN_URSTP/config_infer_secondary_vehicletypenet.txt

```
marcoslucianops commented 7 hours ago

You can't use the weights and ONNX files together:

```ini
custom-network-config=gie1/yolo-obj.cfg
model-file=gie1/yolo-obj_v3.weights
onnx-file=yolov4_-1_3_608_608_dynamic.onnx
```

Also, your ONNX file isn't supported by this repo. For YOLOv4, you should use only the weights and cfg files.
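Following that advice, a minimal sketch of the corrected model lines in the `[property]` section (paths reuse the ones from the original config; this only removes the conflicting `onnx-file` entry, other keys stay as they were):

```ini
[property]
# YOLOv4 (Darknet): use cfg + weights only; do not mix with onnx-file in one GIE
custom-network-config=gie1/yolo-obj.cfg
model-file=gie1/yolo-obj_v3.weights
model-engine-file=model_b4_gpu0_fp32.engine
```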

https://github.com/marcoslucianops/DeepStream-Yolo?tab=readme-ov-file#basic-usage