Hardware Platform (Jetson / GPU) = Jetson Nano
DeepStream Version = 6.0.1
JetPack Version (valid for Jetson only) = 4.6.4
TensorRT Version = 8.2.1.8-1+cuda10.2
Python version = 3.6.9
I have followed this link and performed all the steps: https://github.com/marcoslucianops/DeepStream-Yolo-Seg/blob/master/docs/YOLOv8_Seg.md
I converted my custom YOLOv8-seg model to ONNX using:
python3 export_yoloV8.py -w yolov8s.pt --simplify
While running this command:

deepstream-app -c deepstream_app_config.txt

I am getting this output:
#####################################################################################################
root@ubuntu:/opt/nvidia/DeepStream-Yolo-Seg# deepstream-app -c deepstream_app_config.txt
Using winsys: x11
ERROR: Deserialize engine failed because file path: /opt/nvidia/DeepStream-Yolo-Seg/S_Kerbhit_YoloV8.onnx_b1_gpu0_fp32.engine open error
0:00:06.595090589 6736 0x7f1c001f80 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() [UID = 1]: deserialize engine from file :/opt/nvidia/DeepStream-Yolo-Seg/S_Kerbhit_YoloV8.onnx_b1_gpu0_fp32.engine failed
0:00:06.596617444 6736 0x7f1c001f80 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/DeepStream-Yolo-Seg/S_Kerbhit_YoloV8.onnx_b1_gpu0_fp32.engine failed, try rebuild
0:00:06.596735936 6736 0x7f1c001f80 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 290 [RoiAlign -> "/1/RoiAlign_output_0"]:
ERROR: [TRT]: ModelImporter.cpp:774: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:775: input: "/0/model.22/proto/cv3/act/Mul_output_0"
input: "/1/Gather_5_output_0"
input: "/1/Gather_1_output_0"
output: "/1/RoiAlign_output_0"
name: "/1/RoiAlign"
op_type: "RoiAlign"
attribute {
name: "coordinate_transformation_mode"
s: "half_pixel"
type: STRING
}
attribute {
name: "mode"
s: "avg"
type: STRING
}
attribute {
name: "output_height"
i: 160
type: INT
}
attribute {
name: "output_width"
i: 160
type: INT
}
attribute {
name: "sampling_ratio"
i: 0
type: INT
}
attribute {
name: "spatial_scale"
f: 0.25
type: FLOAT
}
ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:4870 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
ERROR: Failed to parse onnx file
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.
0:00:08.599046925 6736 0x7f1c001f80 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() [UID = 1]: build engine file failed
0:00:08.600637322 6736 0x7f1c001f80 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() [UID = 1]: build backend context failed
0:00:08.600739668 6736 0x7f1c001f80 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() [UID = 1]: generate backend failed, check config file settings
0:00:08.601245199 6736 0x7f1c001f80 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:08.601298586 6736 0x7f1c001f80 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Config file path: /opt/nvidia/DeepStream-Yolo-Seg/config_infer_primary_yoloV5_seg.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: : Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /opt/nvidia/DeepStream-Yolo-Seg/config_infer_primary_yoloV5_seg.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
#####################################################################################################
Please find attached the onnx graph.
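From the log, the build fails while the TensorRT ONNX parser is importing node 290, a RoiAlign op, and falls back to a plugin lookup that finds nothing ("Plugin not found"). As a quick way to pinpoint the offending node in output like this, here is a minimal, stdlib-only sketch; the log excerpt is copied verbatim from the output above, and the regex is just an illustration of the parser's "While parsing node number N [OpType -> ...]" message format:

```python
import re

# Excerpt copied from the deepstream-app output above.
log = '''\
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 290 [RoiAlign -> "/1/RoiAlign_output_0"]:
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:4870 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
'''

# The parser reports failures as: While parsing node number N [OpType -> "output_name"]:
m = re.search(r'While parsing node number (\d+) \[(\w+) -> "([^"]+)"\]', log)
if m:
    number, op_type, output = m.groups()
    print(f"node {number}: op_type={op_type}, output={output}")
# -> node 290: op_type=RoiAlign, output=/1/RoiAlign_output_0
```

This confirms the op that has no matching TensorRT builtin or registered plugin on this setup.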