marcoslucianops / DeepStream-Yolo-Pose

NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 application for YOLO-Pose models

ERROR: decodebin did not pick NVIDIA decoder plugin #8

Open · avBuffer opened 8 months ago

avBuffer commented 8 months ago

1. Run:

~/work/yolo_deepstream/DeepStream-Yolo-Pose$ ./deepstream -s file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 -c config_infer_primary_yoloV8_pose.txt

2. Error logs:

SOURCE: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
CONFIG_INFER: config_infer_primary_yoloV8_pose.txt
STREAMMUX_BATCH_SIZE: 1
STREAMMUX_WIDTH: 1920
STREAMMUX_HEIGHT: 1080
GPU_ID: 0
PERF_MEASUREMENT_INTERVAL_SEC: 5
JETSON: FALSE

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /home/work/yolo_deepstream/DeepStream-Yolo-Pose/yolov8s-pose.onnx_b1_gpu0_fp32.engine open error
0:00:03.835371860 17386 0x5629562a3600 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() [UID = 1]: deserialize engine from file :/home/work/yolo_deepstream/DeepStream-Yolo-Pose/yolov8s-pose.onnx_b1_gpu0_fp32.engine failed
0:00:03.836187181 17386 0x5629562a3600 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() [UID = 1]: deserialize backend context from engine from file :/home/work/yolo_deepstream/DeepStream-Yolo-Pose/yolov8s-pose.onnx_b1_gpu0_fp32.engine failed, try rebuild
0:00:03.836206038 17386 0x5629562a3600 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
0:01:19.504473333 17386 0x5629562a3600 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: serialize cuda engine to file: /home/work/yolo_deepstream/DeepStream-Yolo-Pose/yolov8s-pose.onnx_b1_gpu0_fp32.engine successfully
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT images   3x640x640
1   OUTPUT kFLOAT output0  56x8400

0:01:19.513500180 17386 0x5629562a3600 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_yoloV8_pose.txt sucessfully

DEBUG: FPS of stream 1: 0.00 (0.00)
ERROR: decodebin did not pick NVIDIA decoder plugin
DEBUG: FPS of stream 1: 0.00 (0.00)
DEBUG: FPS of stream 1: 0.00 (0.00)
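For context on where this message usually comes from: in DeepStream-style applications it is typically printed from the decodebin pad-added callback when the decoder src pad's caps do not carry the "memory:NVMM" feature, i.e. decodebin selected a software decoder (such as avdec_h264) instead of nvv4l2decoder. Below is a minimal sketch of that pattern, assuming this application follows the common DeepStream reference-app convention; the function and variable names are illustrative, not taken from this repo.

```c
/* Sketch of the usual DeepStream-style check (illustrative, not this repo's
 * exact code): the error fires when decodebin hands over a video pad whose
 * caps lack the "memory:NVMM" feature, meaning a software decoder was
 * picked instead of the NVIDIA hardware decoder. */
#include <gst/gst.h>
#include <string.h>

static void
cb_newpad (GstElement *decodebin, GstPad *decoder_src_pad, gpointer user_data)
{
  GstCaps *caps = gst_pad_get_current_caps (decoder_src_pad);
  if (!caps)
    caps = gst_pad_query_caps (decoder_src_pad, NULL);

  const GstStructure *str = gst_caps_get_structure (caps, 0);
  const gchar *name = gst_structure_get_name (str);
  GstCapsFeatures *features = gst_caps_get_features (caps, 0);

  if (!strncmp (name, "video", 5)) {
    if (gst_caps_features_contains (features, "memory:NVMM")) {
      /* Hardware decode path: link the pad into the rest of the pipeline. */
    } else {
      /* decodebin chose a software decoder, so the NVMM feature is missing. */
      g_printerr ("ERROR: decodebin did not pick NVIDIA decoder plugin\n");
    }
  }
  gst_caps_unref (caps);
}
```

In practice this usually means nvv4l2decoder is not visible to GStreamer on the machine; checking `gst-inspect-1.0 nvv4l2decoder` and clearing the registry cache in ~/.cache/gstreamer-1.0 are common first diagnostics.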