marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.1 / 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

Yolov5: ERROR: Failed to get cuda engine from custom library API #547

Open flmello opened 5 months ago

flmello commented 5 months ago

• Hardware Platform (Jetson / GPU): Jetson Nano Devkit
• DeepStream Version: 6.0.0
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 8.2.1.8

I have a script running on a Jetson Xavier AGX with DS 6.3.0, JetPack 5.1, and TensorRT 8.5.2.2. But when I transfer this script to a Nano Devkit (specs above), I get an error during the ONNX model conversion:

```
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
ERROR: Deserialize engine failed because file path: /home/ubuntu/EdgeServer/model_b4_gpu0_fp32.engine open error
0:00:05.945769334  8805  0x2fd0a8f0 WARN  nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() [UID = 1]: deserialize engine from file :/home/ubuntu/EdgeServer/model_b4_gpu0_fp32.engine failed
0:00:05.946900129  8805  0x2fd0a8f0 WARN  nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() [UID = 1]: deserialize backend context from engine from file :/home/ubuntu/EdgeServer/model_b4_gpu0_fp32.engine failed, try rebuild
0:00:05.946948932  8805  0x2fd0a8f0 INFO  nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 217 [Range -> "349"]:
ERROR: [TRT]: ModelImporter.cpp:774: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:775: input: "347" input: "346" input: "348" output: "349" name: "Range_217" op_type: "Range"
ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:3352 In function importRange: [8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"
Could not parse the ONNX model
Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:07.080241677  8805  0x2fd0a8f0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() [UID = 1]: build engine file failed
0:00:07.081381742  8805  0x2fd0a8f0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() [UID = 1]: build backend context failed
0:00:07.081462525  8805  0x2fd0a8f0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() [UID = 1]: generate backend failed, check config file settings
0:00:07.081547838  8805  0x2fd0a8f0 WARN  nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:07.081580286  8805  0x2fd0a8f0 WARN  nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Config file path: /home/ubuntu/EdgeServer/config/dstest4_pgie_nvinfer_yolov5_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[NvMultiObjectTracker] De-initialized
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-gpu-inference-engine: Config file path: /home/ubuntu/EdgeServer/config/dstest4_pgie_nvinfer_yolov5_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app

--- 0.013864755630493164 seconds ---
```

Note that I successfully compiled nvdsinfer_custom_impl_Yolo with the correct CUDA version:

```
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
```

The path to `libnvdsinfer_custom_impl_Yolo.so` is correct in my config file `dstest4_pgie_nvinfer_yolov5_config.txt`.
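For reference, the relevant keys in my nvinfer config follow the usual DeepStream-Yolo pattern. The values below are illustrative (file names and paths are placeholders, not copied from my actual config); the key names are the ones this repo's docs describe:

```ini
[property]
# ONNX model to parse; the engine file is rebuilt from it if missing
onnx-file=yolov5s.onnx
model-engine-file=model_b4_gpu0_fp32.engine
batch-size=4
# 0 = FP32
network-mode=0
# custom YOLO parser/engine builder from this repo
custom-lib-path=/path/to/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
parse-bbox-func-name=NvDsInferParseYolo
engine-create-func-name=NvDsInferYoloCudaEngineGet
```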

Something tricky is going on here, but I couldn't figure it out. Can anyone give me a tip about what is happening?

marcoslucianops commented 2 days ago

Export the ONNX model without `--dynamic`.
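Background: the assertion in the log comes from TensorRT 8.2's ONNX parser, which only accepts INT32 inputs to a `Range` node when they are dynamic; exporting with dynamic axes produces exactly that pattern, while a static-shape export avoids it. A command sketch, assuming this repo's YOLOv5 export utility and hypothetical weights/batch values (adjust the script path, weights file, batch size, and opset to your setup):

```sh
# Re-export with a fixed batch size instead of dynamic axes.
# Note: no --dynamic flag, so the ONNX graph gets static shapes
# that the older TensorRT 8.2 parser on JetPack 4.6 can handle.
python3 export_yoloV5.py -w yolov5s.pt --batch 4 --opset 12
```

After re-exporting, delete the stale `model_b4_gpu0_fp32.engine` (if present) so DeepStream rebuilds the engine from the new ONNX file on the next run.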