marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

Error when creating yolov8 engine file. #370

Open kimcheolhee80 opened 1 year ago

kimcheolhee80 commented 1 year ago

I am working on a Jetson AGX Xavier Developer Kit with JetPack 4.6.1 and DeepStream 6.0.1.

I followed the instructions below to use yolov8. https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/docs/YOLOv8.md

yolov8s.onnx was created successfully, and the nvdsinfer_custom_impl_Yolo.so file was also built (CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo).

Then I moved the ONNX file and nvdsinfer_custom_impl_Yolo.so into my app and tried to create the engine file, but an error occurred.

YOLOv7 (https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/docs/YOLOv7.md) works without problems in the same environment.

Here is the log.

@ubuntu:~/aiengine/app/roadsafety_ai_mirror_v101$ ./road_safety -c ./config_robopia/videofile1_v8s.txt

DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test

Opening in BLOCKING MODE
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
~~ CLOG[/dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvmultiobjecttracker/include/modules/NvMultiObjectTracker/NvTrackerParams.hpp, getConfigRoot() @line 54]: [NvTrackerParams::getConfigRoot()] !!![WARNING] Invalid low-level config file caused an exception, but will go ahead with the default config values
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
~~ CLOG[/dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvmultiobjecttracker/include/modules/NvMultiObjectTracker/NvTrackerParams.hpp, getConfigRoot() @line 54]: [NvTrackerParams::getConfigRoot()] !!![WARNING] Invalid low-level config file caused an exception, but will go ahead with the default config values
[NvMultiObjectTracker] Initialized
ERROR: Deserialize engine failed because file path: /home/robopia/aiengine/app/roadsafety_ai_mirror_v101/config_robopia/../../models/v8/model_b1_gpu0_fp32.engine open error
0:00:02.057851442 16810 0x55c648ef80 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() [UID = 1]: deserialize engine from file :/home/robopia/aiengine/app/roadsafety_ai_mirror_v101/config_robopia/../../models/v8/model_b1_gpu0_fp32.engine failed
0:00:02.074183395 16810 0x55c648ef80 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() [UID = 1]: deserialize backend context from engine from file :/home/robopia/aiengine/app/roadsafety_ai_mirror_v101/config_robopia/../../models/v8/model_b1_gpu0_fp32.engine failed, try rebuild
[UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 239 [Range -> "/0/model.22/Range_output_0"]:
ERROR: [TRT]: ModelImporter.cpp:774: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:775: input: "/0/model.22/Constant_8_output_0"
input: "/0/model.22/Cast_output_0"
input: "/0/model.22/Constant_9_output_0"
output: "/0/model.22/Range_output_0"
name: "/0/model.22/Range"
op_type: "Range"

ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:3352 In function importRange:
[8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"

Could not parse the ONNX model

Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
[UID = 1]: build engine file failed
0:00:02.779122328 16810 0x55c648ef80 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:02.779465803 16810 0x55c648ef80 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() [UID = 1]: generate backend failed, check config file settings
0:00:02.779562353 16810 0x55c648ef80 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:02.779588498 16810 0x55

I am learning a lot here. Thank you very much for your hard work.

marcoslucianops commented 1 year ago

Hi, remove the --dynamic flag from the ONNX export command. Your DeepStream version (specifically its TensorRT version) doesn't support INT64 inputs for dynamic-shape operators. Set --batch in the ONNX exporter equal to the batch-size you will use in DeepStream.
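For reference, the fix above amounts to a static-shape export along these lines (a sketch based on docs/YOLOv8.md and the commands quoted later in this thread; the weight file name and -s/--opset values are placeholders, adjust them to your model and DeepStream config):

```shell
# Static-shape export: no --dynamic, and --batch matching the
# batch-size in the DeepStream nvinfer config (here 1).
# Opset 12 is what users in this thread report using on JetPack 4.6.x.
python3 utils/export_yoloV8.py -w yolov8s.pt -s 640 --batch 1 --opset 12
```

The resulting yolov8s.onnx then has fixed INT32-compatible shapes, so the TensorRT Range importer assertion no longer triggers.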

Kanan99 commented 8 months ago

Hello. How did you solve the above-mentioned error? Thanks in advance.

Kanan99 commented 8 months ago

Hi, remove the --dynamic from the ONNX export command. Your DeepStream version (specifically the TensorRT version) doesn't support INT64 weights for dynamic shapes. Set the --batch in the ONNX exporter equal to the batch-size you will use on the DeepStream.

I have a similar issue; however, I'm using DeepStream 5.1, where --dynamic isn't even involved.

marcoslucianops commented 7 months ago

@Kanan99, can you send more details about your issue?

IronmanVsThanos commented 6 months ago

My environment configuration:
deepstream-app version 6.0.1
DeepStreamSDK 6.0.1
CUDA Driver Version: 10.2
CUDA Runtime Version: 10.2
TensorRT Version: 8.2
cuDNN Version: 8.2
libNVWarp360 Version: 2.0.1d3
PyTorch: 1.9.1+cu111
onnxruntime: 1.16.3
onnxsim: 0.4.35
ONNX opset: 12/11

Convert-to-ONNX command: python3 export_yolov8.py -w ./best.pt -s 416 --batch 1

Using winsys: x11 
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/DeepStream-Yolo-master-int8-test/model_b1_gpu0_int8.engine  open error
0:00:02.263745974 14787   0x7f34001f80 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/DeepStream-Yolo-master-int8-test/model_b1_gpu0_int8.engine  failed
0:00:02.283776503 14787   0x7f34001f80 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/DeepStream-Yolo-master-int8-test/model_b1_gpu0_int8.engine  failed, try rebuild
0:00:02.283854905 14787   0x7f34001f80 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: INT8 calibration file not specified/accessible. INT8 calibration can be done through setDynamicRange API in 'NvDsInferCreateNetwork' implementation
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 226 [Range -> "368"]:
ERROR: [TRT]: ModelImporter.cpp:774: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:775: input: "366"
input: "365"
input: "367"
output: "368"
name: "Range_226"
op_type: "Range"

ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:3352 In function importRange:
[8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"

Could not parse the ONNX model

Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:02.892737685 14787   0x7f34001f80 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:02.912226420 14787   0x7f34001f80 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:02.912367800 14787   0x7f34001f80 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:02.912713411 14787   0x7f34001f80 WARN                 nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:02.912762276 14787   0x7f34001f80 WARN                 nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/DeepStream-Yolo-master-int8-test/config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: <main:707>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/DeepStream-Yolo-master-int8-test/config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed