eljinwei opened this issue 1 week ago
deepstream_app_config.txt:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=3
uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV5.txt

[tests]
file-loop=0
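The sections above make up the deepstream-app configuration. Assuming they are saved as deepstream_app_config.txt (the file name used in the command reported later in this thread), the pipeline is launched with:

# Launch the DeepStream reference app with the configuration above
# (file name taken from the command reported later in this thread)
deepstream-app -c deepstream_app_config.txt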
config_infer_primary_yoloV5.txt:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=last.pt.onnx
model-engine-file=model_b1_gpu0_fp32.engine
labelfile-path=labels_wheat.txt
batch-size=1
network-mode=0
num-detected-classes=2
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
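One note on the custom-lib-path entry: the libnvdsinfer_custom_impl_Yolo.so it points to has to be compiled first from the nvdsinfer_custom_impl_Yolo folder of the repo. A minimal sketch, assuming the repo's Makefile takes the CUDA version through a CUDA_VER variable (CUDA 10.2 matches the JetPack 4.6.1 setup listed below; the repo path is taken from the logs further down):

# Build the custom bbox parser / engine-creation library referenced above
# (assumption: the Makefile expects the CUDA version in CUDA_VER)
cd /home/jetson/DeepStream-Yolo-new   # repo path as it appears in the logs below
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo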
Ubuntu 18.04, CUDA 10.2, cuDNN 8.2.1.32, TensorRT 8.2.1.8, OpenCV 4.1.1, JetPack 4.6.1, DeepStream 6.0
@marcoslucianops can you help me? Thanks!
You need to export the ONNX file without the --dynamic flag, and you need to set --opset 12 or lower for the old Jetson Nano board.
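As a concrete example, that advice would look roughly like this (a sketch only: the script name export_yoloV5.py, the -w flag, and the weights file last.pt are assumptions based on the repo's export utility and the onnx-file entry above; check the script's --help for the exact options):

# Re-export the ONNX model for the old Jetson Nano:
#  - omit --dynamic (static input shape)
#  - use --opset 12 or lower
# Script name, -w flag and weights file name are assumptions, not from the thread.
python3 export_yoloV5.py -w last.pt --opset 12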
Hi Marcos,
I am a newcomer and am learning to deploy my model using your open-source project DeepStream-Yolo. I would be grateful for your help! I have read the code in nvdsinfer_custom_impl_Yolo, and nvdsparsebbox_Yolo.cpp parses the output of the YOLO model to generate bounding boxes and text, but it does not involve drawing the bounding boxes onto the original video. I would like to understand how this pipeline is organized, but I have not found the relevant code in the project files. Perhaps you could provide some guidance? The official examples, such as deepstream-test1, have a deepstream_test1_app.c file where you can see how the pipeline is designed, but in the DeepStream-Yolo project I have not found the corresponding code, nor have I found where nvosd is used afterwards. I would appreciate it if you could clarify my doubts. Thanks!
What I would like to do is save the video frames with the detection boxes drawn on them after detection. My idea is to locate the OSD pad and add a callback function there (I am not sure whether this approach is feasible). However, I am currently facing the issues mentioned above: the code in the project is all about building the TensorRT engine and parsing the output layers, and I am not sure where the parsed data goes after that, so I am writing to you for assistance. (I am currently able to run the engine using the DeepStream-Yolo project and run detection from the camera.) I wonder if you could give me some guidance, which I would greatly appreciate!
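For reference, the probe idea described above follows the same pattern as deepstream_test1_app.c. A minimal sketch in C, assuming a GstElement *nvosd that already exists in the pipeline; it only reads the detection metadata on the OSD sink pad, while saving the rendered frames themselves would additionally require probing the OSD src pad and mapping the NvBufSurface, which is omitted here:

#include <gst/gst.h>
#include "gstnvdsmeta.h"

static GstPadProbeReturn
osd_sink_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  /* Batch metadata attached upstream; at this point it already carries the
   * bounding boxes produced by the custom parser (NvDsInferParseYolo),
   * before nvdsosd draws them onto the frame. */
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL;
         l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      g_print ("frame %d: %s (%.2f) x=%.0f y=%.0f w=%.0f h=%.0f\n",
               frame_meta->frame_num, obj_meta->obj_label, obj_meta->confidence,
               obj_meta->rect_params.left, obj_meta->rect_params.top,
               obj_meta->rect_params.width, obj_meta->rect_params.height);
    }
  }
  return GST_PAD_PROBE_OK;
}

/* Call once after the pipeline is built; nvosd is the nvdsosd element. */
static void
attach_osd_probe (GstElement *nvosd)
{
  GstPad *osd_sink_pad = gst_element_get_static_pad (nvosd, "sink");
  gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
                     osd_sink_pad_probe, NULL, NULL);
  gst_object_unref (osd_sink_pad);
}

Note that the pipeline construction itself is not part of this repository's sources: when the model is run through deepstream-app, the pipeline is built by the stock reference application shipped with the DeepStream SDK and driven by deepstream_app_config.txt, so a probe like this would go into a copy of those sources or into a small custom application.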
I am trying to run YOLOv5 on a Jetson Nano. I have converted the yolov5_last.pt file into ONNX format. Then I updated config_infer_primary_yoloV5.txt with the following settings:
But when I run it using deepstream-app -c deepstream_app_config.txt, it gives me the following error:

Using winsys: x11
ERROR: Deserialize engine failed because file path: /home/jetson/DeepStream-Yolo-new/model_b1_gpu0_fp32.engine open error
0:00:02.752670502 10643 0x7f140022a0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() [UID = 1]: deserialize engine from file :/home/jetson/DeepStream-Yolo-new/model_b1_gpu0_fp32.engine failed
0:00:02.753780996 10643 0x7f140022a0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() [UID = 1]: deserialize backend context from engine from file :/home/jetson/DeepStream-Yolo-new/model_b1_gpu0_fp32.engine failed, try rebuild
0:00:02.753828601 10643 0x7f140022a0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: [TRT]: [graph.cpp::computeInputExecutionUses::549] Error Code 9: Internal Error (/0/model.11/Floor_1: IUnaryLayer cannot be used to compute a shape tensor)
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 145 [Resize -> "/0/model.11/Resize_output_0"]:
ERROR: [TRT]: ModelImporter.cpp:774: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:775: input: "/0/model.10/act/Mul_output_0"
input: ""
input: ""
input: "/0/model.11/Concat_1_output_0"
output: "/0/model.11/Resize_output_0"
name: "/0/model.11/Resize"
op_type: "Resize"
attribute {
name: "coordinate_transformation_mode"
s: "asymmetric"
type: STRING
}
attribute {
name: "cubic_coeff_a"
f: -0.75
type: FLOAT
}
attribute {
name: "mode"
s: "nearest"
type: STRING
}
attribute {
name: "nearest_mode"
s: "floor"
type: STRING
}
ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - /0/model.11/Resize
[graph.cpp::computeInputExecutionUses::549] Error Code 9: Internal Error (/0/model.11/Floor_1: IUnaryLayer cannot be used to compute a shape tensor)
Could not parse the ONNX file
Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:03.694147612 10643 0x7f140022a0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() [UID = 1]: build engine file failed
0:00:03.695261701 10643 0x7f140022a0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() [UID = 1]: build backend context failed
0:00:03.695322327 10643 0x7f140022a0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() [UID = 1]: generate backend failed, check config file settings
0:00:03.695384360 10643 0x7f140022a0 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:03.695414204 10643 0x7f140022a0 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Config file path: /home/jetson/DeepStream-Yolo-new/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: <main:707>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /home/jetson/DeepStream-Yolo-new/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
Could anyone suggest what the problem is? I am following the instructions exactly but still getting this error. Thanks!