marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

Hello, I use YOLOv8 on my Orin 16GB device; the converted model automatically shuts down after running for a while #413

Closed. Today-fine closed this issue 11 months ago.

Today-fine commented 11 months ago

My project is based on modifications to the deepstream-test2 sample.

Pipeline flow:

appsrc ! h264parse ! nvv4l2decoder ! nvstreammux ! nvinfer ! nvtracker ! nvvidconv ! nvdsosd ! nvvideoconvert ! video/x-raw(memory:NVMM),format=I420 ! nvv4l2h264enc ! video/x-h264,stream-format=byte-stream ! h264parse ! flvmux ! rtmpsink

  1. dstest2_config.yml

    streammux:
     batch-size: 1
     batched-push-timeout: 40000
     width: 1920
     height: 1440
     live-source: 1
    
    # /root/shared_develop/develop/sqn-iot-dji-spdk/source_object/ai/deep_test2_file/dstest2_config.yml
    
    tracker:
     tracker-width: 640
     tracker-height: 384
     # tracker-width: 1280
     # tracker-height: 736
     gpu-id: 0
     ll-lib-file: ../source_object/ai/deep_test2_file/tracker_file/libnvds_nvmultiobjecttracker.so
     # ll-lib-file: /root/shared_package/ByteTrack/deploy/DeepStream/lib/libByteTracker.so
     # ll-config-file required to set different tracker types
     # ll-config-file: ../ai/deep_test2_file/tracker_file/config_tracker_IOU.yml
     # ll-config-file: ../ai/deep_test2_file/tracker_file/config_tracker_NvSORT.yml
     ll-config-file: ../source_object/ai/deep_test2_file/tracker_file/config_tracker_NvDCF_perf.yml
     # ll-config-file: ../ai/deep_test2_file/tracker_file/config_tracker_NvDCF_accuracy.yml
     # ll-config-file: ../ai/deep_test2_file/tracker_file/config_tracker_NvDeepSORT.yml
     enable-batch-process: 1
    
    # Inference using nvinfer:
    primary-gie:
     config-file-path: dstest2_pgie_config.yml
  2. dstest2_pgie_config.yml

    # Official model
    
    property:
     gpu-id:  0
     net-scale-factor: 0.0039215697906911373
     model-file: Primary_Detector/resnet10.caffemodel
     proto-file: Primary_Detector/resnet10.prototxt
     model-engine-file: Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
     labelfile-path: Primary_Detector/labels.txt
     int8-calib-file: Primary_Detector/cal_trt.bin
     force-implicit-batch-dim: 1
     batch-size: 1
     network-mode: 1
     process-mode: 1
     model-color-format: 0
     num-detected-classes: 4
     interval: 0
     gie-unique-id: 1
     output-blob-names: conv2d_bbox;conv2d_cov/Sigmoid
     #scaling-filter: 0
     #scaling-compute-hw: 0
     cluster-mode: 2
    
    class-attrs-all:
     pre-cluster-threshold: 0.2
     topk: 20
     nms-iou-threshold: 0.5
    
    # Use the yoloV8s model
    
    # property:
    #   gpu-id: 0
    #   net-scale-factor: 0.0039215697906911373
    #   model-color-format: 0
    #   onnx-file: yolov8s.onnx
    #   model-engine-file: ../yolov8s.onnx_b1_gpu0_fp32.engine
    #   #int8-calib-file: calib.table
    #   labelfile-path: ../labels.txt
    #   batch-size: 1
    #   network-mode: 0
    #   num-detected-classes: 80
    #   interval: 0
    #   gie-unique-id: 1
    #   process-mode: 1
    #   network-type: 0
    #   cluster-mode: 2
    #   maintain-aspect-ratio: 1
    #   symmetric-padding: 1
    #   #force-implicit-batch-dim: 1
    #   #workspace-size: 1000
    #   parse-bbox-func-name: NvDsInferParseYolo
    #   #parse-bbox-func-name: NvDsInferParseYoloCuda
    #   custom-lib-path: ../libnvdsinfer_custom_impl_Yolo.so
    #   engine-create-func-name: NvDsInferYoloCudaEngineGet
    
    # class-attrs-all:
    #   nms-iou-threshold: 0.45
    #   pre-cluster-threshold: 0.25
    #   topk: 300
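As a side note, the `net-scale-factor` used in both the ResNet-10 and YOLOv8 sections is simply 1/255, which rescales 8-bit input pixels from [0, 255] into the [0, 1] range the models expect. A quick plain-Python check of the constant:

```python
# net-scale-factor from dstest2_pgie_config.yml: the reciprocal of the
# 8-bit pixel maximum, normalizing inputs from [0, 255] to [0, 1].
config_value = 0.0039215697906911373  # value copied from the config above

exact = 1.0 / 255.0
# The config constant matches 1/255 to roughly 8 significant digits.
assert abs(config_value - exact) < 1e-8
print(f"1/255 = {exact:.19f}")
```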

Run the flow

  1. When the program runs with the official model in dstest2_pgie_config.yml, the device never shuts down during the run; when it runs with the yoloV8s model, the device automatically shuts down after performing inference and pushing the stream for a period of time.
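When a Jetson powers off like this, a common first step after reboot is to scan the previous boot's logs (e.g. `journalctl -b -1` or /var/log/syslog) for thermal or power events. A minimal sketch, assuming typical Jetson kernel message keywords (the exact strings vary by L4T release, so treat the keyword list as illustrative):

```python
# Scan saved log lines for common Jetson shutdown hints.
# The keyword list below is illustrative, not exhaustive.
SHUTDOWN_HINTS = ("under-voltage", "soctherm", "thermal",
                  "throttl", "oom-killer", "shutdown")

def find_shutdown_hints(lines):
    """Return (line_number, line) pairs mentioning a known shutdown cause."""
    hits = []
    for i, line in enumerate(lines, start=1):
        lowered = line.lower()
        if any(hint in lowered for hint in SHUTDOWN_HINTS):
            hits.append((i, line.strip()))
    return hits

# Example with fabricated log lines (real input comes from `journalctl -b -1`):
sample = [
    "kernel: soctherm: OC ALARM 0x00000001",
    "systemd[1]: Started Session 3 of user root.",
]
print(find_shutdown_hints(sample))
```

In this case the board dies before anything is flushed, so the tail of the previous boot's log is often the only evidence left.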

Operating environment

Distribution: Ubuntu 20.04 (Focal)
Python: 3.8.10
CUDA: 11.4.315
cuDNN: 8.6.0.166
TensorRT: 8.5.2.2
VPI: 2.2.7
OpenCV: 4.5.4
Model: NVIDIA Orin NX Developer Kit
Module: NVIDIA Jetson Orin NX (16GB RAM)
JetPack: 5.1.1
DeepStream: 6.2
marcoslucianops commented 11 months ago

Can you send the output from the terminal?

Today-fine commented 11 months ago

Hello, the device shuts down completely while the program is running, so I cannot capture any terminal output.

marcoslucianops commented 11 months ago

It's probably a problem with your power supply.
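A power-delivery problem fits the symptom: the YOLOv8 engine draws considerably more GPU power than the ResNet-10 sample, so a marginal supply holds up under the light model but browns out under the heavy one. One way to confirm is to log `tegrastats` while the pipeline runs and watch the power rails just before the shutdown. A small parser sketch (the rail name `VDD_IN` and the `mW/mW` format are typical of Orin NX on JetPack 5.x, but field names differ between modules, so treat them as assumptions):

```python
import re

# tegrastats prints rails like "VDD_IN 4954mW/4954mW" (current/average).
# Rail names vary by module; VDD_IN is typical for Orin NX carrier boards.
RAIL_RE = re.compile(r"(\w+)\s+(\d+)mW/(\d+)mW")

def parse_power_rails(tegrastats_line):
    """Map rail name -> (current_mW, average_mW) from one tegrastats line."""
    return {name: (int(cur), int(avg))
            for name, cur, avg in RAIL_RE.findall(tegrastats_line)}

# Example line (abbreviated; format assumed from JetPack 5.x tegrastats):
line = "RAM 3618/15656MB ... VDD_IN 8954mW/7210mW VDD_CPU_GPU_CV 3421mW/2890mW"
print(parse_power_rails(line))
```

A steady climb in `VDD_IN` right up to the cutoff, with no thermal warnings in the logs, points at the supply rather than the software.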

Today-fine commented 11 months ago

After investigation, it is indeed a power supply problem.