marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

How to put two keys to control the inference #419

Open jianyangshi opened 11 months ago

jianyangshi commented 11 months ago

@marcoslucianops Hello! Thanks for your great work with DeepStream! I have a question: I run deepstream-app with a USB camera and it runs inference on the camera feed. I want to add keyboard control: when I press the 'o' key, inference stops, but the camera keeps working and the screen still displays the frames; when I press the 'p' key, inference resumes as usual.

thanks a lot!

marcoslucianops commented 10 months ago

You need to create custom code for it based on deepstream_python_apps or the DeepStream C/C++ samples (/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps). You can use this repo as the pgie in the nvinfer plugin.
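For context, "use this repo as pgie" means pointing the nvinfer element's config file at the custom parser lib built from this repo. A trimmed sketch in the spirit of config_infer_primary_yoloV8.txt (the keys and values below are illustrative; use the config shipped in the repo for your model and DeepStream version):

```
[property]
gpu-id=0
onnx-file=yolov8s.onnx
model-engine-file=model_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
network-mode=2
num-detected-classes=80
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
```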

jianyangshi commented 10 months ago

ok! Thanks!

jianyangshi commented 10 months ago

Which method do you think is easier to modify?

marcoslucianops commented 10 months ago

It depends on the programming language you are familiar with. Python is easier than C/C++ in most cases.

jianyangshi commented 10 months ago

I looked at the Python code; the pipeline is composed of several modules, but I don't know where to add the keyboard handling.
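A minimal sketch of one way to do this in Python, assuming a working DeepStream Python environment: watch stdin on the GLib main loop and raise/restore nvinfer's interval property (which skips inference on batches) when 'o'/'p' are typed. The pipeline string, element names, and whether interval can be changed while the pipeline is PLAYING on your DeepStream version are assumptions to verify; this is not from the repo:

```python
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Hypothetical pipeline: replace with the deepstream_python_apps pipeline you built.
pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video0 ! videoconvert ! nvvideoconvert ! "
    "video/x-raw(memory:NVMM) ! mux.sink_0 nvstreammux name=mux batch-size=1 "
    "width=1280 height=720 live-source=1 ! "
    "nvinfer name=pgie config-file-path=config_infer_primary_yoloV8.txt ! "
    "nvvideoconvert ! nvdsosd ! nveglglessink sync=0"
)
pgie = pipeline.get_by_name("pgie")

def on_key(fd, condition):
    key = sys.stdin.readline().strip()  # type the key, then press Enter
    if key == "o":
        # Skip (almost) every batch: camera and display keep running,
        # but no new detections are produced.
        pgie.set_property("interval", 2147483647)
    elif key == "p":
        # Run inference on every batch again.
        pgie.set_property("interval", 0)
    return True  # keep watching stdin

GLib.io_add_watch(sys.stdin.fileno(), GLib.PRIORITY_DEFAULT, GLib.IO_IN, on_key)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)
```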

jianyangshi commented 10 months ago

I'm sorry to bother you again! What I want to ask is: where in the C++ code is the primary-gie section of the deepstream-app config.txt handled? Where does enable=0 in [primary-gie] take effect? If I want to toggle it with a key press instead of setting enable=0 or enable=1, where should I change the code? Looking forward to your reply.

jianyangshi commented 10 months ago

When I use YOLOv8, I get an error: I cannot build the engine. What should I do?

```
XXXX@ubuntu:/opt/nvidia/deepstream/deepstream/sources/DeepStream-Yolo$ sudo deepstream-app -c deepstream_app_config1.txt

Using winsys: x11
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/model_b1_gpu0_fp16.engine open error
0:00:05.438599896 7867 0x7f4c0022d0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/model_b1_gpu0_fp16.engine failed
0:00:05.465314017 7867 0x7f4c0022d0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/model_b1_gpu0_fp16.engine failed, try rebuild
0:00:05.465456476 7867 0x7f4c0022d0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 233 [Range -> "/0/model.22/Range_output_0"]:
ERROR: [TRT]: ModelImporter.cpp:774: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:775: input: "/0/model.22/Constant_4_output_0" input: "/0/model.22/Cast_output_0" input: "/0/model.22/Constant_5_output_0" output: "/0/model.22/Range_output_0" name: "/0/model.22/Range" op_type: "Range"
ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:3352 In function importRange: [8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"

Could not parse the ONNX model

Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:06.595983444 7867 0x7f4c0022d0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() [UID = 1]: build engine file failed
0:00:06.624681365 7867 0x7f4c0022d0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() [UID = 1]: build backend context failed
0:00:06.624860143 7867 0x7f4c0022d0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() [UID = 1]: generate backend failed, check config file settings
0:00:06.625059528 7867 0x7f4c0022d0 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:06.625386876 7867 0x7f4c0022d0 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Config file path: /opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: : Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie: Config file path: /opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
```

marcoslucianops commented 10 months ago

Export the ONNX model without using --dynamic.
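For anyone hitting the same TensorRT Range/INT32 assertion: it is triggered by the dynamic-batch ONNX on older TensorRT versions. With this repo's exporter (utils/export_yoloV8.py at the time of writing; check the docs for your version), a static-batch export looks roughly like:

```
# run inside your ultralytics checkout, with export_yoloV8.py copied over
# from DeepStream-Yolo/utils; omitting --dynamic gives a static batch size
python3 export_yoloV8.py -w yolov8s.pt
```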

jianyangshi commented 10 months ago

Nice answer!

jianyangshi commented 10 months ago

I have modified the DeepStream C/C++ code (/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps). Then I ran CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo in the terminal. Will that give me the new deepstream-app application, and do I need to rename it to deepstream-app1?

marcoslucianops commented 10 months ago

The command CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo compiles the lib from this repo. To compile your modified code (the deepstream-test applications), you need to use CUDA_VER=10.2 make.
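To make the two build steps concrete (paths taken from this thread; the app directory under sample_apps is whichever sample you modified, and CUDA_VER must match your installed CUDA):

```
# 1. Build the custom YOLO parser lib from this repo
cd /opt/nvidia/deepstream/deepstream/sources/DeepStream-Yolo
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo

# 2. Build the modified sample app in its own source directory
cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-app
CUDA_VER=10.2 make
```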