jianyangshi opened 11 months ago
You need to create custom code for it, based on deepstream_python_apps or the DeepStream C/C++ samples (/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps). You can use this repo as the pgie in the nvinfer plugin.
ok! Thanks!
Which method do you think is easier to modify?
It depends on the programming language you are familiar with. In most cases, Python is easier than C/C++.
I looked at the Python code; the pipeline is composed of several modules, but I don't know where to add the keyboard-key handling.
I'm sorry to bother you again! What I want to ask is: where in the C++ code is the primary-gie that connects to the deepstream-app config.txt? Where is enable=0 from the primary-gie section handled? If I want to toggle it with a key press instead of setting enable=0 or enable=1, where should I change the code? Looking forward to your reply.
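For context, [primary-gie] is a group in the deepstream-app config file (the config.txt passed with -c). The enable key there only turns the primary inference engine on or off at startup; it is not a runtime toggle. A minimal fragment (the config-file name is illustrative and should point at the nvinfer config from this repo):

```ini
[primary-gie]
enable=1
# illustrative file name; use the nvinfer config shipped with this repo
config-file=config_infer_primary_yoloV8.txt
```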
When using YOLOv8, I get an error: I cannot build the v8 engine. What can I do?

XXXX@ubuntu:/opt/nvidia/deepstream/deepstream/sources/DeepStream-Yolo$ sudo deepstream-app -c deepstream_app_config1.txt
Using winsys: x11
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/model_b1_gpu0_fp16.engine open error
0:00:05.438599896 7867 0x7f4c0022d0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:
ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:3352 In function importRange:
[8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"
Could not parse the ONNX model
Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:06.595983444 7867 0x7f4c0022d0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:
Export the ONNX model without using --dynamic.
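A sketch of the fix, assuming this repo's utils/export_yoloV8.py export script and an example yolov8s.pt weights file (adjust the names to your setup, and note that the ultralytics package must be installed):

```shell
# From the DeepStream-Yolo repo root: re-export the .pt weights to ONNX
# WITHOUT the --dynamic flag that triggers the importRange assertion.
python3 utils/export_yoloV8.py -w yolov8s.pt

# Remove the stale engine so deepstream-app rebuilds it on the next run.
rm -f model_b1_gpu0_fp16.engine
```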
Nice answer!
I have modified the DeepStream C/C++ code (/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps) and then ran CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo in the terminal. Will that give me the new deepstream-app application, and do I need to rename it to deepstream-app1?
The command CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo is to compile the lib from this repo. To compile the code (deepstream-test applications), you need to use CUDA_VER=10.2 make.
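Concretely, the two builds happen in different directories; a sketch, with CUDA_VER=10.2 and the sample-app path taken from this thread (adjust both to your installation):

```shell
# 1) Build the custom YOLO parser library, run from the DeepStream-Yolo repo root
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo

# 2) Build the modified sample application, run from its own source directory
cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-app
CUDA_VER=10.2 make
```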
@marcoslucianops Hello! Thanks for your great work with DeepStream! I have a question: I run deepstream-app with a USB camera and it runs inference on the camera feed. I want to press the 'o' key to turn the inference off while the camera keeps working and the screen keeps displaying frames, and then press 'p' to turn the inference back on as usual.
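For reference, the 'o'/'p' behaviour being asked about boils down to a small toggle state machine. In a deepstream_python_apps pipeline you would read key presses (for example with GLib.io_add_watch on stdin) and consult this state inside a GStreamer pad probe to skip processing or drawing detections; the InferenceToggle class below is purely illustrative and not part of the DeepStream API:

```python
# Minimal sketch of the 'o'/'p' inference toggle (illustrative only).
# In a real pipeline this state would be checked inside a pad probe;
# only the key-handling logic is shown here.

class InferenceToggle:
    """Tracks whether inference results should be processed/drawn."""

    def __init__(self):
        self.enabled = True  # inference is on by default

    def handle_key(self, key: str) -> bool:
        """'o' turns inference off, 'p' turns it back on.

        Any other key leaves the state unchanged. Returns the
        current state so callers can react to it.
        """
        if key == "o":
            self.enabled = False
        elif key == "p":
            self.enabled = True
        return self.enabled


if __name__ == "__main__":
    toggle = InferenceToggle()
    print(toggle.handle_key("o"))  # False: inference disabled
    print(toggle.handle_key("x"))  # False: unrelated key, state unchanged
    print(toggle.handle_key("p"))  # True: inference re-enabled
```

The camera and display keep running either way; only whether detections are acted on changes, which matches the behaviour described above.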
Thanks a lot!