Open sperezs95 opened 11 months ago
Hi, I will have a Jetson Nano to test on 18/08. Is it possible for you to wait?
@marcoslucianops Yes, I can wait; any help is welcome. Do you have a Jetson Nano or a Jetson Orin Nano?
UPDATE: YOLOv2 works with the Jetson Xavier NX Developer Kit on JetPack 4.6 [L4T 32.6.1], inside the docker container nvcr.io/nvidia/deepstream-l4t:6.0-triton.
The problem is the number of threads in the CUDA kernel on the Jetson Nano. I have the old Jetson Nano; I need to check the correct value for it and how to select it based on the board.
Hello @marcoslucianops, first of all I would like to thank you for the excellent work in this repository; it has been very helpful.
My environment is the following:
I'm working inside NVIDIA's deepstream docker container: nvcr.io/nvidia/deepstream-l4t:6.0.1-triton
I am working on a custom LPR pipeline. First, I followed your instructions to deploy YOLOv7 with DeepStream, and everything works correctly: I get the RTSP video output with detections and track IDs in my DeepStream app using the Python bindings.
Now I am trying to add an SGIE. I have a custom YOLOv2 Darknet model that detects vehicle license plates, and as a first test I am running it as a PGIE (replacing my YOLOv7 config). I based my config on your "config_infer_primary_yoloV2.txt": I set the paths to my .cfg and .weights files, changed the batch size, and set the path to the compiled "nvdsinfer_custom_impl_Yolo" library.
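For reference, once the model works as a PGIE, turning it into an SGIE is mostly a matter of a few `nvinfer` properties. A sketch of the relevant keys (values illustrative; IDs depend on your pipeline):

```
[property]
# run on objects produced by another GIE instead of on full frames
process-mode=2
# this engine's own id, and the id of the PGIE whose objects it consumes
gie-unique-id=2
operate-on-gie-id=1
# optionally restrict to specific PGIE classes, e.g. vehicles
operate-on-class-ids=0
```

The rest of the file (paths to .cfg/.weights, custom-lib-path, batch size) stays the same as in the PGIE test.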
When launching my app.py I get the following error:
Additionally, I have tried running the PGIE with the original YOLOv2 model and I get the same error.
This is my configuration file:
I would really appreciate any help with this problem, since I need to get that YOLOv2 model working.
Regards, and I'll be awaiting your reply.