Hello,
I'm trying to run DeepStream-Yolo on the Jetson AGX Orin for real-time inference with the YOLOv5 model. However, my camera (Alvium 1800 U-811c) isn't detected as a V4L2 device (I have all the drivers correctly installed, but it seems this camera is not V4L2-compatible). As a workaround, I created an RTSP server and used it as the source of the deepstream-app pipeline, but it runs at very low FPS and low quality (pixelated).
I also wrote a Python script that reads every frame from the camera with OpenCV, and I wonder if there is a way to build a pipeline that feeds the OpenCV images into DeepStream-Yolo in real time. Could you provide any advice on doing that?
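For reference, this is roughly the direction I've been exploring: pushing the OpenCV frames into a GStreamer `appsrc` pipeline through `cv2.VideoWriter` (built with the GStreamer backend), encoding with the Jetson hardware encoder, and streaming to localhost so deepstream-app can consume it as a source. This is only a sketch, not a working setup; the element chain (`nvvideoconvert`, `nvv4l2h264enc`), the UDP port, and the camera index are my assumptions.

```python
def build_appsrc_pipeline(width, height, fps):
    """Build a GStreamer launch string for cv2.VideoWriter: OpenCV pushes
    raw BGR frames into appsrc, videoconvert prepares NV12, and the Jetson
    hardware encoder (nvv4l2h264enc, an assumption on my part) streams
    H.264 over RTP/UDP to localhost for deepstream-app to pick up."""
    return (
        "appsrc ! video/x-raw,format=BGR,"
        f"width={width},height={height},framerate={fps}/1 "
        "! videoconvert ! video/x-raw,format=NV12 "
        "! nvvideoconvert ! nvv4l2h264enc insert-sps-pps=true "
        "! h264parse ! rtph264pay ! udpsink host=127.0.0.1 port=5600"
    )

def stream_camera(cam_index=0, width=1280, height=720, fps=30):
    # cv2 imported here so the pipeline-string helper stays dependency-free.
    import cv2
    cap = cv2.VideoCapture(cam_index)  # hypothetical index; my Alvium needs its own SDK to grab frames
    writer = cv2.VideoWriter(
        build_appsrc_pipeline(width, height, fps),
        cv2.CAP_GSTREAMER, 0, float(fps), (width, height))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(cv2.resize(frame, (width, height)))
    writer.release()
    cap.release()

if __name__ == "__main__":
    stream_camera()
```

On the deepstream-app side this would presumably be consumed as a UDP/RTP source rather than RTSP, which is what I'm unsure how to configure. Note that `cv2.VideoWriter` with `cv2.CAP_GSTREAMER` only works if OpenCV was built with GStreamer support.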
Thank you in advance.