dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

Can't capture a frame from an rtsp stream by jetson-inference on the platform x86_64 from official docker-image #1719

Open medphisiker opened 1 year ago

medphisiker commented 1 year ago

Hello,

thank you for the great jetson-inference framework. I use jetson.utils to capture the latest frame from an RTSP stream, and it works great on the Jetson Nano. Now I am trying to use it on an x86_64 remote server running Ubuntu 20.04 (in Docker, with nvidia-docker working), with an Intel CPU and an NVIDIA A10 GPU.

I followed the instructions in the docs (link).

I ran these commands:

$ git clone --recursive --depth=1 https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ docker/run.sh

The script successfully identified my platform:

ARCH:  x86_64

Docker pulled this container (link) and ran it.

First I decided to make sure that the RTSP stream was accessible inside the container, and tried to capture a frame from it with OpenCV using this Python script:

import datetime

import cv2

if __name__ == '__main__':
    RTSP_URL = "rtsp://admin:pass@ip-address:554/Streaming/Channels/101"
    cap = cv2.VideoCapture(RTSP_URL)

    if not cap.isOpened():
        print("Cannot open RTSP stream")
        exit(-1)

    ret, frame = cap.read()
    if not ret:
        print("Cannot read a frame from the RTSP stream")
        exit(-1)

    now = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    cv2.imwrite(f"frame_{now}.jpg", frame)

    cap.release()

And indeed I got a frame from the RTSP stream inside the Docker container.

Then I ran another Python script to capture a frame from the RTSP stream with jetson.utils:

import cv2
import jetson.utils

if __name__ == '__main__':
    rtsp_stream = "rtsp://admin:pass@ip-address:554/Streaming/Channels/101"

    # create a jetson.utils.videoSource to capture frames from the RTSP stream
    camera = jetson.utils.videoSource(rtsp_stream)

    # returns the frame as a jetson.utils.cudaImage
    cuda_img = camera.Capture()

    # convert the frame to a numpy array (note: cudaToNumpy returns RGB,
    # while OpenCV expects BGR, so the saved colors may be swapped)
    frame = jetson.utils.cudaToNumpy(cuda_img)

    cv2.imwrite("frame.jpg", frame)

And I got:

[gstreamer] initialized gstreamer, version 1.16.3.0
[gstreamer] gstDecoder -- creating decoder for admin:pass@ip-adress
[gstreamer] gstDecoder -- Could not open resource for reading and writing.
[gstreamer] gstDecoder -- try manually setting the codec with the --input-codec option
[gstreamer] gstDecoder -- failed to create decoder for rtsp://admin:pass@ip-address:554/Streaming/Channels/101
Traceback (most recent call last):
  File "/workspace/test_jetson_utils_frame_capture.py", line 9, in <module>
    camera = jetson.utils.videoSource(rtsp_stream)
Exception: jetson.utils -- failed to create videoSource device

It was not possible to capture the frame.

I tried running video-viewer:

video-viewer rtsp://admin:pass@ip-address:554/Streaming/Channels/101

And I got the same output. The error message advises setting the video codec manually, so I tried this:

video-viewer --input-codec=h264 rtsp://admin:pass@ip-address:554/Streaming/Channels/101

And I got this output:

[gstreamer] initialized gstreamer, version 1.16.3.0
[gstreamer] gstDecoder -- creating decoder for admin:pass@ip-address
[gstreamer] gstDecoder -- Could not open resource for reading and writing.
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] rtspsrc location=rtsp://admin:pass@ip-address:554/Streaming/Channels/101 latency=2000 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! video/x-raw ! appsink name=mysink
[video]  created gstDecoder from rtsp://admin:pass@ip-address:554/Streaming/Channels/101
------------------------------------------------
gstDecoder video options:
------------------------------------------------
  -- URI: rtsp://admin:pass@ip-address:554/Streaming/Channels/101
     - protocol:  rtsp
     - location:  admin:pass@ip-address
     - port:      554
  -- deviceType: ip
  -- ioType:     input
  -- codec:      h264
  -- width:      0
  -- height:     0
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- rtspLatency 2000
------------------------------------------------
[OpenGL] failed to open X11 server connection.
[OpenGL] failed to create X11 Window.
video-viewer:  failed to create output stream
[gstreamer] opening gstDecoder for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> avdec_h264-0
[gstreamer] gstreamer changed state from NULL to READY ==> h264parse0
[gstreamer] gstreamer changed state from NULL to READY ==> rtph264depay0
[gstreamer] gstreamer changed state from NULL to READY ==> queue0
[gstreamer] gstreamer changed state from NULL to READY ==> rtspsrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer changed state from READY to PAUSED ==> avdec_h264-0
[gstreamer] gstreamer changed state from READY to PAUSED ==> h264parse0
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtph264depay0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> queue0
[gstreamer] gstreamer message progress ==> rtspsrc0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtspsrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> avdec_h264-0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> h264parse0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtph264depay0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> queue0
[gstreamer] gstreamer message progress ==> rtspsrc0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtspsrc0
[gstreamer] gstreamer message progress ==> rtspsrc0
[gstreamer] gstreamer rtspsrc0 ERROR Could not open resource for reading and writing.
[gstreamer] gstreamer Debugging info: gstrtspsrc.c(7893): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0:
Failed to connect. (Generic error)
[gstreamer] gstreamer message progress ==> rtspsrc0
[gstreamer] gstreamer message progress ==> rtspsrc0
[gstreamer] gstDecoder -- failed to retrieve next image buffer
video-viewer:  failed to capture video frame
[gstreamer] gstDecoder -- failed to retrieve next image buffer
video-viewer:  failed to capture video frame
[gstreamer] gstDecoder -- failed to retrieve next image buffer
video-viewer:  failed to capture video frame
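(The `[OpenGL] failed to open X11 server connection` lines above are a separate issue: on a headless server video-viewer cannot create a display window. The real failure is the RTSP connection, but if the display error also needs to be avoided, video-viewer accepts an output URI as the second positional argument, so output can be directed to a file instead of a window. A sketch, with the URL and filename as placeholders:)

```shell
# Write decoded frames to a file instead of opening an X11 window
video-viewer --input-codec=h264 rtsp://admin:pass@ip-address:554/Streaming/Channels/101 out.mp4
```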

I googled this error and found this (link).

I tried rebuilding the library with these commands:

cd /jetson-inference/build
cmake -DENABLE_NVMM=OFF ../
make
sudo make install

I successfully rebuilt the library and ran video-viewer again:

video-viewer --input-codec=h264 rtsp://admin:pass@ip-address:554/Streaming/Channels/101

And I got the same error as before.

I would really like to use your cool jetson.utils library to capture frames. Please help me, what else can I try? =)

dusty-nv commented 1 year ago

@medphisiker considering that you probably already know that jetson-inference isn't officially supported on x86, and that you already have cv2.VideoCapture() working (which you could probably use on x86 with acceptable performance) - what I would do is try running a pipeline similar to the one below through gst-launch-1.0:

rtspsrc location=rtsp://admin:pass@ip-address:554/Streaming/Channels/101 latency=2000 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! video/x-raw ! appsink name=mysink

That is the pipeline that jetson-inference was trying to run - adapt it for standalone use (i.e. replace the appsink, etc.) and see if it works.
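(Adapted for standalone use, the pipeline might look like the sketch below: fakesink stands in for the application-side appsink so the stream is decoded but discarded, and the URL/credentials are the placeholders from this thread.)

```shell
# Same pipeline jetson-inference builds, run standalone via gst-launch-1.0;
# fakesink replaces appsink since there is no application consuming frames
gst-launch-1.0 -v rtspsrc location=rtsp://admin:pass@ip-address:554/Streaming/Channels/101 latency=2000 \
  ! queue ! rtph264depay ! h264parse ! avdec_h264 ! video/x-raw ! fakesink
```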

medphisiker commented 1 year ago

rtspsrc location=rtsp://admin:pass@ip-address:554/Streaming/Channels/101 latency=2000 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! video/x-raw ! appsink name=mysink

That is the pipeline that jetson-inference was trying to run - adapt it for standalone (i.e. replace appsink/ect) and see if it works

Yes, the main platform for jetson-inference is NVIDIA Jetson. I really liked jetson-utils, and after seeing the Docker image for x86_64 I immediately decided to try it )

I tried running the pipeline you recommended with gst-launch-1.0:

gst-launch-1.0 rtspsrc location=rtsp://admin:pass@ip-address:554/Streaming/Channels/101 latency=2000 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! video/x-raw ! appsink name=mysink

And I got this output:

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://admin:pass@ip-address:554/Streaming/Channels/101
ERROR: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0: Could not open resource for reading and writing.
Additional debug info:
gstrtspsrc.c(7893): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0:
Failed to connect. (Generic error)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...

For some reason, gst-launch-1.0 cannot access this RTSP stream either.

I also tried running the Python code again, this time specifying the video codec of the RTSP stream:

from jetson_utils import cudaToNumpy, videoSource
import cv2

if __name__ == "__main__":
    rtsp_stream = "rtsp://admin:pass@ip-address:554/Streaming/Channels/101"

    # create a videoSource to capture frames from the RTSP stream,
    # explicitly specifying the video codec
    camera = videoSource(rtsp_stream, argv=["--input-codec=h264"])

    for num in range(20):
        # returns the frame as a jetson_utils.cudaImage
        cuda_img = camera.Capture()

        if cuda_img is None:
            print("Failed to retrieve frame.")
        else:
            # convert the frame to a numpy array for OpenCV
            frame = cudaToNumpy(cuda_img)

            cv2.imwrite(f"frame_{num}.jpg", frame)

and I got this output:

[gstreamer] initialized gstreamer, version 1.16.3.0
[gstreamer] gstDecoder -- creating decoder for admin:pass@ip-address
[gstreamer] gstDecoder -- Could not open resource for reading and writing.
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] rtspsrc location=rtsp://admin:pass@ip-address:554/Streaming/Channels/101 latency=2000 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! video/x-raw ! appsink name=mysink
[video]  created gstDecoder from rtsp://admin:pass@ip-address:554/Streaming/Channels/101
------------------------------------------------
gstDecoder video options:
------------------------------------------------
  -- URI: rtsp://admin:pass@ip-address:554/Streaming/Channels/101
     - protocol:  rtsp
     - location:  admin:pass@ip-address
     - port:      554
  -- deviceType: ip
  -- ioType:     input
  -- codec:      h264
  -- width:      0
  -- height:     0
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- rtspLatency 2000
------------------------------------------------
[gstreamer] opening gstDecoder for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> avdec_h264-0
[gstreamer] gstreamer changed state from NULL to READY ==> h264parse0
[gstreamer] gstreamer changed state from NULL to READY ==> rtph264depay0
[gstreamer] gstreamer changed state from NULL to READY ==> queue0
[gstreamer] gstreamer changed state from NULL to READY ==> rtspsrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer changed state from READY to PAUSED ==> avdec_h264-0
[gstreamer] gstreamer changed state from READY to PAUSED ==> h264parse0
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtph264depay0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> queue0
[gstreamer] gstreamer message progress ==> rtspsrc0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtspsrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> avdec_h264-0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> h264parse0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtph264depay0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> queue0
[gstreamer] gstreamer message progress ==> rtspsrc0
[gstreamer] gstreamer message progress ==> rtspsrc0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtspsrc0
[gstreamer] gstreamer message progress ==> rtspsrc0
[gstreamer] gstreamer message progress ==> rtspsrc0
[gstreamer] gstreamer rtspsrc0 ERROR Could not open resource for reading and writing.
[gstreamer] gstreamer Debugging info: gstrtspsrc.c(7893): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0:
Failed to connect. (Generic error)
[gstreamer] gstreamer message progress ==> rtspsrc0
[gstreamer] gstreamer message progress ==> rtspsrc0
[gstreamer] gstDecoder -- failed to retrieve next image buffer
Traceback (most recent call last):
  File "/jetson-inference/test.py", line 13, in <module>
    cuda_img = camera.Capture()
Exception: jetson.utils -- videoSource failed to capture image
[gstreamer] gstDecoder -- stopping pipeline, transitioning to GST_STATE_NULL
[gstreamer] gstDecoder -- pipeline stopped

It seems as if everything is almost working, but for some reason GStreamer cannot access this RTSP stream from inside the Docker container. Thank you for the advice, I will experiment further, and if I figure out how to solve it, I will write here.
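One way to narrow this down is to check whether the RTSP port is even reachable over plain TCP from inside the container, separating a network/firewall problem from a GStreamer problem. A minimal sketch (the helper names and the placeholder URL are illustrative, not from jetson-utils):

```python
import socket
from urllib.parse import urlparse


def rtsp_host_port(url, default_port=554):
    """Extract (host, port) from an RTSP URL; 554 is the RTSP default."""
    parsed = urlparse(url)
    return parsed.hostname, parsed.port or default_port


def is_reachable(host, port, timeout=5.0):
    """Attempt a plain TCP connection to the RTSP port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    url = "rtsp://admin:pass@ip-address:554/Streaming/Channels/101"
    host, port = rtsp_host_port(url)
    print(f"{host}:{port} reachable: {is_reachable(host, port)}")
```

If the TCP connection succeeds but rtspsrc still fails, one thing worth trying is forcing TCP transport with the rtspsrc `protocols=tcp` property in the gst-launch pipeline, since OpenCV's FFmpeg backend can fall back to RTSP-over-TCP while rtspsrc may negotiate UDP that a firewall or Docker network silently drops.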