dusty-nv / ros_deep_learning

Deep learning inference nodes for ROS / ROS2 with support for NVIDIA Jetson and TensorRT

gst-launch-1.0 preview is just a black video - no error messages #139

Open andanders opened 2 months ago

andanders commented 2 months ago

I am unable to use my camera inside the docker container. Testing with gst-launch-1.0 v4l2src ! videoconvert ! autovideosink I get a preview window with just a black screen. The same command runs fine outside the container, so I know the camera works with v4l2src.
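For reference, the same test with the camera's caps pinned explicitly looks like this (the device path and the UYVY 1920x1080@30 mode are assumptions based on the caps listed later in this issue):

```bash
# Same test as above, but with the camera's native caps forced explicitly
# (device path and UYVY 1920x1080@30 mode taken from the caps listed further down)
gst-launch-1.0 -v v4l2src device=/dev/video0 ! \
  'video/x-raw, format=UYVY, width=1920, height=1080, framerate=30/1' ! \
  videoconvert ! autovideosink
```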

To start the container I run

$ docker/run.sh
ARCH:  aarch64
reading L4T version from /etc/nv_tegra_release
L4T BSP Version:  L4T R35.4.1
localuser:root being added to access control list
CONTAINER_IMAGE:  dustynv/jetson-inference:r35.4.1
DATA_VOLUME:      --volume /home/nvidia/jetson-ros-inference/data:/jetson-inference/data --volume /home/nvidia/jetson-ros-inference/python/training/classification/data:/jetson-inference/python/training/classification/data --volume /home/nvidia/jetson-ros-inference/python/training/classification/models:/jetson-inference/python/training/classification/models --volume /home/nvidia/jetson-ros-inference/python/training/detection/ssd/data:/jetson-inference/python/training/detection/ssd/data --volume /home/nvidia/jetson-ros-inference/python/training/detection/ssd/models:/jetson-inference/python/training/detection/ssd/models --volume /home/nvidia/jetson-ros-inference/python/www/recognizer/data:/jetson-inference/python/www/recognizer/data 
V4L2_DEVICES:     --device /dev/video0 
DISPLAY_DEVICE:   -e DISPLAY=:0 -v /tmp/.X11-unix/:/tmp/.X11-unix 

root@nvidia-desktop:/jetson-inference# 

To start the video preview:

$ gst-launch-1.0 v4l2src ! videoconvert ! autovideosink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
$ ^C
handling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 0:00:04.136207413
Setting pipeline to NULL ...
Freeing pipeline ...
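Since the pipeline reports no errors at all, a next step would be to re-run it with more verbose GStreamer logging; GST_DEBUG and -v are standard, though the exact categories below are just my guess at what is useful here:

```bash
# Re-run the same pipeline with raised GStreamer log levels
GST_DEBUG=v4l2src:5,videoconvert:4,*:3 gst-launch-1.0 -v \
  v4l2src device=/dev/video0 ! videoconvert ! autovideosink
```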

System/software info

Linux nvidia-desktop 5.10.120-tegra aarch64
Docker version 27.0.3, build 7d4bcd8
Hardware: Syslogic Orin Nano

Repo pulled with git clone --recursive --depth=1 https://github.com/dusty-nv/jetson-inference. I'm on the master branch, commit e8361ae7f5f3651c4ff46295b193291a93d52735.

v4l2-compliance output: "Succeeded: 44, Failed: 1, Warnings: 0" (same result inside and outside the container).

```bash
v4l2-compliance
SHA: not available, 64 bits

Compliance test for tegra-video device /dev/video0:

Driver Info:
    Driver name      : tegra-video
    Card type        : vi-output, ar0230 30-0043
    Bus info         : platform:tegra-capture-vi:1
    Driver version   : 5.10.120
    Capabilities     : 0x84200001
        Video Capture
        Streaming
        Extended Pix Format
        Device Capabilities
    Device Caps      : 0x04200001
        Video Capture
        Streaming
        Extended Pix Format
Media Driver Info:
    Driver name      : tegra-camrtc-ca
    Model            : NVIDIA Tegra Video Input Device
    Serial           :
    Bus info         :
    Media version    : 5.10.120
    Hardware revision: 0x00000003 (3)
    Driver version   : 5.10.120
Interface Info:
    ID               : 0x0300001d
    Type             : V4L Video
Entity Info:
    ID               : 0x0000001b (27)
    Name             : vi-output, ar0230 30-0043
    Function         : V4L2 I/O
    Pad 0x0100001c   : 0: Sink
        Link 0x02000021: from remote pad 0x1000009 of entity '13e40000.host1x:nvcsi@15a00000-': Data, Enabled

Required ioctls:
    test MC information (see 'Media Driver Info' above): OK
    test VIDIOC_QUERYCAP: OK

Allow for multiple opens:
    test second /dev/video0 open: OK
    test VIDIOC_QUERYCAP: OK
    test VIDIOC_G/S_PRIORITY: OK
    test for unlimited opens: OK

Debug ioctls:
    test VIDIOC_DBG_G/S_REGISTER: OK (Not Supported)
    test VIDIOC_LOG_STATUS: OK

Input ioctls:
    test VIDIOC_G/S_TUNER/ENUM_FREQ_BANDS: OK (Not Supported)
    test VIDIOC_G/S_FREQUENCY: OK (Not Supported)
    test VIDIOC_S_HW_FREQ_SEEK: OK (Not Supported)
    test VIDIOC_ENUMAUDIO: OK (Not Supported)
    test VIDIOC_G/S/ENUMINPUT: OK
    test VIDIOC_G/S_AUDIO: OK (Not Supported)
    Inputs: 1 Audio Inputs: 0 Tuners: 0

Output ioctls:
    test VIDIOC_G/S_MODULATOR: OK (Not Supported)
    test VIDIOC_G/S_FREQUENCY: OK (Not Supported)
    test VIDIOC_ENUMAUDOUT: OK (Not Supported)
    test VIDIOC_G/S/ENUMOUTPUT: OK (Not Supported)
    test VIDIOC_G/S_AUDOUT: OK (Not Supported)
    Outputs: 0 Audio Outputs: 0 Modulators: 0

Input/Output configuration ioctls:
    test VIDIOC_ENUM/G/S/QUERY_STD: OK (Not Supported)
    test VIDIOC_ENUM/G/S/QUERY_DV_TIMINGS: OK (Not Supported)
    test VIDIOC_DV_TIMINGS_CAP: OK (Not Supported)
    test VIDIOC_G/S_EDID: OK (Not Supported)

Control ioctls (Input 0):
    test VIDIOC_QUERY_EXT_CTRL/QUERYMENU: OK
    test VIDIOC_QUERYCTRL: OK
    test VIDIOC_G/S_CTRL: OK
    test VIDIOC_G/S/TRY_EXT_CTRLS: OK
    test VIDIOC_(UN)SUBSCRIBE_EVENT/DQEVENT: OK
    test VIDIOC_G/S_JPEGCOMP: OK (Not Supported)
    Standard Controls: 25 Private Controls: 12

Format ioctls (Input 0):
    test VIDIOC_ENUM_FMT/FRAMESIZES/FRAMEINTERVALS: OK
    test VIDIOC_G/S_PARM: OK
    test VIDIOC_G_FBUF: OK (Not Supported)
    test VIDIOC_G_FMT: OK
    test VIDIOC_TRY_FMT: OK
    test VIDIOC_S_FMT: OK
    test VIDIOC_G_SLICED_VBI_CAP: OK (Not Supported)
    test Cropping: OK (Not Supported)
    test Composing: OK (Not Supported)
    test Scaling: OK (Not Supported)

Codec ioctls (Input 0):
    test VIDIOC_(TRY_)ENCODER_CMD: OK (Not Supported)
    test VIDIOC_G_ENC_INDEX: OK (Not Supported)
    test VIDIOC_(TRY_)DECODER_CMD: OK (Not Supported)

Buffer ioctls (Input 0):
    fail: v4l2-test-buffers.cpp(715): q.create_bufs(node, 1, &fmt) != EINVAL
    test VIDIOC_REQBUFS/CREATE_BUFS/QUERYBUF: FAIL
    test VIDIOC_EXPBUF: OK
    test Requests: OK (Not Supported)

Total for tegra-video device /dev/video0: 45, Succeeded: 44, Failed: 1, Warnings: 0
```
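A display-free capture test with v4l2-ctl may also help separate capture problems from display problems (these are standard v4l2-ctl options; the format values are taken from the caps reported later):

```bash
# Capture 10 frames straight from the driver, no GStreamer or X involved
v4l2-ctl --device /dev/video0 \
         --set-fmt-video=width=1920,height=1080,pixelformat=UYVY \
         --stream-mmap --stream-count=10
```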

Addendum

My actual goal with this issue is to get roslaunch ros_deep_learning video_viewer.ros1.launch working, but I figured the two problems might be related, and gst-launch-1.0 is easier to test with.
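For reference, the launch I'm ultimately trying to get working looks roughly like this (the input/output arguments are my assumption about how the ros_deep_learning launch files are parameterized):

```bash
roslaunch ros_deep_learning video_viewer.ros1.launch \
    input:=v4l2:///dev/video0 output:=display://0
```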

andanders commented 2 months ago

Another test, in case it is relevant: test-video.py fails, with repeated capture timeouts.

/jetson-inference/build/aarch64/bin# python3 test-video.py

I want to highlight two lines from the output below:

failed to find/open file /proc/device-tree/model

[gstreamer] gstEncoder -- hardware encoder not detected, reverting to CPU encoder

[gstreamer] initialized gstreamer, version 1.16.3.0
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video0
[gstreamer] gstCamera -- found v4l2 device: vi-output, ar0230 30-0043
[gstreamer] v4l2-proplist, device.path=(string)/dev/video0, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)tegra-video, v4l2.device.card=(string)"vi-output\,\ ar0230\ 30-0043", v4l2.device.bus_info=(string)platform:tegra-capture-vi:1, v4l2.device.version=(uint)330360, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found 10 caps for v4l2 device /dev/video0
[gstreamer] [0] video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080, framerate=(fraction)30/1;
[gstreamer] [1] video/x-raw, format=(string)UYVY, width=(int)1280, height=(int)960, framerate=(fraction)34/1;
[gstreamer] [2] video/x-raw, format=(string)UYVY, width=(int)1280, height=(int)720, framerate=(fraction)45/1;
[gstreamer] [3] video/x-raw, format=(string)UYVY, width=(int)960, height=(int)540, framerate=(fraction){ 58/1, 30/1 };
[gstreamer] [4] video/x-raw, format=(string)UYVY, width=(int)640, height=(int)480, framerate=(fraction){ 60/1, 45/1 };
[gstreamer] [5] video/x-raw, format=(string)NV16, width=(int)1920, height=(int)1080, framerate=(fraction)30/1;
[gstreamer] [6] video/x-raw, format=(string)NV16, width=(int)1280, height=(int)960, framerate=(fraction)34/1;
[gstreamer] [7] video/x-raw, format=(string)NV16, width=(int)1280, height=(int)720, framerate=(fraction)45/1;
[gstreamer] [8] video/x-raw, format=(string)NV16, width=(int)960, height=(int)540, framerate=(fraction){ 58/1, 30/1 };
[gstreamer] [9] video/x-raw, format=(string)NV16, width=(int)640, height=(int)480, framerate=(fraction){ 60/1, 45/1 };
[gstreamer] gstCamera -- selected device profile:  codec=raw format=uyvy width=1920 height=1080 framerate=30
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video0 do-timestamp=true ! video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080, framerate=30/1 ! appsink name=mysink sync=false
[gstreamer] gstCamera successfully created device v4l2:///dev/video0
[video]  created gstCamera from v4l2:///dev/video0
------------------------------------------------
gstCamera video options:
------------------------------------------------
  -- URI: v4l2:///dev/video0
     - protocol:  v4l2
     - location:  /dev/video0
  -- deviceType: v4l2
  -- ioType:     input
  -- codec:      raw
  -- codecType:  cpu
  -- width:      1920
  -- height:     1080
  -- frameRate:  30
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
------------------------------------------------
failed to find/open file /proc/device-tree/model
[gstreamer] gstEncoder -- detected board 'NVIDIA Orin Nano/NX with Syslogic BRMA3N-11x Carrier'
[gstreamer] gstEncoder -- hardware encoder not detected, reverting to CPU encoder
[gstreamer] gstEncoder -- pipeline launch string:
[gstreamer] appsrc name=mysource is-live=true do-timestamp=true format=3 ! x264enc name=encoder bitrate=2500 speed-preset=ultrafast tune=zerolatency ! video/x-h264 ! h264parse ! qtmux ! filesink location=images/test/test_video.mp4 
[video]  created gstEncoder from file:///jetson-inference/build/aarch64/bin/images/test/test_video.mp4
------------------------------------------------
gstEncoder video options:
------------------------------------------------
  -- URI: file:///jetson-inference/build/aarch64/bin/images/test/test_video.mp4
     - protocol:  file
     - location:  images/test/test_video.mp4
     - extension: mp4
  -- deviceType: file
  -- ioType:     output
  -- codec:      H264
  -- codecType:  cpu
  -- frameRate:  30
  -- bitRate:    2500000
  -- numBuffers: 4
  -- zeroCopy:   true
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution:  2560x1440
[OpenGL] glDisplay -- X window resolution:    2560x1440
[OpenGL] glDisplay -- display device initialized (2560x1440)
[video]  created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
  -- URI: display://0
     - protocol:  display
     - location:  0
  -- deviceType: display
  -- ioType:     output
  -- width:      2560
  -- height:     1440
  -- frameRate:  0
  -- numBuffers: 4
  -- zeroCopy:   true
------------------------------------------------
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
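For what it's worth, the pipeline string that gstCamera builds above can be reproduced outside the library; a rough equivalent with the appsink swapped for fakesink (my substitution, just to see whether buffers flow at all) would be:

```bash
# Same source and caps as gstCamera's pipeline string, but dumping buffers to fakesink
gst-launch-1.0 -v v4l2src device=/dev/video0 do-timestamp=true ! \
  'video/x-raw, format=UYVY, width=1920, height=1080, framerate=30/1' ! \
  fakesink silent=false
```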
dusty-nv commented 2 months ago

@andanders are you able to see the video stream with the nvgstcapture-1.0 tool? Normally I try that first with a MIPI CSI camera. This model has some custom YUYV/NV16 formats that I have not tried before. Also, with video-viewer you want to use csi://0 for a MIPI CSI camera (i.e. nvarguscamerasrc in the GStreamer pipeline, or when using gst-launch), not the V4L2 /dev/video0 device - the V4L2 path does not use the ISP.
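Concretely, that would be something like the following (assuming sensor index 0):

```bash
# MIPI CSI path through Argus/ISP
nvgstcapture-1.0

# equivalent check with jetson-inference
video-viewer csi://0
```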

andanders commented 1 month ago

For the sake of clarity: the camera and computer combo I'm using is an e-con STURDeCAM20 and a Syslogic RS A4NA. They are connected over GMSL through a built-in deserializer, so I don't know whether it strictly counts as a MIPI CSI camera.

The system detects the camera as:

```bash
~$ sudo v4l2-ctl --list-devices
NVIDIA Tegra Video Input Device (platform:tegra-camrtc-ca):
    /dev/media0

vi-output, ar0230 30-0043 (platform:tegra-capture-vi:1):
    /dev/video0
```
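For completeness, the resolutions and frame rates the driver advertises can be dumped the same way; this should match the caps list jetson-inference printed earlier:

```bash
sudo v4l2-ctl --device /dev/video0 --list-formats-ext
```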

Anyhow.

nvgstcapture-1.0 directly on the host machine

The following command opens a black window (no video) and prints a "No cameras available" error:

~$ nvgstcapture-1.0
Encoder null, cannot set bitrate!
Encoder Profile = High
Codec not supported. Falling back to opensrc H264 encoder
Supported resolutions in case of ARGUS Camera
  (2) : 640x480
  (3) : 1280x720
  (4) : 1920x1080
  (5) : 2104x1560
  (6) : 2592x1944
  (7) : 2616x1472
  (8) : 3840x2160
  (9) : 3896x2192
  (10): 4208x3120
  (11): 5632x3168
  (12): 5632x4224

Runtime ARGUS Camera Commands:

  Help : 'h'
  Quit : 'q'
  Set Capture Mode:
      mo:<val>
          (1): image
          (2): video
  Get Capture Mode:
      gmo
  Set sensor orientation:
      so:<val>
          (0): none
          (1): Rotate counter-clockwise 90 degrees
          (2): Rotate 180 degrees
          (3): Rotate clockwise 90 degrees
  Get sensor orientation:
      gso
  Set sensor mode:
      smo:<val> e.g., smo:1
  Get sensor mode:
      gsmo
  Set Whitebalance Mode:
      wb:<val>
          (0): off
          (1): auto
          (2): incandescent
          (3): fluorescent
          (4): warm-fluorescent
          (5): daylight
          (6): cloudy-daylight
          (7): twilight
          (8): shade
          (9): manual
  Get Whitebalance Mode:
      gwb
  Set Saturation (0 to 2):
      st:<val> e.g., st:1.25
  Get Saturation:
      gst
  Set Exposure Compensation (-2 to 2):
      ec:<val> e.g., ec:-2
  Get Exposure Compensation:
      gec
  Set Auto Whitebalance Lock:
      awbl:<val> e.g., awbl:0
  Get Auto Whitebalance Lock:
      awbl
  Set Auto Exposure Lock:
      ael:<val> e.g., ael:0
  Get Auto Exposure Lock:
      gael
  Set TNR Mode:
      tnrm:<val> e.g., tnrm:1
          (0): OFF
          (1): FAST
          (2): HIGH QUALITY
  Get TNR Mode:
      gtnrm
  Set TNR Strength (-1 to 1):
      tnrs:<val> e.g., tnrs:0.5
  Get TNR Strength:
      gtnrs
  Set EE Mode:
      eem:<val> e.g., eem:1
          (0): OFF
          (1): FAST
          (2): HIGH QUALITY
  Get EE Mode:
      geem
  Set EE Strength (-1 to 1):
      ees:<val> e.g., ees:0.5
  Get EE Strength:
      gees
  Set Auto Exposure Anti-Banding (0 to 3):
      aeab:<val> e.g., aeab:2
          (0): OFF
          (1): MODE AUTO
          (2): MODE 50HZ
          (3): MODE 60HZ
  Get Auto Exposure Anti-Banding:
      gaeab
  Set Gain Range:
      gr:<val><space><val> e.g., gr:1 16
  Get Gain Range:
      ggr
  Set Exposure Time Range:
      etr:<val><space><val> e.g., etr:34000 35000
  Get Exposure Time Range:
      getr
  Set ISP Digital Gain Range:
      dgr:<val><space><val> e.g., dgr:2 152
  Get ISP Digital Gain Range:
      gdgr
  Capture: enter 'j' OR
           followed by a timer (e.g., jx5000, capture after 5 seconds) OR
           followed by multishot count (e.g., j:6, capture 6 images)
           timer/multihot values are optional, capture defaults to single shot with timer=0s
  Start Recording : enter '1'
  Stop Recording  : enter '0'
  Video snapshot  : enter '2' (While recording video)
  Get Preview Resolution:
      gpcr
  Get Image Capture Resolution:
      gicr
  Get Video Capture Resolution:
      gvcr

Runtime encoder configuration options:

  Set Encoding Bit-rate(in bytes):
      br:<val> e.g., br:4000000
  Get Encoding Bit-rate(in bytes):
      gbr
  Set Encoding Profile(only for H.264):
      ep:<val> e.g., ep:1
          (0): Baseline
          (1): Main
          (2): High
  Get Encoding Profile(only for H.264):
      gep
  Force IDR Frame on video Encoder(only for H.264):
      Enter 'f' 

bitrate = 4000

Using winsys: x11 
** Message: 11:07:08.428: <main:4734> iterating capture loop ....
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:751 No cameras available

To my surprise, nvgstcapture-1.0 is not included in the container. I'm trying to install it according to these instructions, but without success so far.
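If anyone knows the proper route: my assumption is that nvgstcapture-1.0 ships in the nvidia-l4t-gstreamer package, so inside the container something like this ought to work if the L4T apt repository is reachable:

```bash
# Assumed package name; requires the NVIDIA L4T apt repository to be configured in the container
apt-get update && apt-get install -y nvidia-l4t-gstreamer
```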

video-viewer: running ./video-viewer with csi://0 gives a slightly different result. I get a "No cameras available" error, followed by the same "a timeout occurred" messages.

Full terminal readout (running in Docker, using the specific container matching the OS, which is L4T 35.4.1):

```bash
~$ docker/run.sh -c dustynv/jetson-inference:r35.4.1
ARCH:  aarch64
reading L4T version from /etc/nv_tegra_release
L4T BSP Version:  L4T R35.4.1
[sudo] password for nvidia:
localuser:root being added to access control list
CONTAINER_IMAGE:  dustynv/jetson-inference:r35.4.1
DATA_VOLUME:      --volume /home/nvidia/jetson-inference/data:/jetson-inference/data --volume /home/nvidia/jetson-inference/python/training/classification/data:/jetson-inference/python/training/classification/data --volume /home/nvidia/jetson-inference/python/training/classification/models:/jetson-inference/python/training/classification/models --volume /home/nvidia/jetson-inference/python/training/detection/ssd/data:/jetson-inference/python/training/detection/ssd/data --volume /home/nvidia/jetson-inference/python/training/detection/ssd/models:/jetson-inference/python/training/detection/ssd/models --volume /home/nvidia/jetson-inference/python/www/recognizer/data:/jetson-inference/python/www/recognizer/data
V4L2_DEVICES:     --device /dev/video0
DISPLAY_DEVICE:   -e DISPLAY=:0 -v /tmp/.X11-unix/:/tmp/.X11-unix

root@nvidia-desktop:/jetson-inference# cd build/aarch64/bin
root@nvidia-desktop:/jetson-inference/build/aarch64/bin# ./video-viewer
[gstreamer] initialized gstreamer, version 1.16.3.0
[gstreamer] gstCamera -- attempting to create device csi://0
[gstreamer] gstCamera pipeline string:
[gstreamer] nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully created device csi://0
[video]  created gstCamera from csi://0
------------------------------------------------
gstCamera video options:
------------------------------------------------
  -- URI: csi://0
     - protocol:  csi
     - location:  0
  -- deviceType: csi
  -- ioType:     input
  -- width:      1280
  -- height:     720
  -- frameRate:  30
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: rotate-180
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution:  2560x1440
[OpenGL] glDisplay -- X window resolution:    2560x1440
[OpenGL] glDisplay -- display device initialized (2560x1440)
[video]  created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
  -- URI: display://0
     - protocol:  display
     - location:  0
  -- deviceType: display
  -- ioType:     output
  -- width:      2560
  -- height:     1440
  -- frameRate:  0
  -- numBuffers: 4
  -- zeroCopy:   true
------------------------------------------------
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvarguscamerasrc0
[gstreamer] gstreamer message stream-start ==> pipeline0
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:751 No cameras available
[gstreamer] gstCamera -- end of stream (EOS)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer message warning ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
[gstreamer] gstreamer pipeline0 recieved EOS signal...
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
```
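Things I still plan to check on the host side (not from any documentation, just my own next steps), since "No cameras available" points at the Argus stack rather than at GStreamer itself:

```bash
# Is the Argus camera daemon running on the host?
systemctl status nvargus-daemon

# Did the ar0230 / capture drivers probe cleanly?
sudo dmesg | grep -iE 'ar0230|tegra-capture|nvcsi'
```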
andanders commented 1 month ago

I almost forgot to test with nvarguscamerasrc:

~$ gst-launch-1.0 nvarguscamerasrc ! videoconvert ! autovideosink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
ERROR: from element /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...

(gst-launch-1.0:6516): GLib-CRITICAL **: 14:43:07.565: g_thread_join: assertion 'thread' failed

(gst-launch-1.0:6516): GLib-CRITICAL **: 14:43:07.565: g_thread_join: assertion 'thread' failed

(gst-launch-1.0:6516): GLib-CRITICAL **: 14:43:07.565: g_queue_is_empty: assertion 'queue != NULL' failed

(gst-launch-1.0:6516): GLib-CRITICAL **: 14:43:07.565: g_queue_free: assertion 'queue != NULL' failed

(gst-launch-1.0:6516): GLib-CRITICAL **: 14:43:07.565: g_queue_free: assertion 'queue != NULL' failed
Freeing pipeline ...
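As I understand it, nvarguscamerasrc only outputs NVMM buffers, which videoconvert cannot consume, so the not-negotiated error may just be a pipeline problem on my end; a variant that converts out of NVMM first (resolution is my guess) would be:

```bash
# nvvidconv copies the frames out of NVMM memory before the CPU-side elements
gst-launch-1.0 nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM), format=NV12, width=1280, height=720, framerate=30/1' ! \
  nvvidconv ! videoconvert ! autovideosink
```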
dusty-nv commented 1 month ago

@andanders if nvarguscamerasrc and nvgstcapture aren't working, then I would consult with your camera manufacturer to make sure you have the right drivers or version of JetPack installed to support it.

andanders commented 3 weeks ago

If I modify docker/run.sh by adding --privileged, I can run /build/aarch64/bin/video-viewer /dev/video0 and get a preview inside the container.
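For reference, the effect of the change is roughly this (the variable names are the ones the script prints, so the actual line in docker/run.sh may look different):

```bash
# docker/run.sh -- add --privileged to the docker run invocation
sudo docker run --runtime nvidia -it --rm --network host --privileged \
    $DATA_VOLUME $V4L2_DEVICES $DISPLAY_DEVICE \
    $CONTAINER_IMAGE
```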

E-con systems have responded:

Our camera, the STURDeCAM20, has an on-board ISP and does not use the NVIDIA ISP. Only cameras that use the NVIDIA ISP can use nvgstcapture and nvarguscamerasrc.

Although our cameras do not use the NVIDIA ISP, they can still be streamed through it by using nvv4l2camerasrc. To stream the STURDeCAM20 via the NVIDIA ISP, use the command below:

$ gst-launch-1.0 nvv4l2camerasrc device=/dev/video<video node> ! 'video/x-raw(memory:NVMM), format=(string)UYVY, width=(int)<width>, height=(int)<height>' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420, width=(int)<width>, height=(int)<height>' ! nv3dsink sync=false -v
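Filled in with the values this camera reports earlier in the thread (/dev/video0, UYVY at 1920x1080), that would presumably be:

```bash
gst-launch-1.0 nvv4l2camerasrc device=/dev/video0 ! \
  'video/x-raw(memory:NVMM), format=(string)UYVY, width=(int)1920, height=(int)1080' ! \
  nvvidconv ! \
  'video/x-raw(memory:NVMM), format=(string)I420, width=(int)1920, height=(int)1080' ! \
  nv3dsink sync=false -v
```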

I'm not sure what I'm supposed to do with that information. If running the container as privileged works, then that's my lot.

But this just gives me other problems down the road, because the ros_deep_learning examples fail.