IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

Problem starting stream: YUYV format resolve error #12519

Closed brandon-gamble closed 10 months ago

brandon-gamble commented 11 months ago

Required Info
Camera Model D435i
Firmware Version 5.14.0
Operating System & Version Ubuntu 22.04.3 LTS
Kernel Version (Linux Only) 5.15.0-1043-raspi
Platform Raspberry Pi
SDK Version 2.53.1
Language Python3
Segment Robot

Issue Description

I am using OpenCV to detect an object in the color frame, then use that same pixel location in the depth frame to find the distance to the detected object.
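To illustrate the lookup, here is a minimal sketch (the pixel_distance helper is hypothetical, not my production code); it only assumes the depth frame exposes get_distance(x, y), as a pyrealsense2 depth frame does:

```python
# Hypothetical helper: read the distance at the pixel where OpenCV detected
# the object. Works with any frame-like object exposing get_distance(x, y),
# e.g. a pyrealsense2 depth frame. A zero reading means "no depth data".
def pixel_distance(depth_frame, x, y, width=640, height=480):
    """Return distance in metres at (x, y), or None if depth is missing."""
    x = min(max(int(x), 0), width - 1)   # clamp to frame bounds
    y = min(max(int(y), 0), height - 1)
    dist = depth_frame.get_distance(x, y)
    return dist if dist > 0 else None
```

In the real script the depth frame should first be aligned to the color stream (rs.align(rs.stream.color)) so the detector's pixel coordinates match between the two images.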

The stream-setup code, common to all of the example programs, is as follows:

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

pipeline.start(config)

This code worked robustly when I first started using it. The pipeline would start within 3 seconds every time, and both the color and depth images could be viewed and manipulated.

Then, four days later, the code stopped working. Initially, I could still start a depth stream but no color stream. Then the depth stream started taking up to 10 minutes to start, sometimes timing out. I had made no changes to anything.

I tried using different stream formats for the color channel, such as rgb8, y8, and y16. At first, rgb8 worked perfectly while bgr8 continued to fail to start. However, after running my code a few times, rgb8 stopped working as well.

The error message when attempting to start the color stream is:

Traceback (most recent call last):
  File "/home/brandon/pi_hive/HIVE/HIVE/vision/vision.py", line 187, in <module>
    main()
  File "/home/brandon/pi_hive/HIVE/HIVE/vision/vision.py", line 165, in main
    pipeline.start(config)
RuntimeError:
Failed to resolve the request:
    Format: RGBA8, width: 640, height: 480

Into:
    Formats:
      YUYV

Since the error shows the RGB request being resolved into YUYV, I also tried rs.format.yuyv to avoid the conversion entirely. However, I was met with the same error about failing to resolve into the YUYV format.

I did not change any software or make any updates to the kernel, firmware, or SDK between the time the code worked robustly and now. The only change I have made is to the config.enable_stream lines, to experiment with different formats. Seemingly without reason, the code has become very unstable, working on only about 2% of runs.
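As a stopgap while debugging, I have been considering a retry wrapper around the start call. This is a hypothetical sketch, not a fix; start_fn stands in for a zero-argument callable such as lambda: pipeline.start(config):

```python
import time

# Hypothetical workaround, not a fix: retry a flaky start function a few
# times with a pause between attempts, instead of aborting on the first
# "Failed to resolve the request" RuntimeError.
def start_with_retry(start_fn, attempts=5, delay_s=2.0):
    last_err = None
    for _ in range(attempts):
        try:
            return start_fn()
        except RuntimeError as err:
            last_err = err
            time.sleep(delay_s)
    raise last_err
```

Usage would look like profile = start_with_retry(lambda: pipeline.start(config)), though this only papers over the instability rather than explaining it.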

MartyG-RealSense commented 11 months ago

Hi @brandon-gamble Does the program work if you remove 'config' from the brackets of the pipeline start line, so that the program ignores the stream config lines and applies the camera's default stream configuration instead (848x480 depth and 1280x720 colour)?

brandon-gamble commented 11 months ago

@MartyG-RealSense thank you for your quick response. I tried changing pipeline.start(config) to pipeline.start() and it still gave the same error about failing to resolve the request into the YUYV format. The documentation also mentions that passing 0 to the enable_stream command lets the camera choose a suitable image size and framerate. I tried each independently and then together:

Framerate as "any":

config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 0)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 0)

Image size as "any":

config.enable_stream(rs.stream.depth, 0, 0, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 0, 0, rs.format.bgr8, 30)

Frame rate and image size as "any":

config.enable_stream(rs.stream.depth, 0, 0, rs.format.z16, 0)
config.enable_stream(rs.stream.color, 0, 0, rs.format.bgr8, 0)

None of these worked either.
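For what it's worth, my rough mental model of what those 0 wildcards mean to the resolver is sketched below. This is illustrative only, not librealsense internals; the real resolver also handles format conversions such as YUYV into BGR8:

```python
# Illustrative sketch only, not librealsense internals: match a stream
# request against the profiles a sensor advertises, where 0 (or "any")
# in the request means "accept whatever the sensor offers".
def resolve(request, advertised):
    """request and advertised entries are (format, width, height, fps)."""
    def matches(want, have):
        return want in (0, "any") or want == have
    for profile in advertised:
        if all(matches(w, h) for w, h in zip(request, profile)):
            return profile
    return None  # the "Failed to resolve the request" case
```

In this simplified picture, a request that names a format the sensor does not advertise fails even when the wildcard version would succeed, which matches the behaviour I am seeing.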

MartyG-RealSense commented 10 months ago

Are you able to access the realsense-viewer tool, please? If you are, do depth and colour streaming work correctly in it?

brandon-gamble commented 10 months ago

realsense-viewer (Linux)

I am able to open the viewer, with this loading message:

brandon@brandon-desktop:~$ realsense-viewer
 17/12 14:12:30,592 INFO [281473815416864] (synthetic-stream-gl.cpp:80) Initializing rendering, GLSL=0
 17/12 14:12:30,592 INFO [281473815416864] (synthetic-stream-gl.cpp:89)  0 GPU objects initialized
 17/12 14:12:30,863 INFO [281473815416864] (context.cpp:336) Found 1 RealSense devices (mask 0xff)
 17/12 14:12:30,936 INFO [281473815416864] (rs.cpp:2697) Framebuffer size changed to 1672 x 756
 17/12 14:12:30,938 INFO [281473815416864] (rs.cpp:2697) Window size changed to 1672 x 756
 17/12 14:12:30,938 INFO [281473815416864] (rs.cpp:2697) Scale Factor is now 1
 17/12 14:12:31,368 INFO [281473815416864] (context.cpp:336) Found 1 RealSense devices (mask 0xfe)

However, neither stream will open in the viewer.

Viewer: RGB stream (Linux)

The following output is printed when the toggle-on button is clicked for the RGB stream:

 17/12 14:11:06,292 INFO [281473661378592] (sensor.cpp:1594) Request: RGB8 Color, 
Resolved to: YUYV Color, 
 17/12 14:11:06,338 INFO [281473661378592] (uvc-streamer.cpp:28) endpoint 84 read buffer size: 1844224
 17/12 14:11:06,889 INFO [281472921170112] (metadata-parser.h:355) Frame counter reset
 17/12 14:11:06,889 INFO [281472921170112] (metadata-parser.h:355) Frame counter reset

This output is printed once and then the console waits. It appears to be the same YUYV issue I see when running through Python.

Viewer: Depth Stream (Linux)

The following output is printed when the toggle-on button is clicked for the depth stream:

 17/12 14:12:39,918 INFO [281473815416864] (sensor.cpp:1594) Request: Z16 Depth, 
Resolved to: Z16 Depth, 
 17/12 14:12:40,036 INFO [281473815416864] (uvc-streamer.cpp:28) endpoint 82 read buffer size: 815104

This is printed once, followed by these four lines, which repeat continuously:

 17/12 14:12:40,304 WARNING [281473458041024] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11
 17/12 14:12:40,327 WARNING [281473198059712] (sensor.cpp:406) Frame received with streaming inactive,Depth0, Arrived,0.000000 1702840360327.746582
 17/12 14:12:40,361 WARNING [281473198059712] (sensor.cpp:406) Frame received with streaming inactive,Depth0, Arrived,0.000000 1702840360361.153076
 17/12 14:12:40,393 WARNING [281473198059712] (sensor.cpp:406) Frame received with streaming inactive,Depth0, Arrived,0.000000 1702840360393.759521

It is surprising that the depth stream does not work in the native viewer since it does work using Python (see below).

Example code (Linux)

While the viewer does not work, the following example code does run.

## Found at: https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/python-tutorial-1-depth.py

## License: Apache 2.0. See LICENSE file in root directory.
## Copyright(c) 2015-2017 Intel Corporation. All Rights Reserved.

import pyrealsense2.pyrealsense2 as rs
import time

try:
    # Create a context object. This object owns the handles to all connected realsense devices
    print("initializing pipeline")
    pipeline = rs.pipeline()

    # Configure streams
    print("configuring stream")
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)

    # Start streaming
    print("Starting stream...")
    tic = time.perf_counter()
    pipeline.start(config)
    print("...Stream started")
    toc = time.perf_counter()
    print(f"Stream start required {toc-tic:0.2f} sec")

    while True:
        # This call waits until a new coherent set of frames is available on a device
        # Calls to get_frame_data(...) and get_frame_timestamp(...) on a device will return stable values until wait_for_frames(...) is called
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth: continue

        # Print a simple text-based representation of the image, by breaking it into 10x20 pixel regions and approximating the coverage of pixels within one meter
        coverage = [0]*64
        for y in range(480):
            for x in range(640):
                dist = depth.get_distance(x, y)
                if 0 < dist and dist < 1:
                    coverage[x//10] += 1

            if y % 20 == 19:
                line = ""
                for c in coverage:
                    line += " .:nhBXWW"[c//25]
                coverage = [0]*64
                print(line)
    exit(0)
except Exception as e:
    # Method calls against librealsense objects may throw exceptions
    print("An exception was thrown: %s" % e)
    exit(1)

While it used to start nearly instantly, it now takes 15-30 seconds to get past the pipeline.start(config) command.

realsense-viewer (Windows)

I also have a Windows machine with the realsense viewer installed and the camera works perfectly.

Troubleshooting logic

Since the camera works on Windows, I know it is not a hardware issue of the camera being broken.

Since the camera fails on my Raspberry Pi most of the time, it may be a hardware issue with the Pi itself, but I suspect it is instead a problem with the software, firmware, or my build/install of the RealSense library.

I am not sure of a good next step for isolating where the issue is.

MartyG-RealSense commented 10 months ago

Another Pi user with Ubuntu is currently experiencing the same warnings at the link below.

https://support.intelrealsense.com/hc/en-us/community/posts/24073974292243/comments/24232391919507

brandon-gamble commented 10 months ago

Thank you Marty. I have just read through that thread. I also read this thread, though it is older:

https://support.intelrealsense.com/hc/en-us/community/posts/360048495493--Intel-Realsense-D435-with-RaspberryPi-Best-practice-installation-guide

For my application it would be difficult to use any board other than a Pi and Python, but I can change software parameters.

Do you have a suggested combination of OS, kernel, and SDK that work well with the Pi and the Python wrapper?

MartyG-RealSense commented 10 months ago

Intel's Raspberry Pi installation instructions for Raspberry Pi OS (formerly known as Raspbian) can be used with the Python wrapper. However, the instructions are now outdated, and RealSense cameras will only work with the Buster version of Raspberry Pi OS, not the more modern 'Bullseye' version.

https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_raspbian.md


Another approach is to install the Ubuntu MATE OS on Pi using the instructions in the link below.

https://dev.intelrealsense.com/docs/using-depth-camera-with-raspberry-pi-3

You could then install the Python wrapper from source afterwards, as described at https://github.com/IntelRealSense/librealsense/issues/4188


A third approach is to build librealsense from source code on a Pi with the Ubuntu OS using the libuvc_installation.sh build script at the link below and then build the Python wrapper from source afterwards.

https://github.com/IntelRealSense/librealsense/blob/master/doc/libuvc_installation.md

MartyG-RealSense commented 10 months ago

Hi @brandon-gamble Do you require further assistance with this case, please? Thanks!

MartyG-RealSense commented 10 months ago

Case closed due to no further comments received.

brandon-gamble commented 9 months ago

Hi @MartyG-RealSense, I was away for a month and was not able to respond.

I have tested the first installation method you suggested, using the Buster version of Raspberry Pi OS.

At first this seemed successful, as I can run the realsense-viewer command from terminal and successfully open both depth and rgb streams.

However, something did not work with the python wrapper installation resulting in a ModuleNotFoundError when trying to import pyrealsense2.

I looked at Issue #4188 as you noted above in install method 2, but it was not helpful to me. My build folder did not contain any .so files.

My directory /librealsense/build/wrappers/python contains:

MartyG-RealSense commented 9 months ago

Does the pyrealsense2 import work if, instead of import pyrealsense2 as rs, you use import pyrealsense2.pyrealsense2 as rs?

brandon-gamble commented 9 months ago

No, both import commands give the same ModuleNotFound error.

MartyG-RealSense commented 9 months ago

The librealsense SDK is difficult to get working on Raspberry Pi boards, unfortunately.

A RealSense user running pyrealsense2 on a Raspberry Pi with Buster, who had the same ModuleNotFound error, resolved it by updating their bashrc file: https://github.com/IntelRealSense/librealsense/issues/4375#issuecomment-509016982
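For reference, the usual shape of that bashrc fix is below. The /usr/local/lib path is an assumption based on the default install prefix in the Raspbian instructions; adjust it to wherever your build actually placed the pyrealsense2 .so files.

```shell
# Append to ~/.bashrc so Python can find the pyrealsense2 shared libraries.
# /usr/local/lib is an assumption based on the default install prefix.
export PYTHONPATH=$PYTHONPATH:/usr/local/lib
```

After editing ~/.bashrc, run source ~/.bashrc (or open a new terminal) before retrying the import.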