IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

No device connected error / config.resolve(pipeline_wrapper) #11368

Closed simon-TUM closed 1 year ago

simon-TUM commented 1 year ago
Required Info
Camera Model D435I
Firmware Version 05.14.00.00
Operating System & Version Ubuntu 20.04
Kernel Version (Linux Only) 5.9.1-rt20
Platform PC
SDK Version LibRealSense v2.50.0
Language python
Segment Robot

Issue Description

Hi, I'm having some trouble. The following lines give me an error when run:

pipeline = rs.pipeline()
config = rs.config()
rospy.init_node('listen_extrinsic_matrices', anonymous=True)
bridge = CvBridge()
extrinsic_matrix_htc = rospy.Subscriber('extrinsic_matrix_from_hand_to_camera', Image, queue_size=10)
extrinsic_matrix_lth = rospy.Subscriber('extrinsic_matrix_from_link0_to_hand', Image, queue_size=10)

# Get device product line for setting a supporting resolution
pipeline_wrapper = rs.pipeline_wrapper(pipeline)
pipeline_profile = config.resolve(pipeline_wrapper)
device = pipeline_profile.get_device()
align = rs.align(rs.stream.depth)
pc = rs.pointcloud()

The error message is:

pipeline_profile = config.resolve(pipeline_wrapper)
RuntimeError: No device connected

I'm not sure what the issue is. The realsense-viewer works, and I've tried different cables, but without success so far.

MartyG-RealSense commented 1 year ago

Hi @simon-TUM The natural line to come after device = pipeline_profile.get_device() is profile = pipeline.start()

device = pipeline_profile.get_device()
profile = pipeline.start()

The align = rs.align(rs.stream.depth) and pc = rs.pointcloud() instructions cannot be used until after the pipeline has been started.
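
A minimal sketch of that ordering, combining your snippet with the suggested start call (nothing else is assumed beyond the lines already posted):

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()

# Resolve the config to query the connected device before streaming
pipeline_wrapper = rs.pipeline_wrapper(pipeline)
pipeline_profile = config.resolve(pipeline_wrapper)
device = pipeline_profile.get_device()

# Start the pipeline first...
profile = pipeline.start(config)

# ...and only then create and use the align / pointcloud processing blocks
align = rs.align(rs.stream.depth)
pc = rs.pointcloud()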

Would it be possible to post your entire pyrealsense2 script please? Thanks!

simon-TUM commented 1 year ago

Thank you for the fast reply! This is the code I'm working with:

# Algorithm to train the images to a RANSAC regression model and predict the
# boundaries within the images.
# The main script for this task is this one (main.py). The task is carried out by the
# function ransac_alg_for_images, which receives the path where all the images are
# located and the choice to draw the plots as its arguments. Basically, running this
# script is enough. Other functionality will be added since the code is not finalized.
from image_processing.utils.ransac_alg_for_images import *
import time
from skimage.io import imread, imshow, show
import rospy
import pyrealsense2 as rs
import numpy as np
from skimage.util import img_as_ubyte, img_as_float
from sensor_msgs.msg import CameraInfo, Image
from cv_bridge import CvBridge
from pyrealsense2 import intrinsics, extrinsics
import imageio
import skimage.io
import cv2

def cameratoscript():
    # Configure depth and color streams
    pipeline = rs.pipeline()
    config = rs.config()
    rospy.init_node('listen_extrinsic_matrices', anonymous=True)
    bridge = CvBridge()
    extrinsic_matrix_htc = rospy.Subscriber('extrinsic_matrix_from_hand_to_camera', Image, queue_size=10)
    extrinsic_matrix_lth = rospy.Subscriber('extrinsic_matrix_from_link0_to_hand', Image, queue_size=10)
    # Get device product line for setting a supporting resolution
    pipeline_wrapper = rs.pipeline_wrapper(pipeline)
    pipeline_profile = config.resolve(pipeline_wrapper)
    device = pipeline_profile.get_device()
    align = rs.align(rs.stream.depth)
    pc = rs.pointcloud()

    found_rgb = False
    for s in device.sensors:
        if s.get_info(rs.camera_info.name) == 'RGB Camera':
            found_rgb = True
            break
    if not found_rgb:
        print("The demo requires Depth camera with Color sensor")
        exit(0)

    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    # Start streaming
    pipe_profile = pipeline.start(config)

    try:
        while True:

            # Wait for a coherent pair of frames: depth and color

            frames = pipeline.wait_for_frames()
            color_frame = frames.get_color_frame()
            depth_frame = frames.get_depth_frame()

            # if not color_frame:
            #     continue
            color_image = np.asanyarray(color_frame.get_data())
            depth_image = np.asanyarray(depth_frame.get_data())
            # Convert images to numpy arrays
            depth_int = depth_frame.profile.as_video_stream_profile().intrinsics
            color_int = color_frame.profile.as_video_stream_profile().intrinsics
            depth_to_color_ext = depth_frame.profile.get_extrinsics_to(color_frame.profile)
            color_to_depth_ext = color_frame.profile.get_extrinsics_to(depth_frame.profile)
            depth_int_ppx = depth_frame.profile.as_video_stream_profile().get_intrinsics().ppx
            depth_int_ppy = depth_frame.profile.as_video_stream_profile().get_intrinsics().ppy
            depth_int_fx = depth_frame.profile.as_video_stream_profile().get_intrinsics().fx
            depth_int_fy = depth_frame.profile.as_video_stream_profile().get_intrinsics().fy
            intrinsic_mat = np.asarray([[depth_int_fx, 0, depth_int_ppx], [0, depth_int_fy, depth_int_ppy], [0, 0, 1]])
            #print("\n Intrinsic matrix: ", intrinsic_mat)

            # print("\n Depth intrinsics: " + str(depth_int))
            # print("\n Color intrinsics: " + str(color_int))
            # print("\n Depth to color extrinsics: " + str(depth_to_color_ext))
            #
            depth_sensor = pipe_profile.get_device().first_depth_sensor()
            depth_scale = depth_sensor.get_depth_scale()
            #print("\n\t depth_scale: " + str(depth_scale))
            #
            color_image_r = color_image[:, :, ::-1]

            #color_image_r = color_image
            sk_color_image = img_as_ubyte(color_image_r)
            #sk_color_image = color_image_r
            # # gray_image = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
            # color_colormap_dim = sk_color_image.shape
            img, x_min, y_min, x_max, y_max, x, y = ransac_alg_for_images(sk_color_image, sobel_masking_type="v", adaptive_threshold_method="none",
                                         estimator_type="linear", draw_plots=False)
            print("\n\t Depth scale: ", depth_scale)
            print("\n\t Depth image: ", depth_image[x][y])
            print("\n\t Color image: ", color_image[x][y])
            depth_pixel = [x, y]  # Random pixel
            depth_value = depth_image[x][y] * depth_scale
            print("\n\t Old depth value: ", depth_value)
            if depth_value > 0.45:
                depth_value = 0.45
                print("\n\t Depth value is reduced for safety to 45 cm")

            ## check the neighborhood if you don't get any depth value
            print("\n\t depth_pixel@" + str(depth_pixel) + " value: " + str(depth_value) + " meter")

            depth_point = rs.rs2_deproject_pixel_to_point(depth_int, depth_pixel, depth_value)
            print("Depth point: ", depth_point)
            print("\n\t 3D depth_point: " + str(depth_point))

            return img, intrinsic_mat, depth_value, depth_point

            # Show images
            #skimage.io.imshow(sk_color_image)
            # cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
            # cv2.imshow('RealSense', color_image)
            # cv2.waitKey(1)

    finally:

        # Stop streaming
        pipeline.stop()

cameratoscript()
# print("Total elapsed time: ", main_end-main_start)
MartyG-RealSense commented 1 year ago

Does the script run successfully if you comment out def cameratoscript(): and cameratoscript() at the end and let the code run immediately instead of it only being run when the function name is called at the end of the script?

simon-TUM commented 1 year ago

No, unfortunately this gives the same error :/

MartyG-RealSense commented 1 year ago

If you test with a basic introductory pyrealsense2 script, does the error still occur?

# First import the library
import pyrealsense2 as rs

# Create a context object. This object owns the handles to all connected realsense devices
pipeline = rs.pipeline()
pipeline.start()

try:
    while True:
        # Create a pipeline object. This object configures the streaming camera and owns its handle
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth: continue

        # Print a simple text-based representation of the image, by breaking it into 10x20 pixel regions and approximating the coverage of pixels within one meter
        coverage = [0]*64
        for y in range(480):
            for x in range(640):
                dist = depth.get_distance(x, y)
                if 0 < dist and dist < 1:
                    coverage[x//10] += 1

            if y%20 is 19:
                line = ""
                for c in coverage:
                    line += " .:nhBXWW"[c//25]
                coverage = [0]*64
                print(line)

finally:
    pipeline.stop()
simon-TUM commented 1 year ago

Thank you once again for the fast reply! :)

This gives me the following, similar output (before running it, I had to replace 'is' with '==' in the modulo result check):

Traceback (most recent call last):
  File "/home/simon/catkin_ws/devel/lib/beginner_tutorials/testscript.py", line 15, in <module>
    exec(compile(fh.read(), python_script, 'exec'), context)
  File "/home/simon/catkin_ws/src/beginner_tutorials/scripts/testscript.py", line 6, in <module>
    pipeline.start()
RuntimeError: No device connected
MartyG-RealSense commented 1 year ago

As the basic introductory script has problems, this suggests to me that the problem may be somewhere in the pyrealsense2 Python wrapper installation on your computer rather than a problem with the scripting.

What method did you use to install the pyrealsense2 wrapper on your computer, please? And are you using Python 3 or Anaconda?

There are rare cases where the usual pyrealsense2 import instruction does not work and a different instruction is used instead where 'pyrealsense2' is written twice. Does the introductory test script still not work if you do this:

import pyrealsense2.pyrealsense2 as rs
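
If it is unclear which import form a particular installation needs, a defensive sketch (not an official SDK pattern) is to probe for the API and fall back to the doubled path:

import pyrealsense2 as rs
if not hasattr(rs, "pipeline"):
    # Some packagings expose the bindings one module level deeper
    import pyrealsense2.pyrealsense2 as rs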

simon-TUM commented 1 year ago

For the installation of pyrealsense2, I used the Debian package: sudo apt-get install ros-$ROS_DISTRO-realsense2-camera. I am using Python 3; there is no Anaconda installation present.

The modification import pyrealsense2.pyrealsense2 as rs still gives me the error message.

Again, thank you for the kind reply!

MartyG-RealSense commented 1 year ago

Yes, I thought that ROS might be involved because of your script's rospy instruction. Thanks for the confirmation.

Does the error still occur if you unplug the camera, wait a couple of seconds and plug it back into the USB port before you run a script?

simon-TUM commented 1 year ago

After unplugging and plugging the camera back in, the script still gives the error message.

Before running any script with the camera, I used the command roslaunch realsense2_camera rs_camera.launch. After a restart of my PC and only executing the test script, there was no error. It seems that rs_camera.launch interferes at this point. I thought I needed this node to access any data from the camera?

MartyG-RealSense commented 1 year ago

Although your script references rospy, it is essentially a pyrealsense2 script (pyrealsense2 is the RealSense SDK's Python compatibility wrapper). So if the pyrealsense2 wrapper is installed on your computer, the script should be able to run on its own against the librealsense SDK as a Python script, without the RealSense ROS wrapper needing to be launched with rs_camera.launch.

simon-TUM commented 1 year ago

Thank you for the reply and all your support! I think this issue is solved.

MartyG-RealSense commented 1 year ago

You are very welcome, @simon-TUM - thanks very much for the update!

whitbrun commented 5 months ago

But if I have to publish the topics with roslaunch realsense2_camera rs_camera.launch while also using a script that runs pyrealsense2, just like @simon-TUM, how can I achieve that?

MartyG-RealSense commented 5 months ago

Hi @whitbrun You can create a pyrealsense2 script that uses ROS, like the example ROS node script show_center_depth.py at the link below:

https://github.com/IntelRealSense/realsense-ros/blob/ros1-legacy/realsense2_camera/scripts/show_center_depth.py

You can then open another terminal window, change directory to where your script is stored, and run the script with a command such as python show_center_depth.py (or whatever your script's filename is).
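
For illustration, a minimal sketch of that node approach, modeled on show_center_depth.py (the topic name is the ROS1 wrapper's default depth topic; adjust it to whatever rostopic list shows on your system):

import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def depth_callback(msg):
    # Convert the ROS image message to a NumPy array instead of opening the device
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding=msg.encoding)
    h, w = depth.shape[:2]
    rospy.loginfo("center depth: %s", depth[h // 2, w // 2])

rospy.init_node('depth_listener')
rospy.Subscriber('/camera/depth/image_rect_raw', Image, depth_callback)
rospy.spin()

This way the camera is owned by the rs_camera.launch node and the script only consumes its published topics, avoiding the device conflict described earlier in this thread.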

ianshi8 commented 4 months ago

Hey @MartyG-RealSense I'm running into a similar issue here with the D435i on a Raspberry Pi 4 running Buster. I've installed everything from source via this link: https://github.com/datasith/Ai_Demos_RPi/wiki/Raspberry-Pi-4-and-Intel-RealSense-D435 and the camera is working when running realsense-viewer. rs-enumerate-devices also shows the camera connected, but I get the following error message when running the basic introductory pyrealsense2 script:

Traceback (most recent call last):
  File "intro.py", line 6, in <module>
    pipeline.start()
RuntimeError: No device connected

The error when running my actual script seems to be with this line: pipeline_profile = config.resolve(pipeline_wrapper)

Any help would be greatly appreciated! Thanks!

MartyG-RealSense commented 4 months ago

Hi @ianshi8 One RealSense Pi user who had the 'No device connected' error resolved it by updating their Pi's firmware with the command below.

sudo rpi-update

whitbrun commented 4 months ago

It is cool~ I will try it, thanks.

ianshi8 commented 4 months ago

Thanks for the response @MartyG-RealSense ! We just tried that but are still getting the same error. Do you know of any other troubleshooting steps? We've tried the suggestions linked in this thread but they also haven't helped.

MartyG-RealSense commented 4 months ago

@ianshi8 Is the camera detected if you test the 3 simple lines of code below?

import pyrealsense2 as rs
pipe = rs.pipeline()
pipe.start()
ianshi8 commented 4 months ago

@MartyG-RealSense Unfortunately no, the camera still isn't detected when running just those 3 lines.

MartyG-RealSense commented 4 months ago

Next, please try inserting the code below before the pipeline_profile = config.resolve(pipeline_wrapper) line of your script in order to see if a reset of the camera hardware is able to be performed before the pipeline is started.

ctx = rs.context()
devices = ctx.query_devices()
for dev in devices:
    dev.hardware_reset()
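
One caveat worth adding (an assumption from general librealsense usage, not something stated in this thread): the camera drops off and re-enumerates on the USB bus after hardware_reset(), so pausing briefly before creating the pipeline may avoid a race. A sketch:

import time
import pyrealsense2 as rs

ctx = rs.context()
for dev in ctx.query_devices():
    dev.hardware_reset()
# Allow the camera to re-enumerate on the USB bus (the delay length is a guess)
time.sleep(3)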
ianshi8 commented 4 months ago

@MartyG-RealSense I think the issue is before the pipeline_profile = config.resolve(pipeline_wrapper). This is the section of code where things are failing:

pipeline = rs.pipeline()
config = rs.config()
pipeline_wrapper = rs.pipeline_wrapper(pipeline) 
pipeline_profile = config.resolve(pipeline_wrapper)

Adding print statements has shown that the block fails at the pipeline_profile = config.resolve(pipeline_wrapper). I've tried adding the reset block you had above both before and after that line, but still getting the RuntimeError: No device connected message.

MartyG-RealSense commented 4 months ago

@ianshi8 Is using the Buster OS a requirement of your project or are you able to try installing librealsense and the pyrealsense2 wrapper on the Ubuntu OS on your Pi instead?

ianshi8 commented 4 months ago

@MartyG-RealSense Buster isn't a requirement for the project, but when we built using Bookworm we ran into segmentation faults. Per this thread, we switched to the Buster OS: https://support.intelrealsense.com/hc/en-us/community/posts/20506552618131-Segmentation-fault-when-trying-to-run-a-python-script-on-a-D455-realsense-camera-on-a-raspberry-pi-4

We've also tried building via Ubuntu Mate, but had issues with the OpenGL driver.

MartyG-RealSense commented 4 months ago

@ianshi8 Thank you for the information. Which librealsense SDK version and camera firmware driver version are you using, please?

ianshi8 commented 4 months ago

@MartyG-RealSense The camera firmware is currently version 5.15.1. The librealsense SDK version is 2.54.2.

MartyG-RealSense commented 4 months ago

RealSense users have recently been having problems with Raspbian Bookworm and the Pi 5 board, so it does not seem as though updating the Raspbian version or changing the Pi board would resolve the camera detection issue. Pi in general is a difficult platform to get RealSense working correctly on, unfortunately. Sometimes the Viewer will work correctly but wrappers such as the ROS or Python wrapper do not.

Bearing in mind that you tried Ubuntu MATE but had problems with OpenGL installation, you might have better results with MATE if you install librealsense on it using the simple libuvc backend build script at the link below, as the libuvc installation method historically works well with Pi boards.

https://github.com/IntelRealSense/librealsense/blob/master/doc/libuvc_installation.md

Then try to install OpenGL using the instructions here:

https://ubuntu-mate.community/t/tutorial-activate-opengl-driver-for-ubuntu-mate-16-04/7094

ianshi8 commented 4 months ago

Thanks @MartyG-RealSense we'll give that a shot and update with results.

freetown113 commented 4 months ago

Hi @MartyG-RealSense I have a similar problem with the RealSense L515. In fact, the code works perfectly on my host machine, and even on several different machines without the SDK; I installed just pyrealsense2 and several libs from the installation guide. But inside a Docker container, the same code gives me an error:

pipeline_profile = config.resolve(pipeline_wrapper)
RuntimeError: No device connected

The piece of code where it's happening is the following:

self.pipeline = rs.pipeline()
config = rs.config()

# Get device product line for setting a supporting resolution
pipeline_wrapper = rs.pipeline_wrapper(self.pipeline)
pipeline_profile = config.resolve(pipeline_wrapper)                                  <--------- Error appears here
device = pipeline_profile.get_device()
device_product_line = str(device.get_info(rs.camera_info.product_line))
self.depth_scale = pipeline_profile.get_device().first_depth_sensor().get_depth_scale()

I've already tried several options:

- I built librealsense in the container.
- I created the Docker container from the official image: https://github.com/IntelRealSense/librealsense/blob/master/scripts/Docker/readme.md
- I followed several instructions given by you in other threads, such as rebuilding librealsense with:

-DFORCE_LIBUVC=ON 
-DFORCE_RSUSB_BACKEND=ON

or

import pyrealsense2.pyrealsense2 as rs

- I mapped USB into the container as a device (--device=/dev/bus/usb) as well as mounting it as a volume (-v /dev/bus/usb:/dev/bus/usb)

Nothing helps; there is still the same error. Do you have any suggestions about where the problem is? The camera I use is an Intel RealSense L515, firmware version 01.05.08.01, SDK 2.51.1.

MartyG-RealSense commented 4 months ago

Hi @freetown113 What is the docker run instruction that you are using, please? Have you tried using sudo admin permissions and including the --privileged flag? For example:

sudo docker run --privileged -it --rm \
    -v /dev:/dev \

freetown113 commented 4 months ago

@MartyG-RealSense I created it as

sudo docker run --privileged=true --ipc=host -v /dev/bus/usb/:/dev/bus/usb/ -ti -p 5678:22 -v /home:/home realsense/image bash

I forgot to add that all rs-* binaries work inside the container, which is strange. It seems like it's something with pyrealsense2. I tried installing pyrealsense2 version 2.51.1.4348 to be compatible with the SDK, but it didn't help.

MartyG-RealSense commented 4 months ago

The Docker setup tutorial that you linked to was intended for use only with x86 architecture computers such as desktop and laptop PCs, and not Arm architecture (Raspberry Pi, Nvidia Jetson, etc). Which computer / computing device is the camera not being detected on, please?

When you run your pyrealsense2 program script, does it make a difference if you also launch it in sudo mode, such as sudo python3 test.py

freetown113 commented 4 months ago

No, sudo doesn't change anything. In any case, I work as root in the container. The camera is not detected inside the container; on the host (where I launch the container) it's detected all right. Do you know of another image with the RealSense libs that I could use to build the container?

MartyG-RealSense commented 4 months ago

Past cases of using RealSense with Docker and pyrealsense2 are rare, with cases about use with ROS being more common. There is a pyrealsense2 example at the link below though.

https://hub.docker.com/r/drunknmunky32/jetsonrealsense

freetown113 commented 4 months ago

Thank you for the information. How would ROS change the situation in this case? The problem is in a method from pyrealsense2, so even with ROS I suppose I would still get it, right?

MartyG-RealSense commented 4 months ago

If you do not need ROS then I would recommend staying with using Docker with pyrealsense2 if possible.

The link below provides a non-ROS Dockerfile for pyrealsense2 that you could study. It was designed for Nvidia Jetson boards but might provide some useful insights.

https://github.com/dusty-nv/jetson-containers/issues/281#issuecomment-1709385190

And another Jetson one:

https://www.lieuzhenghong.com/how_to_install_librealsense_on_the_jetson_nx/

Here is one that was designed for Raspberry Pi boards.

https://forums.balena.io/t/unable-to-install-librealsense-for-raspberry-pi4-ubuntu-xenial/281368

My knowledge of Docker is admittedly limited, so I apologize that I cannot be of more help with this particular subject.

freetown113 commented 4 months ago

@MartyG-RealSense thank you for the links, but nothing helps. I cannot understand why simple functionality works fine, like wrappers/python/examples/python-tutorial-1-depth.py from the official repo. However, if I try to call

pipeline_wrapper = rs.pipeline_wrapper(pipeline) 

there is "No device connected" error.

MartyG-RealSense commented 4 months ago

The SDK example opencv_viewer_example.py uses pipeline_wrapper = rs.pipeline_wrapper(pipeline) - does that example work or does it have the same "No device connected" problem as your script, please?

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_viewer_example.py

freetown113 commented 4 months ago

No, all examples that contain

pipeline_wrapper = rs.pipeline_wrapper(pipeline)

fail with the same error "No device connected". Only python-tutorial-1-depth.py works fine because it doesn't contain it.

MartyG-RealSense commented 4 months ago

The feeling I have is that if you are using 'config' statements to define a custom stream resolution and FPS combination then setting a supporting resolution with pipeline_wrapper may be unnecessary. Note that python-tutorial-1-depth.py is able to use a config instruction without using pipeline_wrapper.

If you do need the script to be able to automatically apply a supported resolution / FPS then removing 'config' from the brackets of the pipe start instruction will cause whatever default stream configurations are detected to be supported by the camera to be applied automatically. This is because without 'config' in the brackets, any config lines in a script are ignored.

pipeline = rs.pipeline()
pipeline.start()
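
As a small illustrative sketch (assuming a connected camera), the default profiles that were applied can then be inspected from the returned pipeline profile:

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()  # no config: the camera's default streams are applied

for stream in profile.get_streams():
    print(stream)  # prints stream type, resolution, format and FPS

pipeline.stop()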
freetown113 commented 4 months ago

Let's look at it once again:

class Streaming:
    def __init__(self):
        # I set pipeline and config
        self.pipeline = rs.pipeline()
        config = rs.config()

        # Set pipeline wrapper and get device info
        pipeline_wrapper = rs.pipeline_wrapper(self.pipeline)
        pipeline_profile = config.resolve(pipeline_wrapper)               <---- RuntimeError: No device connected
        device = pipeline_profile.get_device()
        device_product_line = str(device.get_info(rs.camera_info.product_line))

        self.depth_scale = pipeline_profile.get_device().first_depth_sensor().get_depth_scale()
        self.depth_min = 0.11
        self.depth_max = 1.0

        # get intrinsics info, necessary for rs2_project_color_pixel_to_depth_pixel
        self.depth_intrin = pipeline_profile.get_stream(rs.stream.depth).as_video_stream_profile().get_intrinsics()
        self.color_intrin = pipeline_profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()

        # get extrinsics info, necessary for rs2_project_color_pixel_to_depth_pixel
        self.depth_to_color_extrin = pipeline_profile.get_stream(rs.stream.depth).as_video_stream_profile().get_extrinsics_to(pipeline_profile.get_stream(rs.stream.color))
        self.color_to_depth_extrin = pipeline_profile.get_stream(rs.stream.color).as_video_stream_profile().get_extrinsics_to(pipeline_profile.get_stream(rs.stream.depth))

        found_rgb = False
        for s in device.sensors:
            if s.get_info(rs.camera_info.name) == 'RGB Camera':
                found_rgb = True
                break
        if not found_rgb:
            print("The demo requires Depth camera with Color sensor")
            exit(0)
        config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
        if device_product_line == 'L500':
            config.enable_stream(rs.stream.color, 960, 540, rs.format.bgr8, 30)
        else:
            config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

        # Start streaming
        profile = self.pipeline.start(config)   <--- config is here, but the code returns the error well before this line

        align_to = rs.stream.color
        self.align = rs.align(align_to)

Does everything look right?

MartyG-RealSense commented 4 months ago

In your first posting at https://github.com/IntelRealSense/librealsense/issues/11368#issuecomment-2090437084 you said that your script worked outside of Docker but not inside of a Docker container. So it is not likely to be a problem that can be solved by analyzing the script, unfortunately.

Another RealSense user at https://github.com/IntelRealSense/librealsense/issues/9979 with an L515 also had the same camera detection problem as you when using pipeline_wrapper. At the end of their case they found that their L515's USB cable was the cause of the problem.

As your script supports both L515 and 400 Series cameras, have you been able to test your script in Docker with only a 400 Series camera to see if the problem still occurs? In your first posting you only mention having an L515.

If you have only an L515, does the problem still occur if you unplug the micro-sized connector in the base of the camera, turn it around the opposite way and re-insert it (USB-C cables are two-way insertion at the micro-sized end, and one particular insertion direction of the two available usually enables the L515 to perform better).

freetown113 commented 4 months ago

It seems to me that the user in https://github.com/IntelRealSense/librealsense/issues/9979 had a different problem, because he was able to detect that the resolution he needed was not available. In my case I cannot even reach the phase where I can get the intrinsics.

Yes, I occasionally have an RS D435, and yes, I also had this idea several days ago. As I have only one cable for both cameras, I tested the D435 with the same cable. With the D435 I got several errors, "Frame didn't arrive within 5000" and "profile does not contain the requested stream", and several times a segmentation fault. But at least it works with config.resolve(pipeline_wrapper). Does that tell you something?

I read your reply about "turn the micro-sized connector around the opposite way and re-insert it" somewhere some time ago; I tried it, but it didn't help.

MartyG-RealSense commented 4 months ago

The 400 Series and L515 camera models have some differences in the micro-sized USB port on the camera hardware that can account for some differences in performance and USB behaviour.

As the L515 behaves correctly outside of the Docker container though, it seems probable that the problem might be in the Dockerfile or some other aspect of Docker configuration and not with the camera hardware.

ianshi8 commented 4 months ago

The libuvc backend installation with a Raspberry Pi 4 and Ubuntu MATE 22.04 worked; I no longer see any of the device-not-connected errors that we experienced before. However, I'm now dealing with RuntimeError: Frame didn't arrive within 5000

This only seems to be a problem with the color stream. Editing the opencv_viewer_example.py code, the error only occurs when I enable the color stream with config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

Commenting out all of the color-stream-dependent lines and using only depth works. @MartyG-RealSense do you have any idea why this is happening?

MartyG-RealSense commented 4 months ago

@ianshi8 If the RGB color stream on your camera is causing the arrival of new frames to cease (hence the RuntimeError: Frame didn't arrive within 5000 error), you could try using RGB from the left infrared sensor if your camera model supports it. This feature is not supported on D435-type cameras but is available on the D415, D405, D455, D455f, D456 and D457.

config.enable_stream(rs.stream.infrared, 640, 480, rs.format.bgr8, 30)
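
A minimal usage sketch (assuming one of the camera models listed above that supports BGR8 output on the left imager):

import pyrealsense2 as rs
import numpy as np

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.infrared, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    ir_frame = frames.get_infrared_frame()
    # Color data produced by the left infrared imager
    ir_image = np.asanyarray(ir_frame.get_data())
    print(ir_image.shape)
finally:
    pipeline.stop()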

ianshi8 commented 3 months ago

@MartyG-RealSense Unfortunately we only have the D435. Is there something else we could try?

MartyG-RealSense commented 3 months ago

You mention that you edited the opencv_viewer_example.py script. Does it work correctly if you do not edit it and run it in its original state?

You could try forcing the FPS of depth and color to be maintained at a constant rate by setting auto-exposure to True (1) and an RGB option called auto-exposure priority to False (0). https://github.com/IntelRealSense/librealsense/issues/11246 has a Python example for implementing this instruction.

depth_sensor.set_option(rs.option.enable_auto_exposure, 1)
color_sensor.set_option(rs.option.enable_auto_exposure, 1)
color_sensor.set_option(rs.option.auto_exposure_priority, 0)
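
For context, a sketch of how the depth_sensor and color_sensor handles used above might be obtained (assuming a pipeline that has already been started with your config):

profile = pipeline.start(config)
device = profile.get_device()
depth_sensor = device.first_depth_sensor()
color_sensor = device.first_color_sensor()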

If that does not make a difference then you could next try implementing a hardware reset of the camera when the script is launched.

ctx = rs.context()
devices = ctx.query_devices()
for dev in devices:
    dev.hardware_reset()
ianshi8 commented 3 months ago

@MartyG-RealSense The original opencv_viewer_example.py does not work; it only outputs anything if I remove the lines that deal with the color image. Unfortunately, forcing the FPS did not resolve the issue either.

Adding the hardware reset at the beginning of the script gives a "No device connected" error.