IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

RuntimeError: No device connected using Docker and Raspberry Pi Zero 2 #12811

Closed MatthewRajan13 closed 5 months ago

MatthewRajan13 commented 6 months ago

Required Info

| Required Info | |
|---|---|
| Camera Model | D435i |
| Firmware Version | Open RealSense Viewer 2.54.2 |
| Operating System & Version | Debian 11 (bullseye) |
| Kernel Version (Linux Only) | 11 |
| Platform | Docker / Raspberry Pi Zero 2 |
| SDK Version | 2.54.2 |
| Language | python |
| Segment | Robot |

Issue Description

Hello! We are currently trying to connect to and run our Intel RealSense D435i on a Raspberry Pi Zero 2 inside a Docker container. Our container base is https://hub.docker.com/r/nixone/pyrealsense2/tags, which lets us use pyrealsense2 on ARM. When running Python scripts we get the following error:

```
root@echo:/firmware/app/Drone# python3 RealsenseServer.py
Server started at http://localhost:9000
192.168.137.1 - - [29/Mar/2024 18:13:42] "GET / HTTP/1.1" 200 -

Exception occurred during processing of request from ('192.168.137.1', 54686)
Traceback (most recent call last):
  File "/usr/lib/python3.10/socketserver.py", line 316, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/lib/python3.10/socketserver.py", line 347, in process_request
    self.finish_request(request, client_address)
  File "/usr/lib/python3.10/socketserver.py", line 360, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python3.10/socketserver.py", line 747, in __init__
    self.handle()
  File "/usr/lib/python3.10/http/server.py", line 432, in handle
    self.handle_one_request()
  File "/usr/lib/python3.10/http/server.py", line 420, in handle_one_request
    method()
  File "/firmware/app/Drone/RealsenseServer.py", line 78, in do_GET
    for frame_bytes in capture_frames():
  File "/firmware/app/Drone/RealsenseServer.py", line 30, in capture_frames
    pipeline.start(config)
RuntimeError: No device connected
```

The code we are running is below (note: the `*` operators in `combine_images` were eaten by markdown in the original paste and have been restored):

```python
import cv2
import numpy as np
import threading
import socketserver
from http.server import BaseHTTPRequestHandler, HTTPServer
import pyrealsense2 as rs

exit_event = threading.Event()


def combine_images(img1, img2):
    height = min(img1.shape[0], img2.shape[0])
    img1 = cv2.resize(img1, (int(img1.shape[1] * height / img1.shape[0]), height))
    img2 = cv2.resize(img2, (int(img2.shape[1] * height / img2.shape[0]), height))

    combined = np.hstack((img1, img2))

    return combined


def capture_frames():
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)

    config.enable_record_to_file("test.bag")

    pipeline.start(config)

    color_img = np.zeros((480, 640, 3), np.uint8)
    depth_img = np.zeros((480, 640, 3), np.uint8)
    color_frame = None
    depth_frame = None

    while not exit_event.is_set():
        frames = pipeline.wait_for_frames()

        for f in frames:
            if f.profile.stream_type() == rs.stream.color:
                color_frame = f.as_video_frame()
                color_img = np.asanyarray(color_frame.get_data())
            if f.profile.stream_type() == rs.stream.depth:
                depth_frame = f.as_video_frame()
                depth_img = np.asanyarray(depth_frame.get_data())
                depth_img = cv2.applyColorMap(
                    cv2.convertScaleAbs(depth_img, alpha=0.08), cv2.COLORMAP_JET
                )

        if color_frame and depth_frame:
            frame1 = combine_images(depth_img, color_img)

            _, jpeg = cv2.imencode('.jpg', frame1)

            frame_bytes = jpeg.tobytes()

            yield frame_bytes


class StreamingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/':
            self.send_response(200)
            self.send_header('Content-type',
                             'multipart/x-mixed-replace; boundary=--frame')
            self.end_headers()
            for frame_bytes in capture_frames():
                self.wfile.write(b'--frame\r\n')
                self.send_header('Content-type', 'image/jpeg')
                self.send_header('Content-length', len(frame_bytes))
                self.end_headers()
                self.wfile.write(frame_bytes)
                self.wfile.write(b'\r\n')
        else:
            self.send_error(404)
```
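
One way to make the bare `RuntimeError: No device connected` more diagnosable is to query the librealsense context for attached devices before calling `pipeline.start()`. A minimal sketch of that idea (the helper names are ours, not part of the SDK, and the import is guarded since `pyrealsense2` may not be installed):

```python
# Hedged sketch: enumerate RealSense devices before starting the pipeline,
# so a missing device produces an actionable message instead of a bare
# "No device connected". Helper names are illustrative, not SDK API.
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None  # pyrealsense2 not installed in this environment


def connected_serials(ctx=None):
    """Return serial numbers of attached RealSense devices ([] if none)."""
    if rs is None or ctx is None:
        return []
    return [d.get_info(rs.camera_info.serial_number) for d in ctx.query_devices()]


def start_pipeline_checked(pipeline, config, ctx):
    """Start the pipeline only if at least one device is visible."""
    if not connected_serials(ctx):
        raise RuntimeError(
            "No RealSense device visible to librealsense; check USB "
            "passthrough (--privileged, -v /dev:/dev) and power supply."
        )
    return pipeline.start(config)


if rs is not None:
    print("Devices:", connected_serials(rs.context()))
```

If the device shows up in `lsusb` but not here, the problem is usually USB passthrough or udev rules inside the container rather than the cable or power.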

MartyG-RealSense commented 6 months ago

Hi @MatthewRajan13 Does it make a difference if you launch Docker with sudo admin permissions, using the command `sudo docker run`?

MatthewRajan13 commented 6 months ago

Unfortunately there is no difference when running with sudo.

MartyG-RealSense commented 6 months ago

Are you using the `--privileged` Docker flag?

`sudo docker run -it --privileged`

MatthewRajan13 commented 6 months ago

Yes, our compose file is shown below (excerpt):

```yaml
version: '3.3'

services:
  firmware:
    image: forsythcreations/echo:firmware.task_Realsense
    volumes:
```

Forsyth-Creations commented 6 months ago

I think this conversation probably links back to this:

https://github.com/IntelRealSense/librealsense/issues/10724

@MatthewRajan13 and I are working on trying to get this working together. We're using a USB hat like this one with the Pi Zero 2:

https://www.amazon.com/UART-Onboard-Raspberry-Pi-XYGStudy/dp/B06Y5HYN5F

Most of the documentation I've come across suggests USB ports like this only supply 500 mA, whereas the RealSense needs 700 mA. We are powering the hub/Pi Zero through the GPIO pins, so I had hoped power wouldn't be an issue, as I'd assume they can pull from the same VIN.

We haven't yet tried compiling with the `-DFORCE_RSUSB_BACKEND=true` flag, but we can try that next. Here's the weird thing: we ran `lsusb` both inside and outside the container, and the device showed up in both, so I would assume the udev rules are kicking in. But whenever we go to start the stream, it's unable to connect. Is there a better way to trace this issue?

MartyG-RealSense commented 6 months ago

My understanding is that 500 mA is the minimum amount of power consumed by a USB device, and that the power draw of a RealSense camera increases when streams are enabled. The more streams that are enabled, the higher the power draw.

If you have access to the RealSense Viewer tool then a way to test whether there is an issue with insufficient power is to set the Laser Power option under 'Stereo Module > Controls' to zero and see if the depth stream can be successfully enabled. If it can, increase the Laser Power in small increments. If the camera disconnects after a certain amount then this indicates an issue with insufficient power from the USB port to meet the camera's power draw requirements.
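
The laser-power sweep described above can also be scripted when the Viewer isn't practical on a headless Pi. A hedged sketch, with the step schedule as pure Python and the hardware-dependent part guarded (helper names are ours; `rs.option.laser_power` is the real SDK option, and 360 mW is the D435i's maximum):

```python
# Hedged sketch of the insufficient-power test: start the laser at zero
# power and raise it in small increments, watching for disconnects.
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None  # pyrealsense2 may be absent in this environment


def laser_power_steps(max_power=360.0, step=30.0):
    """Return the increasing laser-power values to test, starting at 0."""
    steps, value = [], 0.0
    while value <= max_power:
        steps.append(value)
        value += step
    return steps


def run_power_sweep(depth_sensor):
    """Requires hardware: depth_sensor is the camera's stereo-module sensor."""
    for power in laser_power_steps():
        depth_sensor.set_option(rs.option.laser_power, power)
        # ...stream for a few seconds here and check for disconnects...


print(laser_power_steps(90.0, 30.0))  # [0.0, 30.0, 60.0, 90.0]
```

If the camera drops out above some power level, the USB port likely cannot meet the camera's draw.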

A Pi 4 user at https://github.com/IntelRealSense/librealsense/issues/8274#issuecomment-770838665 was able to use a PoE hat successfully with RealSense, but only with the official 1 meter USB cable supplied with the camera; USB-C cables that he chose himself had problems. Are you using the official USB cable, please?

Forsyth-Creations commented 6 months ago

We aren't using the stock USB cable, as we are tight on space on the drone. Instead, we're using a right angle one.

As for the RealSense Viewer, we could give that a shot. Currently we're running the Pi in headless mode. Does the Viewer tool come with pyrealsense2, or is it a separate install? I really hope we don't have to build the Viewer for an ARM device and run it on the Pi Zero :)

Would knocking down the frame rate also decrease power draw/get us connected?

MartyG-RealSense commented 6 months ago

The pyrealsense2 wrapper has to be installed separately from the Viewer tool but the Viewer does not require pyrealsense2 in order to operate. I appreciate though why the Viewer would not be practical on a headless system.

If you instead run the text-based rs-hello-realsense example program (assuming you built the librealsense SDK with the examples included), that should act as a test of how capable your Pi Zero is of streaming depth without graphics.

I believe that reducing FPS would lower the processing demands placed on the Pi's CPU but not reduce the power draw.
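
Even if a lower FPS mainly eases CPU load rather than power draw, a lighter stream profile is still worth trying on a Pi. A small sketch of picking the lowest-throughput mode from a candidate list (the tuples and helper name are ours; 424x240 and 15 FPS are modes the D435i supports):

```python
# Hedged sketch: choose the candidate (width, height, fps) profile with the
# smallest pixel throughput, to minimize CPU load on a constrained host.
def lightest_profile(profiles):
    """Pick the (width, height, fps) tuple with the fewest pixels per second."""
    return min(profiles, key=lambda p: p[0] * p[1] * p[2])


candidates = [(640, 480, 30), (640, 480, 15), (424, 240, 15)]
print(lightest_profile(candidates))  # (424, 240, 15)

# Applying it (requires pyrealsense2 and hardware):
#   config.enable_stream(rs.stream.depth, 424, 240, rs.format.z16, 15)
```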

Whilst RealSense cameras can run on low-end computing devices because some of the processing is done on hardware inside the camera, a Pi Zero 2 is likely close to the minimum specification that the RealSense SDK will run on, especially in regard to memory capacity.

You could try defining a larger swapfile for your Pi to create 'virtual memory' from its storage space that may help to compensate for the 1 GB of real memory by giving the Pi a fallback when the real memory is used up.

https://pimylifeup.com/raspberry-pi-swap-file/
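
After enlarging the swapfile, a quick way to confirm the extra virtual memory is actually available is to read `/proc/meminfo`. A small sketch (the helper name is ours; the field names are the standard `/proc/meminfo` keys):

```python
# Hedged sketch: report RAM and swap from /proc/meminfo to verify that a
# newly enlarged swapfile took effect. Values in /proc/meminfo are in kB.
def meminfo_kb(text, field):
    """Extract an integer kB value for a field like 'SwapTotal'."""
    for line in text.splitlines():
        if line.startswith(field + ":"):
            return int(line.split()[1])
    return 0


try:
    with open("/proc/meminfo") as f:
        info = f.read()
    print("MemTotal kB:", meminfo_kb(info, "MemTotal"),
          "SwapTotal kB:", meminfo_kb(info, "SwapTotal"))
except OSError:
    pass  # not on Linux
```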

I note that the Pi Zero 2 uses a Micro USB port instead of a full size USB port. Small low-power computing devices with these ports can be susceptible to providing insufficient power for RealSense cameras. A solution is to use a mains electricity powered USB hub, but that would not be an option on a drone.

Forsyth-Creations commented 6 months ago

Gotcha. We are going to try this out with the Raspberry Pi 4. So far, here are the results with that model:

Build pyrealsense2 with the following Dockerfile:

```dockerfile
FROM ubuntu:22.04

# Set the working directory
WORKDIR /

RUN apt-get update && apt-get install ffmpeg libsm6 libxext6 -y

# PyRealsense deps

RUN apt-get update && apt-get install python3-gst-1.0 gir1.2-gst-rtsp-server-1.0 gstreamer1.0-plugins-base gstreamer1.0-plugins-ugly libx264-dev python3-opencv -y

# RUN pip install poetry==1.7.1

RUN apt-get -y update && \
  apt-get -y install python3-dev python3-distutils python3-pip libssl-dev libxinerama-dev libsdl2-dev curl libblas-dev liblapack-dev gfortran libssl-dev git cmake libusb-1.0-0-dev && \
  rm -rf /var/lib/apt/lists/* && \
  apt-get clean

RUN apt update && apt-get install libglfw3-dev libgl1-mesa-dev libglu1-mesa-dev -y

RUN git clone https://github.com/IntelRealSense/librealsense.git

RUN cd librealsense && \
    mkdir build && \
    cd build && \
    cmake ../ -DBUILD_SHARED_LIBS=false -DBUILD_PYTHON_BINDINGS=true -DPYTHON_EXECUTABLE=/usr/bin/python3 -DCMAKE_BUILD_TYPE=Release -DOpenGL_GL_PREFERENCE=GLVND && \
    make -j4 && \
    make install && \
    echo 'export PYTHONPATH=$PYTHONPATH:/librealsense/build/Release' >> ~/.bashrc
```

I noticed that the export I used above was mentioned here: https://github.com/IntelRealSense/librealsense/issues/3062

That worked for getting python3 to find the build, but I thought that was supposed to happen automatically with the -DBUILD_PYTHON_BINDINGS=true -DPYTHON_EXECUTABLE=/usr/bin/python3 flags.
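
A quick way to check whether Python can actually find the built bindings (and from where) is to ask the import machinery directly. A small sketch, guarded since `pyrealsense2` may not be importable in a given environment:

```python
# Hedged sketch: report where Python would import pyrealsense2 from,
# to verify that the PYTHONPATH export (e.g. /librealsense/build/Release)
# is taking effect.
import importlib.util

spec = importlib.util.find_spec("pyrealsense2")
if spec is None:
    print("pyrealsense2 not importable; check PYTHONPATH")
else:
    print("pyrealsense2 found at:", spec.origin)
```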

Anyway, we were able to view the depth image in the terminal using the following: `cd /librealsense && rs-depth`

All of the above was tested on an AMD CPU. Going to use Docker buildx to cross-compile for ARM and see what happens on the Pi 4.

MartyG-RealSense commented 6 months ago

Thanks so much for the detailed feedback. I look forward to your next report after further testing. Good luck!

Forsyth-Creations commented 6 months ago

No problem! I figure someone can use this down the line. So I was able to deploy the docker container to the Raspberry Pi 4 with no problem, and librealsense is somewhat happy. However, I'm stuck with this:

```
root@7fc33177ecc4:/librealsense# rs-enumerate-devices
 04/04 15:07:38,171 ERROR [548214247456] (librealsense-exception.h:52) /dev/video10 is no video capture device Last Error: Invalid argument
 04/04 15:07:38,171 ERROR [548214247456] (sensor.cpp:661) acquire_power failed: /dev/video10 is no video capture device Last Error: Invalid argument
Could not create device - /dev/video10 is no video capture device Last Error: Invalid argument . Check SDK logs for details
 04/04 15:07:38,266 ERROR [548214247456] (librealsense-exception.h:52) /dev/video12 is no video capture device Last Error: Invalid argument
 04/04 15:07:38,266 ERROR [548214247456] (sensor.cpp:661) acquire_power failed: /dev/video12 is no video capture device Last Error: Invalid argument
Could not create device - /dev/video12 is no video capture device Last Error: Invalid argument . Check SDK logs for details
 04/04 15:07:38,367 ERROR [548214247456] (librealsense-exception.h:52) /dev/video18 is no video capture device Last Error: Invalid argument
 04/04 15:07:38,367 ERROR [548214247456] (sensor.cpp:661) acquire_power failed: /dev/video18 is no video capture device Last Error: Invalid argument
Could not create device - /dev/video18 is no video capture device Last Error: Invalid argument . Check SDK logs for details
Segmentation fault (core dumped)
```

Here's my compose file:

```yaml
version: '3.8'
services:
  testing:
    image: forsythcreations/echo:jammyPy3.10wPyreal
    command: "tail -f /dev/null"
    privileged: true
    network_mode: host
    volumes:
      - /dev:/dev
```
I'm using `tail -f /dev/null` as a way to keep the container running so I can exec in and experiment without fear of errors.

MartyG-RealSense commented 6 months ago

These /dev/video errors have been reported a few times in the past and on all occasions it was on Raspberry Pi. See https://github.com/IntelRealSense/librealsense/issues/11843, https://github.com/IntelRealSense/librealsense/issues/12552 (which is also a Docker case) and https://github.com/IntelRealSense/realsense-ros/issues/2991

A RealSense user in the Docker case at https://github.com/IntelRealSense/librealsense/issues/12552#issuecomment-1879394310 suggested adding --device arguments to deal with it.
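
For reference, a sketch of what those `--device` arguments might look like in compose form, adapted from the compose file earlier in this thread (the video indices are illustrative; they vary per system, so enumerate them on the host with `ls /dev/video*` first):

```yaml
services:
  testing:
    image: forsythcreations/echo:jammyPy3.10wPyreal
    privileged: true
    devices:
      # indices vary by system; check `ls /dev/video*` on the host
      - /dev/video0:/dev/video0
      - /dev/video1:/dev/video1
      - /dev/video2:/dev/video2
```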

Forsyth-Creations commented 6 months ago

I think I'm running Debian on the Pi but Jammy Ubuntu in the container. I wonder if this might cause a problem with conflicting kernels. I might try Ubuntu as the host machine OS and see if that resolves these issues.

What is the `acquire_power failed` error about? I noticed someone suggesting restarting the power for the PCIe lanes in https://github.com/IntelRealSense/librealsense/issues/11843

Forsyth-Creations commented 6 months ago

Hello hello hello! It works now on my end. I installed Ubuntu Server 22.04 (Jammy) as the host machine OS, then ran the same container as above. Works like a charm! The Docker build targets `--platform linux/arm/v7`, but whatever works. I have some fun proof here too:

[image attachment]

@MartyG-RealSense you're the man, I appreciate your help! Give me a few minutes to drop some other helpful docs here for future folks, then we can close this issue out

MartyG-RealSense commented 6 months ago

You are very welcome, @Forsyth-Creations - I'm pleased that I was able to help. Thanks very much for the update about your success!

Forsyth-Creations commented 6 months ago

Okay, bit of a mind dump, but here goes. Just a brief timeline of things:

Still digging into this fix now, will report back with final findings!

MartyG-RealSense commented 6 months ago

Thanks so much for the detailed feedback. I look forward to your next report. Good luck!

MartyG-RealSense commented 5 months ago

Hi @MatthewRajan13 Do you have an update about this case that you can provide, please? Thanks!

MartyG-RealSense commented 5 months ago

Hi @MatthewRajan13 Do you require further assistance with this case, please? Thanks!

MartyG-RealSense commented 5 months ago

Case closed due to no further comments received.

Forsyth-Creations commented 5 months ago

Sorry about the delay! What we eventually did was install Ubuntu Jammy on the Raspberry Pi 4, then built a Docker container using docker buildx for an ARM version of the RealSense library. Things worked flawlessly after that! Thanks for all the help. If you want me to provide more context, I'd be happy to.

MartyG-RealSense commented 5 months ago

It's great to hear that you were successful. Thanks so much for the update and the sharing of your solution!

If you are willing to provide further details then I'm sure that it would be helpful to other Pi users with a docker container. Thanks again!

Forsyth-Creations commented 4 months ago

Sure thing! Here's the Dockerfile that produced our base image:

```dockerfile
FROM debian:bookworm

# Set the working directory
WORKDIR /

RUN apt-get update && apt-get install ffmpeg libsm6 libxext6 -y

# PyRealsense deps

RUN apt-get update && apt-get install python3-gst-1.0 gir1.2-gst-rtsp-server-1.0 gstreamer1.0-plugins-base gstreamer1.0-plugins-ugly libx264-dev python3-opencv -y

# RUN pip install poetry==1.7.1

RUN apt-get -y update && \
  apt-get -y install python3-dev python3-distutils python3-pip libssl-dev libxinerama-dev libsdl2-dev curl libblas-dev liblapack-dev gfortran libssl-dev git cmake libusb-1.0-0-dev && \
  rm -rf /var/lib/apt/lists/* && \
  apt-get clean

RUN apt update && apt-get install libglfw3-dev libgl1-mesa-dev libglu1-mesa-dev -y

RUN git clone https://github.com/IntelRealSense/librealsense.git

WORKDIR /librealsense

RUN mkdir build

WORKDIR /librealsense/build

RUN cmake ../ -DBUILD_SHARED_LIBS=false -DBUILD_PYTHON_BINDINGS=true -DPYTHON_EXECUTABLE=/usr/bin/python3 -DCMAKE_BUILD_TYPE=Release -DOpenGL_GL_PREFERENCE=GLVND
RUN make -j4
RUN make install

ENV PYTHONPATH=$PYTHONPATH:/librealsense/build/Release

WORKDIR /
```

MartyG-RealSense commented 4 months ago

@Forsyth-Creations Thanks so much!