IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

Regarding obtaining IMU data from the D455 in Python #13129

Open hahahah6 opened 1 week ago

hahahah6 commented 1 week ago

Required Info
Camera Model D455
Firmware Version 5.16.0.1
Operating System & Version ubuntu22.04
Kernel Version (Linux Only) 6.5.0-35-generic
Platform ubuntu
SDK Version v2.55.1
Language python
Segment others

Issue Description

I want to obtain data from the IMU. I am using the following code

## License: Apache 2.0. See LICENSE file in root directory.
## Parts of this code are
## Copyright(c) 2015-2017 Intel Corporation. All Rights Reserved.

##################################################
##      configurable realsense viewer           ##
##################################################

import pyrealsense2 as rs
import numpy as np
import cv2
import time

#
# NOTE: it appears that imu, rgb and depth cannot all be running simultaneously.
#       Any two of those 3 are fine, but not all three: causes timeout on wait_for_frames()
#
device_id = None  # "923322071108" # serial number of device to use or None to use default
enable_imu = True
enable_rgb = True
enable_depth = True
# TODO: enable_pose
# TODO: enable_ir_stereo

# Configure streams
if enable_imu:
    imu_pipeline = rs.pipeline()
    imu_config = rs.config()
    if device_id is not None:
        imu_config.enable_device(device_id)
    imu_config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 63) # acceleration
    imu_config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)  # gyroscope
    imu_profile = imu_pipeline.start(imu_config)

if enable_depth or enable_rgb:
    pipeline = rs.pipeline()
    config = rs.config()

    # if we are provided with a specific device, then enable it
    if device_id is not None:
        config.enable_device(device_id)

    if enable_depth:
        config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 60)  # depth

    if enable_rgb:
        config.enable_stream(rs.stream.color, 424, 240, rs.format.bgr8, 60)  # rgb

    # Start streaming
    profile = pipeline.start(config)

    # Getting the depth sensor's depth scale (see rs-align example for explanation)
    if enable_depth:
        depth_sensor = profile.get_device().first_depth_sensor()
        depth_scale = depth_sensor.get_depth_scale()
        print("Depth Scale is: ", depth_scale)
        if enable_rgb:
            # Create an align object
            # rs.align allows us to perform alignment of depth frames to other frames
            # The "align_to" is the stream type to which we plan to align depth frames.
            align_to = rs.stream.color
            align = rs.align(align_to)

try:
    frame_count = 0
    start_time = time.time()
    frame_time = start_time
    while True:
        last_time = frame_time
        frame_time = time.time() - start_time
        frame_count += 1

        #
        # get the frames
        #
        if enable_rgb or enable_depth:
            frames = pipeline.wait_for_frames(200 if (frame_count > 1) else 10000) # wait 10 seconds for first frame

        if enable_imu:
            imu_frames = imu_pipeline.wait_for_frames(200 if (frame_count > 1) else 10000)

        if enable_rgb or enable_depth:
            # Align the depth frame to color frame
            aligned_frames = align.process(frames) if enable_depth and enable_rgb else None
            depth_frame = aligned_frames.get_depth_frame() if aligned_frames is not None else frames.get_depth_frame()
            color_frame = aligned_frames.get_color_frame() if aligned_frames is not None else frames.get_color_frame()

            # Convert images to numpy arrays
            depth_image = np.asanyarray(depth_frame.get_data()) if enable_depth else None
            color_image = np.asanyarray(color_frame.get_data()) if enable_rgb else None

            # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
            depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET) if enable_depth else None

            # Stack both images horizontally
            images = None
            if enable_rgb:
                images = np.hstack((color_image, depth_colormap)) if enable_depth else color_image
            elif enable_depth:
                images = depth_colormap

            # Show images
            cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
            if images is not None:
                cv2.imshow('RealSense', images)

        if enable_imu:
            accel_frame = imu_frames.first_or_default(rs.stream.accel, rs.format.motion_xyz32f)
            gyro_frame = imu_frames.first_or_default(rs.stream.gyro, rs.format.motion_xyz32f)
            print("imu frame {} in {} seconds: \n\taccel = {}, \n\tgyro = {}".format(
                frame_count, frame_time - last_time,
                accel_frame.as_motion_frame().get_motion_data(),
                gyro_frame.as_motion_frame().get_motion_data()))

        # Press esc or 'q' to close the image window
        key = cv2.waitKey(1)
        if key & 0xFF == ord('q') or key == 27:
            cv2.destroyAllWindows()
            break

finally:
    # Stop streaming
    if enable_rgb or enable_depth:
        pipeline.stop()
    if enable_imu:
        imu_pipeline.stop()

but it reports the following error:

intel@intel:~/code/1$ python3 1.py 
Traceback (most recent call last):
  File "/home/intel/code/1/1.py", line 34, in <module>
    imu_profile = imu_pipeline.start(imu_config)
RuntimeError: Couldn't resolve requests
MartyG-RealSense commented 1 week ago

Hi @hahahah6 If you get the RuntimeError: Couldn't resolve requests error when configuring the Accel stream at a speed of '63', you may have a D455 that was manufactured after mid-2022: the minimum supported Accel speed on those newer D455 models changed from 63 to 100 due to a change of IMU component. Please try changing 63 to 100 in your Accel IMU config line.

imu_config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 100) # acceleration

You do not need to change the Gyro speed of 200.

jeezrick commented 1 week ago

Hi, when I set both gyro and accel to 200, I end up getting IMU data at 400 fps. And when I set accel to 100 and gyro to 200, I end up getting IMU data at 300 fps. This seems a little weird to me; what is the relation between the parameter in the enable_stream func and the final real fps? I use a D455 and basically the same code as this post.

Update: when I set gyro to 400 and accel to 200, I get IMU data at a final 600 fps. So they add up; why is that? Does the camera get IMU data interleaved? Like when getting gyro data, accel data stays still, and vice versa, which ends up adding their fps to the final fps? @MartyG-RealSense
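My understanding (an assumption, not confirmed SDK behavior) is that the accel and gyro streams are delivered as independent motion frames rather than one combined packet, so their rates simply add. A minimal simulation with synthetic timestamps (no camera needed) of two streams being merged into one time-ordered queue:

```python
def simulate_motion_frames(accel_fps, gyro_fps, seconds=1.0):
    """Generate (timestamp, stream) pairs for two independent IMU streams.

    Each accel and gyro sample becomes its own motion frame, which is why
    the observed combined rate is accel_fps + gyro_fps.
    """
    frames = [(i / accel_fps, "accel") for i in range(int(accel_fps * seconds))]
    frames += [(i / gyro_fps, "gyro") for i in range(int(gyro_fps * seconds))]
    return sorted(frames)  # merged, time-ordered queue of motion frames

print(len(simulate_motion_frames(200, 200)))  # 400
print(len(simulate_motion_frames(100, 200)))  # 300
```

This matches the numbers reported above: 200+200 gives 400 fps, 100+200 gives 300 fps, and 400+200 gives 600 fps.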

hahahah6 commented 1 week ago

Okay, thank you for your comments. I have obtained the IMU data, but it is affected by gravity. How can I eliminate that influence?

hahahah6 commented 1 week ago

I have another problem. I used the rs-motion code, but it reported the following error:

 Error: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (6,) + inhomogeneous part.
MartyG-RealSense commented 1 week ago

In the rs-motion C++ example, you can adjust an alpha value to give more weight to the gyro than the accelerometer (which takes gravity into account).

https://github.com/IntelRealSense/librealsense/blob/master/examples/motion/rs-motion.cpp#L117-L118

If you installed the RealSense SDK's tools and examples, do you experience that error if you run the pre-built executable version of the rs-motion example? You should be able to find it in the usr/local/bin folder of Ubuntu if you have not used it already.
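The gist of that alpha weighting can be sketched in Python. This is an illustrative complementary filter under my own assumptions, not the actual rs-motion code; the function and variable names are hypothetical:

```python
import math

ALPHA = 0.98  # weight given to the gyro path, playing the same role as rs-motion's alpha

def complementary_pitch(pitch, gyro_rate, accel_x, accel_z, dt):
    """One complementary-filter step for the pitch angle (radians).

    The gyro term tracks fast motion; the accelerometer term, which senses
    gravity, slowly pulls the estimate back so gyro drift cannot accumulate.
    """
    gyro_pitch = pitch + gyro_rate * dt          # short-term: integrate angular rate
    accel_pitch = math.atan2(accel_x, accel_z)   # long-term: tilt from gravity
    return ALPHA * gyro_pitch + (1.0 - ALPHA) * accel_pitch

# A stationary sensor with a wrong initial estimate converges toward the
# gravity-derived angle (0 here) over successive samples.
pitch = 1.0
for _ in range(200):
    pitch = complementary_pitch(pitch, gyro_rate=0.0, accel_x=0.0, accel_z=9.81, dt=0.005)
print(round(pitch, 3))  # 0.018 — the initial 1.0 rad error has decayed away
```

Once an orientation estimate like this is available, the known gravity vector can be rotated into the sensor frame and subtracted from the raw accel sample, which addresses the gravity question above.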

hahahah6 commented 1 week ago

I'm very sorry, I referenced the wrong file; it should be the rs-imu-calibration.py file. It reported the following error:

 Error: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (6,) + inhomogeneous part.
MartyG-RealSense commented 1 week ago

The error The requested array has an inhomogeneous shape has not been previously reported in regard to the rs-imu-calibration script, so there is unfortunately no known solution available. It is likely, though, that your camera already has a very good calibration that was performed in the factory, so the rs-imu-calibration.py script does not need to be used.

These days, an option called enable_motion_correction is enabled by default and 'fixes' raw IMU data so that its values are more correct.

MartyG-RealSense commented 2 days ago

Hi @hahahah6 Do you require further assistance with this case, please? Thanks!