I am also looking for the depth scale in order to recover the actual depth from the values published in /camera/aligned_depth_to_color/image_raw
@ChunJyeBehBeh Have you found any pointers for that?
Below is the code that I tested today; I wrote a node that converts the ROS image to OpenCV format.
#!/usr/bin/env python
import rospy
import numpy as np
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs.msg import Image

def convert_depth_image(ros_image):
    bridge = CvBridge()
    # Use cv_bridge() to convert the ROS image to OpenCV format
    try:
        # Convert the depth image using the default passthrough encoding
        depth_image = bridge.imgmsg_to_cv2(ros_image, desired_encoding="passthrough")
    except CvBridgeError, e:
        print e
        return
    # Convert the depth image to a Numpy array (values are depth in mm)
    depth_array = np.array(depth_image, dtype=np.float32)
    rospy.loginfo(depth_array)

def pixel2depth():
    rospy.init_node('pixel2depth', anonymous=True)
    rospy.Subscriber("/camera/aligned_depth_to_color/image_raw", Image, callback=convert_depth_image, queue_size=1)
    rospy.spin()

if __name__ == '__main__':
    pixel2depth()
(You may need to tidy up the indentation of the code when you copy and paste it into your editor.)
You can write a loop to display the middle part of the array (see the sketch after this comment). If you put a flat surface in front of the camera, parallel to it, you will see the numbers in the array change as you move the surface. Those numbers should be the real-world distance between the surface and your camera (the unit is mm).
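A minimal sketch of such a loop, assuming depth_array was built in the callback above and the depth scale is 1 mm:

rows, cols = depth_array.shape
# Print a 5x5 window around the image centre (rows are y, columns are x).
for v in range(rows // 2 - 2, rows // 2 + 3):
    for u in range(cols // 2 - 2, cols // 2 + 3):
        print("depth at (u=%d, v=%d): %.0f mm" % (u, v, depth_array[v, u]))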
I don't know whether I am subscribing to the correct topic. I have tried subscribing to "/camera/aligned_depth_to_color/image_raw", but it reports some errors.
Correct me if you find anything wrong.
I think you are subscribing to the correct topic. What are the errors? rostopic echo will not be much help, since the values in the depth topic are uint16. The depth scale is 1 mm, per the ROS convention for uint16 depth images.
@doronhi When I subscribe to "/camera/aligned_depth_to_color/image_raw" and "/camera/depth/image_rect_raw", both give me the same output. May I know the meaning of the data behind these topics?
Besides, may I ask a few more questions?
1. Resolution from 1920x1080 -> 640x480: is the resolution change due to cropping or resizing of the image? When I subscribe to the topic and read the depth of a certain pixel from depth_array, I can still get the depth of an object even though I cannot see it in rqt_image_view (RGB camera). So I am curious whether the image is cropped or resized.
2. What is the correct way to map a pixel coordinate in the RGB image to the depth_array of depth/image_rect_raw? E.g. if the resolution of my RGB image is 640x480 and the pixel coordinate of a certain point is (245, 370), what is the corresponding index of that point in the depth array?
Thank you in advance.
It is still not clear to me what scale is used to convert the 0-255 values I get in the depth topics to mm values. The unit16 can go up to 65,535, but the maximum I get is 255.
From what I know, you need to use cv_bridge() to convert the ROS image into an OpenCV image; then you will get the depth in mm. Try to run the code I posted above and check the result.
@mzahana, the values are not 0-255. You only see that if you convert the data to char, like echo does.
@ChunJyeBehBeh, what do you mean by "they both give me the same output"? /camera/aligned_depth_to_color/image_raw is supposed to be aligned to the color image. When you put them side by side, do they look the same? You can use the colorizer filter to make them easier to view in rqt_image_view.
@mzahana the uint16 data is stored as two uint8 bytes, so you have to combine the LSB and the MSB. Check the is_bigendian field of the sensor_msgs/Image message you receive if you want to know whether the LSB or the MSB comes first. Once you reconstruct the uint16 values, you will have the distance in mm.
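A minimal sketch of that reconstruction with numpy, assuming msg is the received sensor_msgs/Image with 16-bit depth data and a step equal to width * 2:

import numpy as np

def depth_image_to_array(msg):
    # Reinterpret the raw byte buffer as uint16, honouring the message's byte order.
    dtype = np.dtype(np.uint16).newbyteorder('>' if msg.is_bigendian else '<')
    depth = np.frombuffer(msg.data, dtype=dtype)
    # Reshape to (rows, cols); each value is the depth in mm (0 = no data).
    return depth.reshape(msg.height, msg.width)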
Works fine now. I am able to convert the depth image to a point cloud. I converted the ROS depth image to an OpenCV image and did the depth to 3D point mapping.
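A rough sketch of one way to do such a mapping (not necessarily the exact method used here); fx, fy, cx, cy would come from the camera_info of the aligned stream, and a 1 mm depth scale is assumed:

import numpy as np

def depth_to_points(depth_array, fx, fy, cx, cy):
    # depth_array: HxW array of depth in mm (0 = no data)
    h, w = depth_array.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_array / 1000.0          # mm -> metres
    x = (u - cx) * z / fx             # pinhole back-projection
    y = (v - cy) * z / fy
    return np.dstack((x, y, z))       # HxWx3; invalid points have z == 0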
Hi, I'm a little bit confused; I don't really understand what we have to do with the above code. I am also currently using an Intel RealSense D435 (not the "i" model) and I want to print my depth value in mm!
After downloading your code and putting it in the src folder of realsense_camera, what do I have to do?
Thank you, and sorry, I'm really lost.
@Vivoslespagnos, the code above is Python code. It had some minor errors, so I posted a fixed version: my_depth_subscriber.txt
roslaunch realsense2_camera rs_camera.launch align_depth:=true
python my_depth_subscriber.py
OK, thank you very much, I succeeded thanks to your instructions!
Note: I tried to verify the values by testing. They are fairly accurate up to about 3 meters; beyond that there is an error of around 10-20 cm. It is just an observation to consider.
Hi, I had a quick question about this. My understanding is that launching the RealSense camera using roslaunch realsense2_camera rs_camera.launch align_depth:=true creates a /camera/aligned_depth_to_color/image_raw topic, is that correct? This is what I'm looking for, but I'm currently launching a D435 camera on a drone inside Gazebo using a .launch file that looks like the following:
<?xml version="1.0"?>
<launch>
  <arg name="vehicle" default="iris"/>
  <arg name="sdf" default="$(find mavlink_sitl_gazebo)/models/iris_d435/iris_d435.sdf"/>

  <include file="$(find px4)/launch/mavros_posix_sitl.launch">
    <arg name="respawn_gazebo" value="true"/>
    <arg name="respawn_mavros" value="true"/>
    <arg name="vehicle" value="$(arg vehicle)"/>
    <arg name="sdf" value="$(arg sdf)"/>
    <arg name="verbose" value="true"/>
  </include>
</launch>
Is there an align_depth argument I can modify? If so, where would I do it?
Hi @kadhirumasankar, 'align_depth' is a boolean argument, so you can only choose 'true' or 'false' on the command line: roslaunch realsense2_camera rs_camera.launch align_depth:=true.
Other parameters (like image resolutions) can be changed in the 'rs_camera.launch' file in the 'launch' folder of the installed 'realsense2_camera' package. The installed folder can be found with the 'roscd realsense2_camera' command in a terminal.
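Launch arguments with defaults can also be overridden on the command line; for instance (argument names may differ between realsense2_camera versions, so check the top of rs_camera.launch):

roslaunch realsense2_camera rs_camera.launch align_depth:=true \
    color_width:=640 color_height:=480 \
    depth_width:=640 depth_height:=480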
About the physical unit of the pixel values of the depth image from the L515 lidar: in the depth image topic /camera/depth/image_rect_raw, the unit of the pixel values seems to be mm already, because the minimum value is about 500 (50 cm), apart from 0s (no value). And the topic data should be converted from uint8_t to uint16_t using a pointer type cast to get correct values.
@Vivoslespagnos, @MartyG-RealSense
The code in my_depth_subscriber.txt converts the image into an array and obtains the depth. How can this be modified to obtain the distance to a particular pixel in the image with known x and y coordinates?

depth_image = bridge.imgmsg_to_cv2(ros_image, "passthrough")
depth_array = np.array(depth_image, dtype=np.float32)
center_idx = np.array(depth_array.shape) / 2
print ('center depth:', depth_array[center_idx[0], center_idx[1]])

This gives the distance to the centre of the image. However, how do I obtain the distance of any other pixel?
Instead I did this:

depth_image = bridge.imgmsg_to_cv2(ros_image, "passthrough")
depth_array = np.array(depth_image, dtype=np.float32)
print 'center depth:', depth_array[[cy], [cx]]

I can see the output, except for an error when there is no value obtained for cx, cy. Is there any other alternative?
@rakeshjohn123 May I know what output you see? If you have correctly converted into depth_array and want to access the depth value of (pixel_x, pixel_y), you can replace the last line with depth_array[pixel_y, pixel_x] (see the sketch below).
If you are using C++, you can do it in this way. link
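A minimal sketch of that lookup, assuming depth_array has been built as above; pixel_x and pixel_y are hypothetical coordinates (reusing the (245, 370) example from earlier), and pixels with no depth data (value 0) are guarded against:

pixel_x, pixel_y = 245, 370              # example coordinates (column, row)
h, w = depth_array.shape
if 0 <= pixel_y < h and 0 <= pixel_x < w:
    d = depth_array[pixel_y, pixel_x]    # rows are y, columns are x
    if d > 0:
        print('depth at (%d, %d): %.0f mm' % (pixel_x, pixel_y, d))
    else:
        print('no depth data at (%d, %d)' % (pixel_x, pixel_y))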
The output is:
center depth: 430
The output value seems to be in mm and is appropriate. However, there is no data available when the distance is less than 250 mm; the output remains 0 for distances from 0 to 250 mm.
@bigbellmercy Thanks for your reply. My issue was that I was using the RealSense camera on a drone inside a Gazebo simulation environment. I had to use a third-party Gazebo plugin for the RealSense camera, and they hadn't implemented the align_depth flag. I fixed my issue by setting the horizontal FOV of the RGB and depth cameras to the same value. On an actual camera, I should be able to use the align_depth flag.
@rakeshjohn123 If I'm correct, the camera's launch file should have the minimum and maximum range of the RealSense camera. I'm pretty sure the minimum range is set to 0.25 m and the maximum range to 10 m. @ChunJyeBehBeh, please correct me if I'm wrong.
The launch file does not have such parameters (minimum and maximum range). I am using rs_camera.launch
Just a note on the encoding argument in the code snippet of the 3rd post in this thread: it should be desired_encoding="passthrough", not deside_encoding="passthrough".
It is still not clear to me what scale is used to convert the 0-255 values I get in the depth topics to mm values. The unit16 can go up to 65,535, but the maximum I get is 255.
I get the same; I don't have any idea how to convert it to meters.
Hi @hariharan382. My understanding is that the 0-255 scale arises from converting RealSense's uint16_t 16-bit depth values (which have a value range of 0-65535) to OpenCV's 8-bit format (with a value range of 0-255).
The C++ code below provides a simple demonstration of a conversion of RealSense's uint16_t depth values to an 8-bit cv::Mat image.
// Query frame size (width and height)
const int w = depth.as<rs2::video_frame>().get_width();
const int h = depth.as<rs2::video_frame>().get_height();
// Create OpenCV matrix of size (w,h) from the colorized depth data
Mat image(Size(w, h), CV_8UC3, (void*)depth.get_data(), Mat::AUTO_STEP);
OpenCV's cv::Mat values can be converted back to the RealSense SDK's rs2::frame depth format, as discussed at https://github.com/IntelRealSense/librealsense/issues/2634
I believe it would be necessary to perform such a conversion to obtain a meaningful depth value in meters, but you should be able to get a rough idea of what depth range an OpenCV depth image's color shades represent if you treat 0 as the darkest color and 255 as the lightest.
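In the ROS/Python context of this thread, a minimal sketch of that kind of 16-bit to 8-bit scaling, for visualization only (the real mm values stay in the original image; the 5 m range used for the display gain is just an assumption):

import cv2

# depth_array: HxW depth in mm (e.g. from imgmsg_to_cv2 with "passthrough")
# Scale into 0-255 purely for display; pick alpha so the range of interest fills 8 bits.
alpha = 255.0 / 5000.0                     # map 0-5 m onto 0-255
depth_8bit = cv2.convertScaleAbs(depth_array, alpha=alpha)
cv2.imshow('depth (8-bit preview)', depth_8bit)
cv2.waitKey(1)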
Hello,
roslaunch realsense2_camera rs_camera.launch align_depth:=true
I would like to get the depth (in mm) of a given pixel coordinate. May I know how to achieve this?
I have tried to echo the topic /camera/aligned_depth_to_color/image_raw, but I don't understand the meaning behind it. It seems like all the values are between 0 and 255. Besides, there are so many topics under it. May I know the meaning of each topic?
There are some parameter matrices (D, K, R, P). Does that mean I don't need to do a camera calibration for my RealSense D435 camera? Where is the world coordinate frame with respect to R?
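Those matrices come from the sensor_msgs/CameraInfo messages that accompany each image stream. A minimal sketch of reading the intrinsics fx, fy, cx, cy from K (the camera_info topic name of the aligned stream is assumed here for illustration):

import rospy
from sensor_msgs.msg import CameraInfo

def info_callback(msg):
    # K is a row-major 3x3 matrix: [fx 0 cx; 0 fy cy; 0 0 1]
    fx, fy = msg.K[0], msg.K[4]
    cx, cy = msg.K[2], msg.K[5]
    rospy.loginfo("fx=%.2f fy=%.2f cx=%.2f cy=%.2f", fx, fy, cx, cy)

rospy.init_node('camera_info_reader')
rospy.Subscriber('/camera/aligned_depth_to_color/camera_info', CameraInfo, info_callback, queue_size=1)
rospy.spin()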