portgasray opened this issue 5 years ago
depth accuracy of d435i #914
How to get RGBD Depth Image Stream? #861
is this the correct way to get depth at (x,y) using cv_bridge? #807
Python Example https://github.com/RiddhimanRaut/Ur5_Visual_Servoing/blob/master/ur5_control_nodes/src/object_detect.py
get depth from realsense camera IntelRealSense/librealsense#3460 (comment)
From pixel to world https://stackoverflow.com/questions/13419605/how-to-map-x-y-pixel-to-world-cordinates
From world to pixel https://blog.csdn.net/lyl771857509/article/details/79633412
Intrinsic camera parameters from rs-enumerate-devices -c, for the color stream at Width * Height = 640 * 480:
PPX: 314.059814453125, PPY: 248.703369140625, Fx: 613.410034179688, Fy: 613.709289550781
Intrinsic matrix (units are pixels, not mm):
613.41003 0.00000 314.05981
0.00000 613.70928 248.70337
0.00000 0.00000 1.00000
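With these intrinsics, a pixel (u, v) plus its depth can be deprojected to camera-frame coordinates in meters. A minimal sketch assuming a plain pinhole model with no lens distortion (the constants are the values printed above):

```python
# Deproject a pixel (u, v) with known depth (meters) to a 3D point
# in the camera frame, assuming a pinhole model with no distortion.

FX, FY = 613.410034179688, 613.709289550781    # focal lengths (pixels)
PPX, PPY = 314.059814453125, 248.703369140625  # principal point (pixels)

def deproject(u, v, depth_m):
    """Return (x, y, z) in meters for pixel (u, v) at depth depth_m."""
    x = (u - PPX) / FX * depth_m
    y = (v - PPY) / FY * depth_m
    return (x, y, depth_m)

# The principal point at 1 m depth maps to (0, 0, 1):
print(deproject(PPX, PPY, 1.0))  # -> (0.0, 0.0, 1.0)
```

The same two divisions are what the library-level deprojection calls boil down to when the distortion coefficients are zero.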
Transforming a pixel from a depth image to world coordinates
https://github.com/IntelRealSense/librealsense/issues/1904#issuecomment-398434434
How to get RGBD Depth Image Stream in ROS-realsense https://github.com/IntelRealSense/realsense-ros/issues/861
what's the difference between depth raw and aligned depth? #573
Align Depth to Color get depth distance #4021
How to convert a pixel to x, y in meters? https://github.com/IntelRealSense/librealsense/issues/2481#issuecomment-428651169
Use message_filters.ApproximateTimeSynchronizer() to synchronize the messages from different topics (in our case, depth_msg and color_msg).
Subscribe to camera_info to get the intrinsics along with other properties of the camera (and don't forget to synchronize it as well).
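The two steps above can be wired together roughly like this. The topic names assume the default realsense-ros namespaces with align_depth enabled, and intrinsics_from_info is just an illustrative helper; the ROS imports are kept inside main() so the helpers stay importable without a ROS installation:

```python
# Sketch: synchronize depth + color + camera_info and read intrinsics.

def intrinsics_from_info(K):
    """Extract (fx, fy, cx, cy) from the row-major 3x3 K matrix
    carried in sensor_msgs/CameraInfo."""
    return K[0], K[4], K[2], K[5]

def synced_callback(depth_msg, color_msg, info_msg):
    fx, fy, cx, cy = intrinsics_from_info(info_msg.K)
    # ... convert depth_msg with cv_bridge, look up the depth at (u, v),
    # then deproject using fx, fy, cx, cy ...

def main():
    import rospy
    import message_filters
    from sensor_msgs.msg import Image, CameraInfo

    rospy.init_node("depth_at_pixel")
    depth_sub = message_filters.Subscriber(
        "/camera/aligned_depth_to_color/image_raw", Image)
    color_sub = message_filters.Subscriber("/camera/color/image_raw", Image)
    info_sub = message_filters.Subscriber("/camera/color/camera_info", CameraInfo)
    # slop is the max timestamp difference (seconds) for a match.
    sync = message_filters.ApproximateTimeSynchronizer(
        [depth_sub, color_sub, info_sub], queue_size=10, slop=0.1)
    sync.registerCallback(synced_callback)
    rospy.spin()

# main()  # call under a running ROS master
```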
Use the image_geometry PinholeCameraModel.projectPixelTo3dRay((u, v)) API to perform deprojection in a realsense-ros environment.
rs::deproject_pixel_to_meters: from the source code we can see that handling the different distortion models (recorded in the camera's info) is part of the deprojection, in addition to the direct calculation of x and y.
image_geometry.projectPixelTo3dRay((u, v)): source code
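The distortion handling mentioned above can be sketched in Python. This loosely follows the structure of rs2_deproject_pixel_to_point in the librealsense source for the inverse Brown-Conrady model; the coefficient ordering and exact polynomial here are assumptions based on that source, and with all coefficients zero it reduces to the plain pinhole case:

```python
# Pixel -> 3D point, undistorting the normalized coordinates first.
# Assumed coefficient order (k1, k2, p1, p2, k3), as in a plumb-bob model.

def deproject_pixel_to_point(u, v, depth_m, fx, fy, cx, cy,
                             coeffs=(0.0, 0.0, 0.0, 0.0, 0.0)):
    k1, k2, p1, p2, k3 = coeffs
    x = (u - cx) / fx
    y = (v - cy) / fy
    # Inverse Brown-Conrady correction of the normalized coordinates.
    r2 = x * x + y * y
    f = 1 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
    ux = x * f + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    uy = y * f + 2 * p2 * x * y + p1 * (r2 + 2 * y * y)
    return (depth_m * ux, depth_m * uy, depth_m)
```

With zero coefficients this gives the same result as the bare intrinsic-matrix deprojection, which is why ignoring distortion often "works" on rectified or low-distortion streams.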
The key point of getting depth at the exact (x, y) is to align the depth_frame to the color_frame, which can be done by adding align_depth:=true when launching rs_camera.launch, and then subscribing to camera/aligned_depth_to_color/image_raw instead of camera/depth/image_rect_raw.
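Once subscribed to the aligned topic, the per-pixel lookup itself is simple. This assumes the image arrives as 16UC1 with depth in millimeters and the common RealSense depth scale of 0.001 (query the device to be sure), after converting the message with cv_bridge's passthrough encoding:

```python
# Depth at pixel (u, v) from a 16UC1 depth image, treated here as a plain
# row-major list of rows (a cv_bridge-converted array indexes the same way).

def depth_at(depth_image, u, v, depth_scale=0.001):
    """Return depth in meters at pixel (u, v); 0 means no data there."""
    return depth_image[v][u] * depth_scale  # row (v) first, column (u) second

# Tiny fake 2x3 depth image, values in millimeters:
img = [[0, 500, 1000],
       [1500, 2000, 2500]]
print(depth_at(img, 2, 1))  # u=2, v=1 -> 2500 mm -> 2.5
```

Getting the row/column order wrong (indexing [u][v] instead of [v][u]) is a common cause of "wrong depth at my pixel" reports.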
https://github.com/facebookresearch/pyrobot/blob/92132a29246a7bbecb1f6b2d0170e1507704b1ea/examples/grasping/locobot.py#L102