pal-robotics / realsense_gazebo_plugin


Align Depth and Color Images #13

Closed · ShrutheeshIR closed this 4 years ago

ShrutheeshIR commented 4 years ago

Thanks for the repo! I have managed to get it running and I am able to get the images. I notice that the depth images are 720x1280 while the color images are 1080x1920 (HxW). How can I align the two streams to a common format? I need this because I want to segment objects using the color images and mask them in the corresponding depth images. Thanks!

ShrutheeshIR commented 4 years ago

Any updates?

v-lopez commented 4 years ago

Those are the default image sizes of the camera: https://www.intelrealsense.com/depth-camera-d435/

Probably something along these lines can help you: http://wiki.ros.org/depth_image_proc

But this is not the place for such questions; please try to research it yourself and ask on ROS Answers.

Good luck!

kadhirumasankar commented 3 years ago

Hi @ShrutheeshIR and @v-lopez, I feel like this is the right place for questions like that since align_depth is a launch parameter available in realsense-ros, but it looks like it is not available in this implementation. Do you have that flag in your plugin?

themultiplexer commented 3 years ago

I also need the align_depth parameter (the topic /camera/aligned_depth_to_color/image_raw) because I want my code to work the same way in Gazebo and on the real hardware. Currently I am trying to project a color pixel to a depth point, but I don't know how to get the camera's extrinsics.

It would be much easier if the simulated color and depth images were aligned. Right now I don't know how to align those frames without librealsense, which is why I can't implement it myself.
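
For anyone attempting the alignment by hand: in simulation the extrinsics are just the TF between the depth and color optical frames, and registration boils down to deproject-transform-reproject with the two CameraInfo intrinsics (which is what depth_image_proc/register automates for the whole image). Below is a rough, untested Python sketch along those lines; the /depth_cam/... topic names are assumptions matching the launch snippet further down, and it maps a depth pixel into the color image rather than the reverse, since a color pixel alone carries no depth to deproject.

#!/usr/bin/env python
# Sketch: register a single depth pixel into the color image by hand.
# Topic and frame names are assumptions; depth_image_proc does this per-image.
import rospy
import numpy as np
import tf2_ros
from sensor_msgs.msg import CameraInfo

rospy.init_node("manual_align_sketch")

tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

# Intrinsics come from the camera_info topics the plugin publishes.
depth_info = rospy.wait_for_message("/depth_cam/depth/camera_info", CameraInfo)
color_info = rospy.wait_for_message("/depth_cam/color/camera_info", CameraInfo)
Kd = np.array(depth_info.K).reshape(3, 3)
Kc = np.array(color_info.K).reshape(3, 3)

# The "extrinsics" are the transform from the depth optical frame to the color optical frame.
tf_msg = tf_buffer.lookup_transform(color_info.header.frame_id,
                                    depth_info.header.frame_id,
                                    rospy.Time(0), rospy.Duration(2.0))
t = tf_msg.transform.translation
q = tf_msg.transform.rotation
x, y, z, w = q.x, q.y, q.z, q.w
# Quaternion -> rotation matrix.
R = np.array([[1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
              [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
              [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)]])
T = np.array([t.x, t.y, t.z])

def depth_pixel_to_color_pixel(u, v, depth_m):
    """Deproject depth pixel (u, v) at depth_m metres, move it to the color frame, reproject."""
    p_depth = depth_m * np.linalg.inv(Kd).dot([u, v, 1.0])  # 3D point in the depth frame
    p_color = R.dot(p_depth) + T                            # same point in the color frame
    uvw = Kc.dot(p_color)
    return uvw[0] / uvw[2], uvw[1] / uvw[2]                 # pixel coordinates in the color image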

themultiplexer commented 3 years ago

@kadhirumasankar @ShrutheeshIR I solved it with depth_image_proc as @v-lopez advised:

<node pkg="nodelet" type="nodelet" name="standalone_nodelet" args="manager" output="screen"/>
<node pkg="nodelet" type="nodelet" name="register_depth" args="load depth_image_proc/register standalone_nodelet" output="screen" respawn="false">
    <remap from="/rgb/camera_info" to="/depth_cam/color/camera_info" />
    <remap from="/depth/camera_info" to="/depth_cam/depth/camera_info" />
    <remap from="/depth/image_rect" to="/depth_cam/depth/image_raw" />
</node>
<node pkg="nodelet" type="nodelet" name="cloudify" args="load depth_image_proc/point_cloud_xyzrgb standalone_nodelet" output="screen" respawn="false">
    <remap from="/rgb/camera_info" to="/depth_cam/color/camera_info" />
    <remap from="/rgb/image_rect_color" to="/depth_cam/color/image_raw" />
    <param name="queue_size" value="10" />
</node>

After starting these nodelets, /depth_registered/points is aligned to the color image /depth_cam/color/image_raw and I can combine OpenCV and PCL again. I am translating an image pixel (x, y) to a point cloud index (col, row), but using rs2_deproject_pixel_to_point(..) might be the better approach.
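
For reference, here is a minimal, untested Python sketch of that pixel-to-point lookup: the organized cloud from point_cloud_xyzrgb should have the same width and height as the color image, so an image pixel indexes it directly. The topic name matches the launch snippet above; the node name and example pixel are made up.

#!/usr/bin/env python
# Sketch: read the 3D point behind a color pixel from the registered, organized cloud.
import rospy
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2

def point_at_pixel(cloud_msg, u, v):
    """Return (x, y, z) for image pixel (u, v); values are NaN where there is no depth."""
    gen = pc2.read_points(cloud_msg, field_names=("x", "y", "z"),
                          skip_nans=False, uvs=[(u, v)])
    return next(gen)

def callback(cloud_msg):
    x, y, z = point_at_pixel(cloud_msg, 320, 240)  # example pixel (u = column, v = row)
    rospy.loginfo("3D point at (320, 240): %.3f %.3f %.3f", x, y, z)

rospy.init_node("pixel_to_point_sketch")
rospy.Subscriber("/depth_registered/points", PointCloud2, callback, queue_size=1)
rospy.spin()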