IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

Is the verts data generated by rs.pointcloud the same as the result of rs2_deproject_pixel_to_point? #13529

Open ysj2009 opened 1 hour ago

ysj2009 commented 1 hour ago

Hi, I am trying to convert the pixels of a segmentation result (a region of an image) to camera coordinates and then to a 2D grid map. I have been using the function rs2_deproject_pixel_to_point for this, but it is too slow because there are so many pixels to process. Later I found the following code in opencv_pointcloud_viewer.py:

        points = pc.calculate(depth_frame)
        pc.map_to(mapped_frame)

        # Pointcloud data to arrays
        v, t = points.get_vertices(), points.get_texture_coordinates()
        verts = np.asanyarray(v).view(np.float32).reshape(-1, 3)  # xyz

which converts the depth_frame to point-cloud data. The verts array looks like what I need, but I am not sure. Or is there a better way to do this? Thanks.
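For reference, the per-pixel math that rs2_deproject_pixel_to_point performs can be vectorized over the whole depth image with NumPy, which avoids the slow Python loop. This is a sketch assuming an undistorted pinhole model; the intrinsics values (fx, fy, ppx, ppy) and depth_scale below are placeholders, and in real code would come from the stream profile's intrinsics and the device's depth scale:

```python
import numpy as np

def deproject_all(depth, fx, fy, ppx, ppy, depth_scale):
    """Deproject an entire depth image to camera-space XYZ in one shot.

    Vectorized equivalent of calling rs2_deproject_pixel_to_point on
    every pixel, assuming no lens distortion (a simplification)."""
    h, w = depth.shape
    z = depth.astype(np.float32) * depth_scale      # raw units -> metres
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates
    x = (u - ppx) / fx * z
    y = (v - ppy) / fy * z
    return np.dstack((x, y, z))                     # shape (h, w, 3)

# Hypothetical intrinsics, for illustration only
pts = deproject_all(np.full((480, 640), 1000, dtype=np.uint16),
                    fx=600.0, fy=600.0, ppx=320.0, ppy=240.0,
                    depth_scale=0.001)
```

Pixels can then be selected from pts with ordinary NumPy indexing, e.g. using a boolean segmentation mask.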

MartyG-RealSense commented 1 hour ago

Hi @ysj2009 They are not the same method, but they produce similar results.

Using pc.map_to and pc.calculate instead of rs2_deproject_pixel_to_point provides more accurate results if depth to color alignment is being used. In practice though, the difference in accuracy between the two methods is small. If pc.calculate works well for you then I would certainly recommend continuing to use it.
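If you do use pc.calculate, one convenient property is that verts comes back in depth-image order, so after a reshape a boolean segmentation mask can index it directly. A minimal sketch with synthetic data (the shape, mask region, and values are placeholders standing in for a real verts array and segmentation result):

```python
import numpy as np

h, w = 480, 640
# Stand-in for the verts array from points.get_vertices(): one XYZ
# triple per depth pixel, flattened in row-major (image) order.
verts = np.zeros((h * w, 3), dtype=np.float32)
verts[:, 2] = 1.0                      # pretend every pixel is at z = 1 m

# Boolean segmentation mask over the image (hypothetical region)
mask = np.zeros((h, w), dtype=bool)
mask[100:200, 300:400] = True

# Reshape to image layout and keep only the segmented points
segmented = verts.reshape(h, w, 3)[mask]
```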

The only method faster than pc.calculate would be rs2_project_color_pixel_to_depth_pixel, which converts a single RGB color pixel into a depth pixel and so provides the depth value for one coordinate at a time, meaning you do not need to burden the computer with depth-to-color aligning the entire image.