Closed: Xonxt closed this issue 4 years ago
@Xonxt, the assumption is correct - the alignment operation modifies both the depth data and the camera matrix (intrinsic + extrinsic) of the processed frame and makes them similar (hence "align") to the target stream; RGB in your case. To answer the second part of your question - de-projecting depth pixels from the aligned depth frame generates x,y,z coordinates with (0,0,0) being the origin of the RGB sensor. I.e., all 2D/3D depth pixels/points are already in the color sensor's viewpoint. #4536 , #4315
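To make the geometry concrete, here is a minimal pure-Python sketch (not the SDK's actual implementation) of what `rs2_deproject_pixel_to_point` computes for a distortion-free pinhole model. The intrinsics values below are made up for illustration; after align-to-color you would use the color stream's intrinsics, so (0,0,0) is the color sensor's optical center:

```python
# Illustrative sketch only (not librealsense source): for an undistorted
# pinhole model, deprojection inverts the projection
#   u = fx * (X/Z) + ppx,   v = fy * (Y/Z) + ppy
# using the aligned frame's intrinsics.

def deproject_pixel_to_point(intrin, pixel, depth):
    """Back-project pixel (u, v) with depth Z (meters) to a 3D point."""
    u, v = pixel
    x = (u - intrin["ppx"]) / intrin["fx"]
    y = (v - intrin["ppy"]) / intrin["fy"]
    return [depth * x, depth * y, depth]

# Hypothetical color intrinsics, for demonstration only.
color_intrin = {"fx": 600.0, "fy": 600.0, "ppx": 320.0, "ppy": 240.0}

# The principal point deprojects straight down the optical axis.
print(deproject_pixel_to_point(color_intrin, (320.0, 240.0), 1.0))  # [0.0, 0.0, 1.0]
```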
@ev-mp, so that means that when the depth and color frames are aligned and I'm extracting the (x,y,z) coordinates that way, these 3D coordinates are already given relative to the color sensor? Got it, thanks. This is helpful.
And if I wanted to transform them back into the depth (or IR?) sensor's viewpoint, as it was before the alignment, while still using the alignment, do I need to use rs2_transform_point_to_point() with the color_to_depth extrinsic, or the color_to_infrared one?
color_to_depth, or color_to_infrared1 if it exists.
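For reference, rs2_transform_point_to_point applies a rigid-body transform p' = R*p + t from an rs2_extrinsics (whose 3x3 rotation is stored column-major); in pyrealsense2 the extrinsics can be queried from the stream profiles, e.g. via get_extrinsics_to(). A minimal pure-Python sketch with made-up extrinsics values:

```python
# Illustrative sketch of rs2_transform_point_to_point: a rigid-body
# transform p' = R * p + t, where `rotation` is a column-major 3x3
# matrix and `translation` is in meters (as in rs2_extrinsics).

def transform_point_to_point(extrin, p):
    R = extrin["rotation"]      # column-major order
    t = extrin["translation"]
    return [
        R[0] * p[0] + R[3] * p[1] + R[6] * p[2] + t[0],
        R[1] * p[0] + R[4] * p[1] + R[7] * p[2] + t[1],
        R[2] * p[0] + R[5] * p[1] + R[8] * p[2] + t[2],
    ]

# Hypothetical color->depth extrinsics: identity rotation plus a small
# baseline shift along x. These numbers are invented for illustration;
# real values come from the device calibration.
color_to_depth = {
    "rotation": [1, 0, 0, 0, 1, 0, 0, 0, 1],
    "translation": [0.015, 0.0, 0.0],
}

point_in_color = [0.0, 0.0, 1.0]
print(transform_point_to_point(color_to_depth, point_in_color))  # [0.015, 0.0, 1.0]
```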
I am using the frame alignment, to align the depth frame to the color frame, i.e.:
align = rs2.align(rs2.stream.color)
I then need to get the 3D coordinates of points (long story short, I run a hand-tracking algorithm and need to send the 3D coordinates somewhere).
I'm just doing a simple:
and that's it.
But if I'm running the frame alignment, does it somehow affect the 3D coordinates? Do I need to do anything additional (like, maybe, use the rs2_transform_point_to_point() function to transform the 3D point into the color camera's viewpoint)? Thank you.