Probably not the best interface indeed.
If you try:
dev.deproject_pixel_to_point(np.array([320, 240], dtype=np.float32), d[240, 320])
dev.points[240, 320]
you should get approximately the same result, which means you can deproject a depth pixel, given its coordinate and depth, into a 3D point.
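For completeness, here is a self-contained version of that comparison. This is a minimal sketch assuming the pyrealsense wrapper API; the stream classes and their defaults may differ between versions:

import numpy as np
import pyrealsense as pyrs

with pyrs.Service() as serv:
    # DepthStream provides the raw depth image; PointStream provides the
    # already-deprojected point cloud exposed as dev.points.
    with serv.Device(streams=(pyrs.stream.DepthStream(),
                              pyrs.stream.PointStream())) as dev:
        dev.wait_for_frames()
        d = dev.depth * dev.depth_scale  # raw depth scaled to meters

        # Deproject the depth pixel at row 240, column 320; note that the
        # pixel argument is ordered (x, y), i.e. (column, row).
        point = dev.deproject_pixel_to_point(
            np.array([320, 240], dtype=np.float32), d[240, 320])

        print(point)                 # deprojected 3D point
        print(dev.points[240, 320])  # should agree approximately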
Starting from the dac stream, though, the transformation provided by deproject_pixel_to_point is not enough: it expects a pixel coordinate in depth space, while you would be providing one in RGB space. I think you need to do one more transform, in the same fashion as librealsense's rs_transform_point_to_point, or use the cad stream.
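That extra step is just the rigid transform given by the depth-to-color extrinsics. Below is a minimal numpy sketch of what rs_transform_point_to_point does (rotate, then translate); the helper name here is hypothetical, and obtaining the extrinsics from your device is left out:

import numpy as np

def transform_point_to_point(rotation9, translation3, point3):
    """Hypothetical helper mirroring librealsense's rs_transform_point_to_point.

    rotation9: the 9 floats of rs_extrinsics.rotation (column-major 3x3),
    translation3: the 3 floats of rs_extrinsics.translation, in meters,
    point3: the 3D point to map from one camera's space to the other's.
    """
    R = np.asarray(rotation9, dtype=np.float32).reshape(3, 3).T  # column-major -> row-major
    t = np.asarray(translation3, dtype=np.float32)
    return R @ np.asarray(point3, dtype=np.float32) + t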
Hello, great. I now see my trivial mistake (I had inverted the x, y coordinates). I currently use cad together with the depth stream.
When I attempt to deproject a pixel, I get really weird x, y coordinates. I am not sure whether this is a problem in pyrealsense or in librealsense itself; I found an issue there that discussed a similar topic, but I am already working with a dac stream.
My minimal Python code is the following:
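(A hedged sketch of the kind of loop in question follows; the stream setup and exact calls are assumptions based on the discussion above, not the poster's verbatim snippet:)

import numpy as np
import pyrealsense as pyrs

with pyrs.Service() as serv:
    # cad (color aligned to depth) needs both the color and depth streams.
    with serv.Device(streams=(pyrs.stream.ColorStream(),
                              pyrs.stream.DepthStream(),
                              pyrs.stream.CADStream())) as dev:
        dev.wait_for_frames()
        d = dev.depth * dev.depth_scale * 1000  # depth in millimeters

        # Deproject the central pixel of the 480x640 image: row 240,
        # column 320, passed as (x, y) = (320, 240).
        point = dev.deproject_pixel_to_point(
            np.array([320, 240], dtype=np.float32), d[240, 320])
        print(point)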
The output then looks like this:
The z axis/distance corresponds to what I see in my stream. However, I do not understand the x, y values. The SDK documentation suggests that the [0, 0, 0] coordinate should be at the sensor, so if I am looking at a central pixel of the image (pixel [240, 320] of a 480x640 image), I would not expect the result in the point coordinate system to be 364 and 260 mm off. What confuses me even more is that the x and y values grow with the distance of the object.
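(For context, librealsense's rs_deproject_pixel_to_point, in rsutil.h, implements a pinhole model in which x and y are the depth multiplied by the pixel's normalized offset from the principal point, so for any pixel not exactly at (ppx, ppy) they grow linearly with distance. A sketch, ignoring distortion, with the intrinsics held in a plain dict for illustration:)

import numpy as np

def deproject_pixel_to_point(intrin, pixel, depth):
    # Normalized ray direction: the pixel's offset from the principal
    # point (ppx, ppy), divided by the focal lengths (fx, fy).
    x = (pixel[0] - intrin['ppx']) / intrin['fx']
    y = (pixel[1] - intrin['ppy']) / intrin['fy']
    # Both coordinates are scaled by depth, which is why x and y grow
    # with the distance of the object for an off-center pixel.
    return np.array([depth * x, depth * y, depth], dtype=np.float32)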
Thank you for any help, or for clarification of where I am mistaken.