Closed micat001 closed 11 months ago
Hi @micat001 can you extract the value of the offset that you get?
I suspect that the offset is generated by step 4.
If you reproject the points into the camera frame and then you set pc_msg.header.frame_id = target_frame
the result is a point cloud rendered with an offset of baseline/2
You can fix it in two ways: either use the point cloud already published by the wrapper instead of reprojecting yourself, or set pc_msg.header.frame_id to the camera frame rather than the target frame, which in your case is the optical frame.

A bit confused by this:
If you reproject the points into the camera frame and then you set pc_msg.header.frame_id = target_frame the result is a point cloud rendered with an offset of baseline/2
I don't understand this, unless I'm misunderstanding the point cloud frame and generation. There shouldn't be a translational offset between left_camera_frame and left_camera_optical_frame (and vice versa), right? Just an axes flip. The point_cloud_registered message has a frame_id of left_camera_frame and the depth has a frame_id of left_camera_optical_frame. Why can I not transform between the two?
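For what it's worth, under the usual ROS conventions (REP 103/105) the camera-frame/optical-frame relationship really is just an axes flip: a pure rotation with no translation. A minimal sketch, assuming the wrapper follows those conventions:

```python
import numpy as np

# REP 103/105 conventions (assumption: the wrapper follows them):
#   left_camera_frame:         x forward, y left,  z up
#   left_camera_optical_frame: z forward, x right, y down
# The static transform between them is a pure rotation, roughly
# RPY = (-pi/2, 0, -pi/2) -- which matches the hand-tuned angles
# (-1.521, 0, -1.571) below to within the tuning error.

# Rotation mapping optical-frame coordinates into camera-frame coordinates:
R_cam_from_optical = np.array([
    [0.0,  0.0, 1.0],   # optical z (forward) -> camera x (forward)
    [-1.0, 0.0, 0.0],   # optical x (right)   -> camera -y (y is left)
    [0.0, -1.0, 0.0],   # optical y (down)    -> camera -z (z is up)
])

p_optical = np.array([0.1, 0.2, 2.0])  # a point 2 m in front of the lens
p_camera = R_cam_from_optical @ p_optical  # == [2.0, -0.1, -0.2]
```

Note there is no baseline term anywhere in this transform, which is why a baseline/2 offset points at a frame_id mix-up rather than the camera/optical flip itself.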
For your first comment: if I manually tweak the translation/rotation tf until it seems to overlap, it looks something like this:
from tf.transformations import compose_matrix

# compose_matrix's first positional argument is scale, so the
# translation and angles must be passed as keyword arguments
tf_vector = [0.000, 0.15, -0.010, -1.521, -0.000, -1.571]
tf_matrix = compose_matrix(translate=tf_vector[:3], angles=tf_vector[3:])
It's still not quite aligned, but closer:
As for option 1 - I'd love to not reproject the cloud (assuming I can be confident of the frame of the point cloud message) but my issue is that I am running a custom detector on the /left/image_rect_color
topic. Once I have detected an object, I'd like to crop it from the point cloud for further 3D processing. To do this, I need to go from Image [u, v] -> point_cloud[x, y, z]. I am currently doing this by the process defined above (image[u, v] == depth[u, v] -> reproject(depth)-> [x, y, z]). If there's a better way to do this, I'm all ears!
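One way to do that last step without touching the point cloud at all is plain pinhole back-projection of the rectified pixel plus its depth. A minimal sketch; the intrinsics here are made-up placeholders, not real ZED values:

```python
import numpy as np

def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project a rectified pixel (u, v) with metric depth (meters)
    into a 3D point in the optical frame, using the pinhole model."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# placeholder intrinsics for illustration only
fx, fy, cx, cy = 700.0, 700.0, 640.0, 360.0

# the principal point maps straight down the optical axis
p = pixel_to_point(640.0, 360.0, 2.0, fx, fy, cx, cy)  # == [0, 0, 2]
```

The result is in the optical frame (z forward, x right, y down); transforming it into left_camera_frame is then just the static axes-flip rotation, with no translation involved.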
Thanks for your prompt responses @Myzhar!
*Edit: removed while I think about the minimum working example I'd need to solve this.
Using the published ROS topics, how can I take a [u, v] point in the color image (is the left_rect image the one to use?) and find the [x, y, z] point in the left camera frame? I've seen a few other issues that offer a solution where you look up the [u, v] point in the point cloud data, but I find this hard to work with because not all depth points are valid.
The issue is directly related to the resolution of the ZED.
When I set the ZED to the highest resolution (resolution: 0 in the config file), the reprojection is right on. At every other resolution, the reprojection is way off. This leads me to believe that I cannot use the intrinsics from the CameraInfo message of the registered depth image to do the reprojection.
@Myzhar - Can you chime in with why this might be resolution dependent?
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment otherwise it will be automatically closed in 5 days
Commenting to keep this issue alive.
The issue was in Step 3, the extraction of the intrinsics: the cx and fy entries had been swapped in my labels. The correct extraction is:
fx = intrinsic_array[0, 0]
cx = intrinsic_array[0, -1]  # was mislabelled "fy"
fy = intrinsic_array[1, 1]  # was mislabelled "cx"
cy = intrinsic_array[1, -1]
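For anyone hitting the same thing: CameraInfo.K is a flat, row-major 9-element array, so reshaping it to 3x3 makes the layout (and the correct indices) obvious. A sketch with placeholder values, not real ZED intrinsics:

```python
import numpy as np

# sensor_msgs/CameraInfo.K is row-major, 3x3:
#   [fx  0 cx]
#   [ 0 fy cy]
#   [ 0  0  1]
K = (700.0, 0.0, 640.0,   # placeholder values for illustration
     0.0, 701.0, 360.0,
     0.0, 0.0, 1.0)
intrinsic_array = np.array(K).reshape(3, 3)

fx = intrinsic_array[0, 0]
cx = intrinsic_array[0, 2]  # row 0, column 2 -- NOT fy
fy = intrinsic_array[1, 1]  # row 1, column 1 -- NOT cx
cy = intrinsic_array[1, 2]
```

Mixing up cx and fy explains the resolution dependence: at one resolution the two values can happen to be close, while at others they diverge and the reprojection lands far off.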
Preliminary Checks
As a note: this is listed as a "bug" because there doesn't seem to be a "questions" label (although it would fall more into that category)
Description
I'm having issues using the returned depth image and reprojecting it to a point cloud. I'm seeing an offset in the y-direction (relative to the zed2 left_camera_frame):
There's a small offset shown here:
Steps to Reproduce
I'm reading in the depth image and using the depth/camera_info to reproject the points to (x, y, z) like this:
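The original snippet isn't shown in the thread; a vectorized sketch of that kind of reprojection (pinhole model, placeholder intrinsics, invalid depths dropped):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Reproject a rectified depth image (meters, NaN where invalid)
    into an (N, 3) array of points in the optical frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[np.isfinite(pts).all(axis=1)]  # drop NaN/inf depths

# tiny synthetic example with one invalid pixel
depth = np.full((2, 2), 2.0)
depth[0, 0] = np.nan
pts = depth_to_points(depth, fx=700.0, fy=700.0, cx=1.0, cy=1.0)
# three valid points survive, all at z = 2.0
```

The fx, fy, cx, cy values would come from the depth topic's CameraInfo message; the result is in the optical frame, so the frame_id of the published cloud has to match that, or the points must be rotated into the camera frame first.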
I've always struggled to get color properly into a Python pc2 message, so I just left it as (x, y, z).
Expected Result
I'd expect the points to overlap pretty much perfectly, unless there's some magic step that ZED is doing in their API to get the (x, y, z) from depth.
Actual Result
A visible offset between the zed registered pointcloud and my published pointcloud.
ZED Camera model
ZED2
Environment
Anything else?
No response