Closed: miguelriemoliveira closed this issue 7 years ago
Hi,
I looked at the issue you linked, and the registration seems to be OK. But you have to make sure that you are using the correct pair of topics for the mesh. /kinect2/hd/image_color_rect and /kinect2/hd/image_depth_rect are used to calculate the point cloud, so if you want a color image whose pixels correspond to the points in the point cloud, you have to use /kinect2/hd/image_color_rect and not /kinect2/hd/image_color.
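A quick way to check that the two rectified streams really pair up is to synchronize them and compare their headers. Below is a minimal sketch, assuming a sourced ROS environment, a running kinect2_bridge, and the default topic names; the node name rect_pair_check is made up for this example:

import rospy
import message_filters
from sensor_msgs.msg import Image

def callback(color, depth):
    # The rectified pair share the same camera geometry, so the two
    # images should agree in size and frame_id.
    rospy.loginfo("color %dx%d (%s) / depth %dx%d (%s)",
                  color.width, color.height, color.header.frame_id,
                  depth.width, depth.height, depth.header.frame_id)

rospy.init_node("rect_pair_check")
color_sub = message_filters.Subscriber("/kinect2/hd/image_color_rect", Image)
depth_sub = message_filters.Subscriber("/kinect2/hd/image_depth_rect", Image)
sync = message_filters.ApproximateTimeSynchronizer([color_sub, depth_sub], 5, 0.05)
sync.registerCallback(callback)
rospy.spin()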
Hi Thiemo,
Thanks for the clarifications.
Concerning the second issue, I checked. I am using the image_color_rect and the image_depth_rect topics. Here's the portion of my launch file that produces the point clouds:
<!-- Create point clouds for each scan. Adapted from kinect2_bridge/launch/kinect2_bridge.launch.
     Only the hd point cloud (1920 x 1080) is generated. -->
<node pkg="nodelet" type="nodelet" name="standalone_nodelet" args="manager" output="screen"/>
<node pkg="nodelet" type="nodelet" name="$(arg base_name)_points_xyzrgb_hd" args="load depth_image_proc/point_cloud_xyzrgb standalone_nodelet" output="screen">
  <remap from="rgb/camera_info" to="$(arg base_name)/hd/camera_info"/>
  <remap from="rgb/image_rect_color" to="$(arg base_name)/hd/image_color_rect"/>
  <remap from="depth_registered/image_rect" to="$(arg base_name)/hd/image_depth_rect"/>
  <remap from="depth_registered/points" to="$(arg base_name)/hd/points"/>
  <param name="queue_size" type="int" value="1"/>
</node>
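For completeness, here is how we checked which frame the resulting cloud is stamped with (a minimal sketch; the node name cloud_frame_check is made up, and the topic assumes base_name:=kinect2):

import rospy
from sensor_msgs.msg import PointCloud2

# Grab one cloud and print the frame it is expressed in, plus its size.
rospy.init_node("cloud_frame_check")
msg = rospy.wait_for_message("/kinect2/hd/points", PointCloud2)
print(msg.header.frame_id, msg.width, msg.height)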
The problem must be somewhere else. Thanks for the help, I will close the issue.
Miguel
First of all, I am sorry if my question has an obvious answer, but I really need to clarify this.
The context: we are trying to use MeshLab to show a point cloud taken from the Kinects together with the corresponding image, but we are having alignment problems, meaning that the point cloud data and the image do not overlay correctly as they should. See the full issue here:
https://github.com/cnr-isti-vclab/meshlab/issues/80#issuecomment-283090154
Note that we have calibrated the Kinect using the procedures described here. As far as we can tell, we get reasonable values for the reprojection error, extrinsics, intrinsics, etc.
We assumed that, since the iai_kinect2 ROS drivers already do depth registration, the point cloud we get is already registered to the RGB optical frame (the frame_id field of the point cloud message says camera_rgb_optical_frame). Thus we thought that, in order to represent both the point cloud data and the image, we only have to say where the camera is w.r.t. the point cloud, and, since the point cloud was registered to the RGB image, the extrinsic parameters for this transformation would be:
Rotation = identity matrix
Translation = zero vector
This was our first assumption, but after discussing with the MeshLab people we are not sure anymore. They say there should be some non-zero values in the translation, since there is a distance (around 5 cm) between the RGB and the IR sensor.
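To make our assumption concrete, this is how we would project points from the cloud back onto the RGB image under identity extrinsics (a minimal sketch; the intrinsic values below are placeholders, the real fx, fy, cx, cy would come from /kinect2/hd/camera_info):

import numpy as np

# Placeholder HD intrinsics; the real values come from the camera_info topic.
fx, fy, cx, cy = 1081.4, 1081.4, 959.5, 539.5
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

R = np.eye(3)      # identity rotation: the cloud is already in the RGB optical frame
t = np.zeros(3)    # zero translation, under the same assumption

def project(point_xyz):
    # Standard pinhole projection of a 3D point onto the image plane.
    p = K @ (R @ point_xyz + t)
    return p[:2] / p[2]  # pixel coordinates (u, v)

print(project(np.array([0.1, 0.0, 1.0])))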
Now for the question: given that the cloud is already registered to the RGB frame, is this identity / zero-translation assumption correct, or should the transformation include the roughly 5 cm RGB-to-IR baseline?
Thank you for any help you can provide,
Miguel