introlab / rtabmap_ros

RTAB-Map's ROS package.
http://wiki.ros.org/rtabmap_ros
BSD 3-Clause "New" or "Revised" License

questions about mapData assembling & projecting /rtabmap/cloud_map to depth image #402

Closed dwbzxc closed 4 years ago

dwbzxc commented 4 years ago

Hi, I have two questions.

1. I know that mapData only stores the data of the current frame; however, it can be assembled continuously if we keep subscribing to it in RViz. The point cloud generated this way seems denser than the point cloud in /rtabmap/cloud_map. Is there a way to use the point cloud assembled from mapData? Or is it possible to get a dense point cloud online?

2. I am trying to project the point cloud in /rtabmap/cloud_map into a 640x480 depth image; the cloud I want is the part in front of the camera as the camera moves. I used the function

cv::Mat projectCloudToCamera(const cv::Size & imageSize, const cv::Mat & cameraMatrixK, const pcl::PointCloud<pcl::PointXYZ>::Ptr laserScan, const rtabmap::Transform & cameraTransform)

in

/rtabmap/corelib/src/util3d.cpp

(sorry, I am not familiar with inserting code). By the way, I used a D435 to run RTAB-Map, so I set the size as cv::Size size(640, 480);, cameraMatrixK as the intrinsic parameters of the D435: cv::Mat M4 = (cv::Mat_<double>(3, 3) << 613.1063232421875, 0, 323.24359130859375, 0, 611.8062744140625, 248.30075073242188, 0, 0, 1);, and laserScan as the cloud converted from /rtabmap/cloud_map with the RGB removed. For the camera transform I just used

rtabmap::Transform camera_transform(0.0,0.0,0.0,0.0,0.0,0.0,1.0)

for a test (no transformation). I am sure the cv::Size, cameraMatrixK and cameraTransform are in the right format. However, the dx_low, dy_low, dx_high, dy_high values I got in projectCloudToCamera were always outside the image size (640x480), so none of the points were added to the depth image. I don't know what went wrong. Is it a problem with the point cloud coordinates or with camera_transform?

Thanks in advance.

dwbzxc commented 4 years ago

Some output of projectCloudToCamera for reference:

ptScan.x .y .z= 3.862653 -0.520431 0.979572
ptScan after transform.x .y .z= 3.862653 -0.520431 0.979572
dx_low = 2740,dy_low = -76,dx_high = 2741,dy_high = -76

ptScan.x .y .z= 3.680488 -0.322796 0.972795
ptScan after transform.x .y .z= 3.680488 -0.322796 0.972795
dx_low = 2642,dy_low = 45,dx_high = 2643,dy_high = 45

Most of the data looks like this; none of the dx_low, dy_low, dx_high, dy_high values fall within the image size.

matlabbe commented 4 years ago

To get a denser cloud_map, you can set Grid/CellSize to 0.01 m, for example (default 0.05 m).
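In a launch file this can be set as a string parameter on the rtabmap node; a sketch (the node name and surrounding remappings are up to your setup):

```xml
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <!-- RTAB-Map parameters are passed as string params -->
  <param name="Grid/CellSize" type="string" value="0.01"/>
</node>
```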

For the projection problem, camera_transform should include the pose in the map frame combined with the local transform between the base frame and the camera optical frame. To make it easier, you could use the pointcloud_to_depthimage node for convenience, like in this tutorial:

$ rosrun rtabmap_ros pointcloud_to_depthimage \
   cloud:=/rtabmap/cloud_map \
   camera_info:=/camera/color/camera_info \
   _fixed_frame_id:=map

However, it is not very efficient to reproject the whole map to create a depth image (rather than taking the depth image directly from your D435). Not sure what you are trying to achieve with this part.

cheers, Mathieu

dwbzxc commented 4 years ago

I thought that reprojected depth images would be denser or map the real world more accurately than the depth images from the D435. Now it seems that directly using the depth image from the D435 is better.

Anyway, the pointcloud_to_depthimage node works well!

Thanks for your reply!

matlabbe commented 4 years ago

With a small Grid/CellSize, projecting the map's cloud into the camera could improve the range of the depth image (assuming we already mapped the area we are looking at from a closer viewpoint), but the projection would take quite a lot of time, and if there are errors in the map, the depth image will also be wrong.

cheers, Mathieu

dwbzxc commented 4 years ago

You are right. It seems better to use the depth image from the camera directly. Thanks!