ros-industrial / industrial_reconstruction

Tools for surface reconstruction from 2D depth images
Apache License 2.0

how to improve reconstruction result #5

Closed dbdxnuliba closed 1 year ago

dbdxnuliba commented 1 year ago

I use the branch https://github.com/gavanderhoorn/industrial_reconstruction/tree/ros1 with ROS Noetic.

The result looks like the following PNG: image

We can see that the pose of the produced /industrial_reconstruction_mesh is not right.

I started the test with the following steps:

I launch the RealSense L515 camera mounted at the end of the Franka robot, and publish its TF between camera_link and robot_end_link.

Then I launch the reconstruction node:

roslaunch industrial_reconstruction reconstruction.launch depth_image_topic:=/camera/aligned_depth_to_color/image_raw color_image_topic:=/camera/color/image_raw camera_info_topic:=/camera/color/camera_info

Then I call the "/start_reconstruction" service with the rqt tools: image

The tf_tree is as follows: frames

image

And this is the rqt_graph: ros_graph

The topics published by the L515 camera via the realsense2_ros node: image

But when I launch it again, the mesh it produces looks like this: image

Question: how can I get the right result, and which parameters can improve the precision of the output mesh? Any help is appreciated! Thanks a lot.

marrts commented 1 year ago

Make sure that the transform from robot_end_link to camera_link is correct. You can do a simple check by looking at the live point cloud in RViz and confirming it looks reasonable as you move the robot around. If it looks consistent, you should be able to get an okay mesh. The result will be best when the camera position is calibrated.

Also, from looking at your service call I can't see what you are defining as the relative_frame, which should be a completely static base frame such as world. If this is wrong, it could result in a problematic mesh.
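For reference, here is a sketch of what a request with a static relative_frame might look like. The field names are taken from the start_reconstruction call shown later in this thread; the frame names and numeric values here are only placeholders to illustrate the structure:

```yaml
# Hypothetical /start_reconstruction request (values are placeholders).
tracking_frame: 'camera_depth_optical_frame'  # frame the camera data is expressed in
relative_frame: 'world'                       # must be a completely static base frame
translation_distance: 0.0
rotational_distance: 0.0
live: true
tsdf_params:
  voxel_length: 0.01
  sdf_trunc: 0.02
  min_box_values: {x: -3.0, y: -3.0, z: -3.0}
  max_box_values: {x: 3.0, y: 3.0, z: 3.0}
rgbd_params:
  depth_scale: 1000.0
  depth_trunc: 1.5
  convert_rgb_to_intensity: false
```

If relative_frame is set to a frame that moves (for example a link on the robot itself), the integrated data will be smeared by that motion.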

dbdxnuliba commented 1 year ago

I have solved the TF problem by using the camera_depth_optical_frame published by realsense2_camera: image

dbdxnuliba commented 1 year ago

image

tracking_frame: camera_depth_optical_frame
relative_frame: base_link

I use a D435 and a robot arm.


image

Another question:

Sometimes the terminal shows error info like the following:

1667885837.482152939, when looking up transform from frame [camera3_depth_optical_frame] to frame [base_link]
[ERROR] [1667885847.561280]: Failed to get transform: Lookup would require extrapolation 23.407705068s into the past. Requested time 1667885814.148482800 but the earliest data is at time 1667885837.556187868, when looking up transform from frame [camera3_depth_optical_frame] to frame [base_link]
[ERROR] [1667885847.628029]: Failed to get transform: Lookup would require extrapolation 23.425324440s into the past. Requested time 1667885814.214805365 but the earliest data is at time 1667885837.640129805, when looking up transform from frame [camera3_depth_optical_frame] to frame [base_link]
[ERROR] [1667885847.695320]: Failed to get transform: Lookup would require extrapolation 23.336752415s into the past. Requested time 1667885814.347431183 but the earliest data is at time 1667885837.684183598, when looking up transform from frame [camera3_depth_optical_frame] to frame [base_link]
[ERROR] [1667885847.761518]: Failed to get transform: Lookup would require extrapolation 23.278105020s into the past. Requested time 1667885814.480031252 but the earliest data is at time 1667885837.758136272, when looking up transform from frame [camera3_depth_optical_frame] to frame [base_link]
[INFO] [1667885847.933052]: debug here tsdf_volume.integrate

Here is a GIF showing the error info: info_err

The memory usage of the node increases quickly: image

Questions:

1. How can I solve this error? [ERROR] [1667885847.695320]: Failed to get transform: Lookup would require extrapolation 23.336752415s into the past. Requested time 1667885814.347431183 but the earliest data is at time 1667885837.684183598, when looking up transform from frame [camera3_depth_optical_frame] to frame [base_link]
2. Why does the mesh only update about once per 30 s?
3. The memory usage of the node increases quickly; how can we improve it?
4. If I want to improve the performance of the reconstruction, which parameters are relevant, or what else can I do?

Any help is appreciated.

marrts commented 1 year ago

1. How can I solve this error? [ERROR] [1667885847.695320]: Failed to get transform: Lookup would require extrapolation 23.336752415s into the past. Requested time 1667885814.347431183 but the earliest data is at time 1667885837.684183598, when looking up transform from frame [camera3_depth_optical_frame] to frame [base_link]

I'm not sure why this is happening. Perhaps the node is accumulating a backlog of old images and is therefore trying to look up a transform from a long time ago. Maybe try reducing the queue size of the subscriber.
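The ~23 s extrapolation figure in those errors can be reproduced directly from the timestamps in the log, which supports the backlog theory: the images being processed carry stamps roughly 23 s older than the oldest transform remaining in the TF buffer.

```python
# Timestamps copied from one of the [ERROR] lines above (seconds since epoch).
requested_time = 1667885814.148482800    # stamp of the image being integrated
earliest_tf_time = 1667885837.556187868  # oldest transform still in the TF buffer

# The lookup fails because the image stamp predates everything in the buffer,
# so TF would have to extrapolate into the past.
lag = earliest_tf_time - requested_time
print(f"image lags TF buffer by {lag:.2f} s")  # image lags TF buffer by 23.41 s
```

A default TF buffer only holds about 10 s of history, so once the node falls more than that behind the live stream, every lookup for a queued image fails this way.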

2. Why does the mesh only update about once per 30 s?

The mesh only updates every 50 frames or so, even though it is constantly integrating new frames into the volume; this is to save computation time, since updating the mesh for every image would be too slow. You can set live to false and the mesh will not update in RViz until you call stop, which could also result in a better mesh.
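As a rough sanity check (assuming the ~50-frame figure above and the ~30 s update interval you observed), the effective integration rate works out well below the camera's 15 Hz publish rate, which is consistent with the node falling behind the image stream:

```python
frames_per_mesh_update = 50  # approximate, per the explanation above
seconds_per_update = 30.0    # observed interval between mesh updates
camera_rate_hz = 15.0        # D435 publish rate reported later in this thread

# If the mesh republishes every 50 integrated frames but only refreshes
# every ~30 s, the node is integrating far fewer frames than are published.
effective_rate = frames_per_mesh_update / seconds_per_update
print(f"integrating ~{effective_rate:.1f} frames/s vs {camera_rate_hz:.0f} Hz published")
```

A gap like this is another hint that old frames are queueing up, which matches the TF extrapolation errors above.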

3. The memory usage of the node increases quickly; how can we improve it? 4. If I want to improve the performance of the reconstruction, which parameters are relevant, or what else can I do?

This is probably from integrating all the images together to make a mesh. I think you are making too big of a mesh because you are incorporating so much data. I'd recommend reducing your depth_trunc, maybe to 1.5 meters or so. This is the cutoff distance for depth data to be included in your mesh. By setting it to 3.0 meters you are including a large amount of data with every frame, which makes the process slower, and the data included at that distance is less accurate. Increasing voxel_length will lead to faster mesh generation and better handling of a large mesh if you want to capture a large area like this, but it will reduce the precision of the mesh being created.
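To see why voxel_length matters so much, consider the worst-case voxel count implied by a ±3 m bounding box (the min/max box values used in the service call later in this thread). This is only back-of-the-envelope arithmetic; a TSDF volume does not necessarily allocate every voxel, but the cubic scaling is the point:

```python
box_side_m = 6.0  # from min_box_values -3.0 to max_box_values 3.0

# Worst-case voxel count for a dense grid at each resolution.
for voxel_length in (0.01, 0.02, 0.04):
    voxels_per_axis = box_side_m / voxel_length
    total = voxels_per_axis ** 3
    print(f"voxel_length={voxel_length}: up to {total:.2e} voxels")

# Doubling voxel_length cuts the worst-case voxel count by a factor of 8.
```

The same cubic relationship is why shrinking the bounding box (or depth_trunc) pays off quickly in both memory and speed.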

dbdxnuliba commented 1 year ago

Thanks for your response. Do you mean the param cache_count can be smaller? The smaller the cache_count, the more quickly it will calculate, is that right?

I'm not sure why this is happening. Perhaps the node is getting a backlog of old images and it is trying to look up a transform from a long time ago because of that. Maybe try reducing the queue size of the subscriber

dbdxnuliba commented 1 year ago

This is probably from integrating all the images together to make a mesh. I think you are probably making too big of a mesh because you are incorporating so much data. I'd recommend reducing your depth_trunc to less, maybe 1.5 meters or so. This is the cutoff for depth data to be included in your mesh. By setting it to 3.0 meters you are including a large amount of data with every frame, which will make the process slower and the data including at this far distance is going to be less accurate. Increasing voxel_length will lead to faster mesh generation and better handling of a large mesh if you want to capture a large area like this, but it will reduce the precision of the mesh being created.

rosservice call /start_reconstruction "tracking_frame: 'camera3_depth_optical_frame'
relative_frame: 'base_link'
translation_distance: 0.0
rotational_distance: 0.0
live: true
tsdf_params:
  voxel_length: 0.01
  sdf_trunc: 0.02
  min_box_values: {x: -3.0, y: -3.0, z: -3.0}
  max_box_values: {x: 3.0, y: 3.0, z: 3.0}
rgbd_params: {depth_scale: 1000.0, depth_trunc: 1.5, convert_rgb_to_intensity: false}"

In the demo: image

It sets:

rosservice call /start_reconstruction "tracking_frame: 'camera3_depth_optical_frame'
relative_frame: 'base_link'
translation_distance: 0.0
rotational_distance: 0.0
live: true
tsdf_params:
  voxel_length: 0.01
  sdf_trunc: 0.02
  min_box_values: {x: -3.0, y: -3.0, z: -3.0}
  max_box_values: {x: 3.0, y: 3.0, z: 3.0}
rgbd_params: {depth_scale: 1000.0, depth_trunc: 1.5, convert_rgb_to_intensity: false}"

But the memory usage reached 7.2 GB: image. My RealSense D435 camera publishes the depth and color images at 15 Hz.

marrts commented 1 year ago

Reducing cache_count to 1 might help with the tf lookup issue you’re seeing, but no guarantee.

I’m not sure why you’re seeing such big memory usage. How long are you leaving it running for?

dbdxnuliba commented 1 year ago

By testing, I found two key parameters related to memory usage: translation_distance and rotational_distance. When I set translation_distance: 0.01 and rotational_distance: 0.02 and moved the camera 3 times in 10 minutes: image

But when I set translation_distance: 0.0 and rotational_distance: 0.0: meory and 2 minutes after that: memroy12. The depth and color topic rate is 15 Hz.

So we can see that once we set translation_distance and rotational_distance to 0.0, the memory usage increases quickly.

marrts commented 1 year ago

That makes sense. By setting those values to greater than zero, no data is added to the stored data unless the camera frame has moved; setting them to 0 will always add data. I think the main issue you're seeing comes from here: every time it adds an image to the mesh, it appends to a vector of RGB images, depth images, and TF data. These images use a lot of memory. Every time you call start_reconstruction again, all these vectors should be cleared and I'd expect memory usage to go back down. I'd generally recommend not running this for several minutes at a time, but optionally you can remove that part of the code if you need to run it for a very long time and the memory usage is a problem. You just won't be able to archive the data, which isn't necessary.
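Rough arithmetic supports this explanation. Assuming 640x480 frames (a typical D435 resolution; the actual resolution isn't stated in this thread), archiving an 8-bit RGB image plus a 16-bit depth image at the full 15 Hz adds up to gigabytes within a few minutes:

```python
width, height = 640, 480          # assumed camera resolution
rgb_bytes = width * height * 3    # 8-bit RGB image
depth_bytes = width * height * 2  # 16-bit depth image
rate_hz = 15.0                    # publish rate reported in this thread

# With the distance thresholds at 0.0, every frame gets archived.
bytes_per_second = (rgb_bytes + depth_bytes) * rate_hz
gb_per_minute = bytes_per_second * 60 / 1e9
print(f"~{gb_per_minute:.2f} GB/minute archived")  # ~1.38 GB/minute
```

At that rate, roughly 5 minutes of continuous capture lands in the same ballpark as the 7.2 GB observed above, before counting the TSDF volume itself.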

dbdxnuliba commented 1 year ago

It works well now! Thanks for your detailed reply!