ros-industrial / industrial_reconstruction

Tools for surface reconstruction from 2D depth images
Apache License 2.0

Improve reconstructed mesh quality #1

Closed aashish-tud closed 1 year ago

aashish-tud commented 2 years ago

Hello,

I want to reconstruct a curved surface with a Zivid One+ camera and a robot arm. To achieve this, I follow these steps:

The generated mesh looks like this:

[screenshot: Screen Shot 2022-09-09 at 10 14 57]

Another mesh created with the camera kept in one fixed location for more than 50 frames. It is supposed to be a white sphere on a table.

[screenshot: Screen Shot 2022-09-09 at 10 06 52]

The quality of the reconstructed mesh looks like it can be improved, especially since we are using a high-resolution camera. What can I try to improve the quality of the mesh?

Some specifics:

aashish-tud commented 2 years ago

@gavanderhoorn has been helping me so far with this.

marrts commented 2 years ago

I haven't yet used a camera that needs to be stationary to get 3D data with this pipeline, but in general I would expect the process to still work just fine. One thing to note, though, is that it is a good idea to ensure that the camera is well calibrated on the end effector.
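As a quick sanity check of that calibration, you can echo the transform between the robot base and the camera optical frame with standard ROS 1 tooling and confirm it matches where the camera physically sits as you jog the arm (the frame names below are placeholders for whatever your setup uses):

rosrun tf tf_echo base_link camera_optical_frame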

From the example usage one thing I would change is the min_box_values and the max_box_values:

  min_box_values: {x: 0.0, y: 0.0, z: 0.0}
  max_box_values: {x: 0.0, y: 0.0, z: 0.0}

This essentially specifies the cropped area of the mesh that will be retained, but if it is all zeros the entire mesh is retained. That is useful while trying to get more information about what is going on with the mesh being generated.
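Once you know roughly where the part sits, a non-zero box can crop away the table and background. A made-up example, with placeholder bounds rather than values from this setup:

  min_box_values: {x: -0.25, y: -0.25, z: 0.01}
  max_box_values: {x: 0.25, y: 0.25, z: 0.30}

Setting the minimum z slightly above the table plane keeps the tabletop itself out of the mesh.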

Another factor I would look at is depth_trunc. Make sure you have this value set sufficiently higher than your actual distance to the part. If it is too low, you could be adding very little data to the mesh when collecting images.

I don't know much about this camera, so it is possible that the depth_scale should be a value other than 1000, but I think it would be really obvious if the scale was wrong.

I would also try setting live to false to make sure that you are actually processing all the images that come through. It is possible that the system is processing the data too slowly and you are losing a lot of information. This will cause the reconstruction to take longer, since all the calculations will be performed after stop is called, but it is guaranteed to use all the data.
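Putting those suggestions together, a hedged sketch of the relevant parameters; the names follow the example usage, and the values are guesses for a camera roughly 0.5 m from the part, so adapt them to your cell:

  depth_scale: 1000.0  # depth units per meter; 1000 assumes the depth image is in millimeters
  depth_trunc: 1.5     # meters; comfortably beyond the actual camera-to-part distance
  live: false          # buffer images and integrate everything after stop is called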

> Another mesh created with the camera kept in one fixed location for more than 50 frames. It is supposed to be a white sphere on a table.

In general I wouldn't expect great results from keeping the camera fixed in one place. Performance will likely be much better when moving around, although your mesh looks worse than I would have expected.

Finally, you can play around with the voxel_length and sdf_trunc parameters. In general, a smaller voxel_length will lead to higher-resolution meshes, and sdf_trunc should be about 2-4 times voxel_length. Smaller voxel_length values also take longer to process. The sdf_trunc parameter essentially controls the connectedness of the mesh: a higher value will try to connect triangles that are further apart, so too high and you will generate a blob, while too low and you will have a mesh with a lot of holes.
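An illustrative starting point following that rule of thumb (example values only, not tuned for this camera):

  voxel_length: 0.005  # 5 mm voxels; smaller gives a finer mesh but slower processing
  sdf_trunc: 0.015     # about 3x voxel_length; higher connects triangles that are further apart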

One final tip: archive the data when calling stop, then follow the instructions for rerunning the archived data so you can quickly iterate over different parameters. The archiver stores the raw color and depth images along with all the pose information, so you don't lose any data no matter what your parameters are set to when collecting it. This will also be useful to confirm that your image conversions are behaving as expected.
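A sketch of such a stop call, with placeholder paths (the replay procedure itself is described in the repository's instructions):

rosservice call /stop_reconstruction "archive_directory: '/tmp/scan_archive'
mesh_filepath: '/tmp/scan.ply'"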

aashish-tud commented 2 years ago

Hello,

Thanks for your reply. I think I have improved the quality of my mesh a little bit. It is a curved surface made of titanium. Although there is still progress to be made, I think this is capturing all the information that is present in the depth image. The areas not reconstructed are dark in the depth image, so it makes sense that they are not in the generated mesh.

Here is how it looks now:

[screenshots: fanblade_1, fan_blade_2]

Here is what I did differently from last time:

rosservice call /stop_reconstruction "archive_directory: 'some_path'
mesh_filepath: 'some_mesh.ply'
min_num_faces: 10"

With these values I can consistently generate meshes. When setting filters in the stop reconstruction service, sometimes no mesh is generated.

I will be trying to minimize the dark spots in the depth image to improve reconstruction.

marrts commented 2 years ago

> It is a curved surface made of titanium.

I would expect curved reflective surfaces to be pretty hard to work with in general; they definitely need good tuning of all the parameters and good calibration. In my experience trying to scan a highly reflective surface, I actually had to go back and forth over it from many different angles to get the best results. If you're getting multiple layers from different passes, then likely either your depth data is being reported inaccurately or your camera location needs to be better calibrated.

> Changed the tracking_frame to the optical frame of my camera; previously it was the frame where the camera is connected to the robot

Forgot to mention this one; perhaps I should consider renaming tracking_frame to camera_frame.

Your sdf_trunc value seems a little high compared to your voxel_length. It could still work well, but if you're noticing problems, that would be my first suggestion to change.

> With these values I can consistently generate meshes. When setting filters in the stop reconstruction service, sometimes no mesh is generated.

The filters are probably still a little finicky; I wouldn't worry about them too much. The most useful one would still probably be the box filter, which is set in the start_reconstruction call. If you want additional help visualizing it, whenever the reconstruction starts it publishes the box you defined as a marker message on the topic /tsdf_volume.
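With standard ROS 1 tooling you can inspect that marker from the command line, or add a Marker display in RViz subscribed to the same topic:

rostopic echo /tsdf_volume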

> I will be trying to minimize the dark spots in the depth image to improve reconstruction.

Not sure if you're referring to the depth or color image, but one thing to note is that I believe the Open3D algorithm uses color in addition to depth data to try and match frames together, so consistent lighting can also help.

aashish-tud commented 2 years ago

> I would expect curved reflective surfaces to be pretty hard to work with in general; they definitely need good tuning of all the parameters and good calibration.

Just the parameters in the service calls, right?

> If you're getting multiple layers from different passes, then likely either your depth data is being reported inaccurately or your camera location needs to be better calibrated.

I will check this. I also think the robot could be moving a little bit if it is not firmly bolted to the tabletop.

> Your sdf_trunc value seems a little high compared to your voxel_length. It could still work well, but if you're noticing problems, that would be my first suggestion to change.

I will try lowering it as well. I think I raised it because no mesh was being generated; I am not sure that was the cause, but I will try lowering it.

> Not sure if you're referring to the depth or color image, but one thing to note is that I believe the Open3D algorithm uses color in addition to depth data to try and match frames together, so consistent lighting can also help.

I was referring to the depth image. Due to the curved surface, I was unable to capture some parts of the object in the depth image. However, I think that by moving the robot arm around enough I should be able to capture most of it.

marrts commented 2 years ago

> Just the parameters in the service calls, right?

Yes

> I also think the robot could be moving a little bit if it is not firmly bolted to the tabletop.

That could definitely cause significant error

marrts commented 1 year ago

Closing this issue since there haven't been any recent follow-up questions.