puzzlepaint / surfelmeshing

Real-time surfel-based mesh reconstruction from RGB-D video.
BSD 3-Clause "New" or "Revised" License

On the mesh quality of the reconstruction (manifoldness, self-intersections) #2

Open zhangxaochen opened 5 years ago

zhangxaochen commented 5 years ago

Dear Thomas,

Recently I have been reading your paper "SurfelMeshing", and thank you so much for sharing your source code with us.

I'm particularly attracted by this algorithm's ability to reconstruct thin objects, and I have tested it on some RGB-D sequences of my own. Below are snapshots of the reconstruction results from my scan of a Spinosaurus toy model (left: volumetric method, right: SurfelMeshing):

[Screenshots: left: Ours (volumetric method), right: SurfelMeshing]

As can be seen from the snapshots, the SurfelMeshing results are non-manifold and contain self-intersections in the triangle meshes. This might not be quite ideal for me...

I have tried fine-tuning some parameters of the algorithm, such as --regularizer_weight 30 and --depth_erosion_radius 3, but none of these settings completely eliminates the artifacts in the final mesh output.

Currently my test runs on Windows with VS2015 (my own port of the code), and the sample test sequence Spinosaurus above is from our CU3D dataset. Could you please help me figure out whether I'm running your code with incorrect settings, or whether this is an inherent property of the algorithm? Is there any way I can improve on this?

puzzlepaint commented 5 years ago

Hi,

The non-manifoldness and self-intersections you are observing are somewhat inherent to the algorithm, and also stated in our paper (see e.g. Tab. 1 or Fig. 18). Notice that the algorithm was mainly intended to be able to adapt to new measurements quickly during the scanning process, rather than as an offline algorithm to produce high-quality results. So, I don't think that a perfect result could be achieved by tuning the settings.

That being said, the issues could theoretically be reduced either by parameter tuning, or by improving the quality of the input data, if that is possible. For example, in case you used a Kinect v1 camera for scanning, the depth images are likely distorted because of inaccuracies in the camera's internal calibration; calibrating the camera yourself can give better results if you haven't done that yet (for example with an approach such as this one). Similarly, improvements in camera localization should also help. Both should reduce the noise in the reconstructed surfel cloud, which makes manifold triangulations more likely.
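For illustration, here is a minimal numpy sketch of applying such a depth correction, assuming a per-pixel multiplicative correction map obtained from some prior calibration procedure (the file names, the TUM-style depth factor of 5000, and the correction model are all assumptions for illustration, not part of SurfelMeshing or the linked approach):

```python
import cv2
import numpy as np

# Raw 16-bit depth image (TUM-style, stored value = meters * 5000).
raw = cv2.imread("depth/0001.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

# Hypothetical per-pixel multiplicative correction map (same H x W),
# produced beforehand by a depth calibration procedure.
correction = np.load("depth_correction_map.npy")

depth_m = (raw / 5000.0) * correction  # convert to meters, apply correction
depth_m[raw == 0] = 0.0                # keep invalid (zero) pixels invalid

cv2.imwrite("depth_corrected/0001.png",
            np.round(depth_m * 5000.0).astype(np.uint16))
```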

However, if you absolutely require a manifold mesh, the SurfelMeshing algorithm is likely not suitable, since it can't give a guarantee (unless one could ensure that the reconstructed surfel cloud has nice properties that will always lead to a manifold triangulation, which unfortunately seems unrealistic in practice). Volume-based methods on the other hand (using either voxels, a tetrahedralization, or similar) can ensure manifoldness, but may have a hard time preserving small volumes of thin objects. I guess that it is not easy to develop an algorithm that both reconstructs manifold non-intersecting meshes and thin objects. The ability of SurfelMeshing to reconstruct thin objects is more or less a result of the fact that it allows self-intersections: by allowing the front and back sides of such objects to overlap slightly, it gets around the issue that these measurements would likely cancel out if no self-intersections were allowed.
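If you want to quantify these properties on an exported mesh, a minimal sketch using Open3D (an assumption; any mesh processing library with similar checks would do), given a mesh saved with --export_mesh:

```python
import open3d as o3d

# Load the mesh exported by SurfelMeshing via --export_mesh.
mesh = o3d.io.read_triangle_mesh("table.obj")

# Open3D provides direct checks for the properties discussed above.
print("edge-manifold (boundaries allowed):",
      mesh.is_edge_manifold(allow_boundary_edges=True))
print("vertex-manifold:", mesh.is_vertex_manifold())
print("self-intersecting:", mesh.is_self_intersecting())
```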

zhangxaochen commented 5 years ago

@puzzlepaint Gotcha~ Thank you very much for your kind explanation 👍

RuibinMa commented 5 years ago

To follow up on this discussion, I'm also trying to use SurfelMeshing to fuse a sequence of depth maps. However, because of slight inconsistencies between successive depth maps, I get a quite noisy mesh in the regions that do not overlap perfectly. Do you have any suggestions, e.g. which parameters to adjust? Thanks in advance!

puzzlepaint commented 5 years ago

I would start with these parameters:

RuibinMa commented 5 years ago

Thank you!

pavan4 commented 4 years ago

I am trying this with a custom capture of RGB-D data (not with a Kinect v1). I converted the capture into the TUM dataset format and ran SurfelMeshing, but the generated mesh is very sparse, and I was wondering what the best way is to tune it to the right parameters.

I will attach an example here:

I used the following parameters

```
./build/applications/surfel_meshing/SurfelMeshing <mydataset> groundtruth.txt
  --follow_input_camera false
  --depth_scaling 5000
  --export_mesh table.obj
  --max_depth 3                                    # tried 5
  --sensor_noise_factor 0.8                        # increased
  --normal_compatibility_threshold_deg 90          # increased
  --measurement_blending_radius 20                 # increased
  --depth_valid_region_radius 500                  # increased
  --observation_angle_threshold_deg 75             # default
  --depth_erosion_radius 2                         # default
  --outlier_filtering_frame_count 2                # reduced, since my frame frequency is low
  --bilateral_filter_sigma_xy 5                    # increased
  --bilateral_filter_radius_factor 5               # increased
  --bilateral_filter_sigma_depth_factor 0.05       # default
  --outlier_filtering_depth_tolerance_factor 0.05  # increased
  --pyramid_level 1
  --regularizer_weight 30                          # tried 10
```

[Screengrab of a video frame from the test]

[Screengrab of the generated mesh]

Can anyone point out any obvious mistakes? Any recommendations for fine-tuning the mesh generation on this custom dataset?

puzzlepaint commented 4 years ago

It is hard for me to recognize anything in the mesh output. It even looks a bit like what I would expect when using low-frequency input with wrong camera poses or a wrong depth scaling (or when the timestamps don't match, so that the program wrongly interpolates between the given poses). Maybe it makes sense to first check whether the surfel reconstruction works before looking at the mesh reconstruction. In the 3D view of the SurfelMeshing program, pressing H and S should disable the mesh view and activate the surfel view. Alternatively, --export_point_cloud can be used to save a point cloud.
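As a sketch of such a sanity check (Open3D and the file name are assumptions; the --export_point_cloud flag is from the program itself), loading the exported point cloud and checking its extent quickly exposes a wrong depth scaling or bad poses:

```python
import open3d as o3d

# Load the surfel point cloud saved via --export_point_cloud.
pcd = o3d.io.read_point_cloud("surfels.ply")
print("number of points:", len(pcd.points))

# If --depth_scaling or the poses are wrong, the bounding box extent
# will be implausible compared to the real scene (e.g. millimeters
# instead of meters, or a hopelessly smeared-out cloud).
bbox = pcd.get_axis_aligned_bounding_box()
print("bounding box extent (scene units):", bbox.get_extent())

o3d.visualization.draw_geometries([pcd])
```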

Briefly looking over the parameters, --sensor_noise_factor 0.8 seems extremely high. This would imply an error range extent for the depth measurements of 80% of their depth. Something in the order of 0.1 or less seems more plausible for typical depth cameras.
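To make the scale of that difference concrete (illustrative arithmetic only, following the description above; how SurfelMeshing uses the factor internally may differ):

```python
# The comment above implies: error range extent = sensor_noise_factor * depth.
for factor in (0.8, 0.1):
    depth = 2.0  # meters, example measurement
    print(f"sensor_noise_factor {factor}: error range extent "
          f"{factor * depth:.2f} m for a {depth:.1f} m measurement")
# 0.8 yields a 1.60 m range for a 2 m measurement; 0.1 yields 0.20 m.
```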

Judging from the mesh screenshot, it seems that the reconstruction is somewhat incomplete. Perhaps it would be worth trying to set --outlier_filtering_required_inliers 0 to see whether the outlier filtering removes too much. Or trying to increase --bilateral_filter_sigma_depth_factor to apply stronger smoothing to the depth images in case they are too noisy. If that does not help, I would use --debug_depth_preprocessing to see whether this reveals any specific preprocessing step that removes too much data.

liujiacheng1009 commented 4 years ago

I am a beginner, and thanks for sharing. Can you recommend an algorithm that produces high-quality reconstructions of thin pipes? I am stuck on this problem. I use RGB-D data, too. Thank you~ @puzzlepaint

puzzlepaint commented 4 years ago

I don't have much experience with offline algorithms for surface reconstruction. However, I think that the well-known Poisson Surface Reconstruction works very well in case the input data is good. For example, I believe that the reconstructions shown on this page have been made with it (from laserscan data). I would probably try to use BAD SLAM to get a good surfel reconstruction of the RGB-D video, then run Poisson Surface Reconstruction, and see whether that is good enough.
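As a sketch of that last step, here is how Poisson Surface Reconstruction could be run with Open3D's implementation (an assumption; the original method has its own reference implementation), given a point-cloud file exported from the surfel reconstruction (file name and parameters are placeholders):

```python
import numpy as np
import open3d as o3d

# Load the surfel point cloud (e.g. exported from BAD SLAM or SurfelMeshing).
pcd = o3d.io.read_point_cloud("surfels.ply")

# Poisson reconstruction needs oriented normals; estimate them if missing.
if not pcd.has_normals():
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# Run Poisson Surface Reconstruction; higher depth = finer (and slower) meshes.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Optionally trim low-density vertices, which correspond to poorly supported
# surface areas that Poisson tends to hallucinate.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.01))

o3d.io.write_triangle_mesh("poisson_mesh.ply", mesh)
```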

shicuiweia commented 3 years ago

@zhangxaochen I have recently been studying this algorithm as well; could we get in touch to discuss it? My WeChat is c937688576. Thank you!