Hello, thanks for asking. It is true that I had several artifacts when meshing the Semantic3D dataset. For outdoor datasets it doesn't matter much, as most of the pictures are taken from far away.
For indoor datasets with large density variation (as is the case with lidar datasets), the artifacts are much more visible. I do not have an easy fix for that. I would recommend trying another meshing algorithm, one less sensitive to density variation and designed for lidar. For example, if you know the acquisition angles, you can map the points onto a sphere, where they should be aligned on a grid; build the triangles from this grid, reproject them into 3D space, and discard triangles that are too elongated. Done this way, the resulting mesh for each point of view should be better than the one from the greedy algorithm I used.
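To make that idea concrete, here is a minimal NumPy sketch of the spherical-grid triangulation described above. It is an illustration under assumed conditions, not code from this repository: it assumes a single scan position at the origin (with no point exactly at the origin), and the `az_bins`, `el_bins`, and `max_edge` parameters are placeholders you would tune to the scanner's angular resolution.

```python
# Minimal sketch: map lidar points onto an azimuth/elevation grid,
# triangulate the grid, keep only non-elongated triangles in 3D.
# Assumes one scan position at the origin; parameters are illustrative.
import numpy as np

def mesh_from_spherical_grid(points, az_bins=2048, el_bins=512, max_edge=0.2):
    r = np.linalg.norm(points, axis=1)
    az = np.arctan2(points[:, 1], points[:, 0])   # azimuth in [-pi, pi]
    el = np.arcsin(points[:, 2] / r)              # elevation in [-pi/2, pi/2]

    # Quantize angles to grid cells; nearer points overwrite farther ones.
    ai = ((az + np.pi) / (2 * np.pi) * (az_bins - 1)).astype(int)
    ei = ((el + np.pi / 2) / np.pi * (el_bins - 1)).astype(int)
    grid = -np.ones((el_bins, az_bins), dtype=int)  # point index per cell
    order = np.argsort(-r)                          # farthest first
    grid[ei[order], ai[order]] = order

    # Build two triangles per 2x2 block of occupied cells.
    tris = []
    for i in range(el_bins - 1):
        for j in range(az_bins - 1):
            a, b = grid[i, j], grid[i, j + 1]
            c, d = grid[i + 1, j], grid[i + 1, j + 1]
            for tri in ((a, b, c), (b, d, c)):
                if min(tri) < 0:          # skip blocks with empty cells
                    continue
                p = points[list(tri)]
                # Discard triangles that are too elongated in 3D.
                edges = np.linalg.norm(p - np.roll(p, 1, axis=0), axis=1)
                if edges.max() <= max_edge:
                    tris.append(tri)
    return np.asarray(tris)
```

The point of this construction is that adjacency comes from the angular grid itself, so no spatial neighbor search is needed and the triangulation is largely insensitive to the density variation that trips up greedy surface reconstruction.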
Thanks for your answer 👍
Hi,
I wanted to try your method on an indoor dataset: everything went well with the S3DIS dataset (after adapting the view generation to be compatible with indoor scenes), but when I try to use it on my own dataset (an indoor lidar scene), the produced mesh seems noisy when I use smaller voxels, with artifacts visible in the generated RGB/composite images.
I tried different values of the voxel size parameter and different samplings of the input cloud (see the sketch below), but none of them seems to work. I checked the point cloud in a basic viewer and it does not look that noisy.
Have you encountered that kind of noise with the Semantic3D dataset? Can I mesh the cloud outside of the project and inject it into the preprocessing workflow (view generation + image generation)?
I've added below screenshots and snapshots of the same area (initial cloud, meshes with varying voxel size): the noise can be seen in the last picture (bottom left, on the ground and on the table) and produces these white/grey particles.
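For reference, the voxel-size sweep mentioned above looked roughly like the following; this is an illustrative sketch using Open3D's voxel downsampling, and the input file name and sizes are placeholders, not values from the project:

```python
# Illustrative voxel-size sweep (assumed setup, not project code).
import open3d as o3d

pcd = o3d.io.read_point_cloud("indoor_scan.ply")  # placeholder input file
for voxel_size in (0.01, 0.02, 0.05):             # meters, illustrative values
    down = pcd.voxel_down_sample(voxel_size=voxel_size)
    print(f"voxel_size={voxel_size}: {len(down.points)} points")
    o3d.io.write_point_cloud(f"indoor_scan_v{voxel_size}.ply", down)
```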
Thank you for your answer.