marian42 / mesh_to_sdf

Calculate signed distance fields for arbitrary meshes
https://pypi.org/project/mesh-to-sdf/
MIT License

Process an entire scene #6

Closed eriksandstroem closed 4 years ago

eriksandstroem commented 4 years ago

Dear Marian, thank you for making this project public! Let me get straight to the point: my goal is to compute SDFs from .ply files of scenes that depict rooms and apartments. They are from the Replica dataset; you can find more info here.

A typical scene looks like below.

image

Question: Is there any hope that I can construct an SDF from these kinds of scenes?

I can partly answer my own question: I can, but I am not happy with the result. Let me show you what I mean, and perhaps you can give me some feedback on whether there are settings that would produce better SDF reconstructions.

First of all, I tried using surface_point_method='scan', but this was not successful at all. The outer geometry is reconstructed well, as seen below (with the exception of some artifacts),

image

but the inner geometry is not reconstructed faithfully, as seen below.

image

I believe this is because the scanned depth maps are all taken from outside the scene and none are collected from inside the room. I used a resolution of 64x64x64 for the reconstruction above.

For all reconstructions below I use a resolution of 256x256x256. When I use surface_point_method='sample', I can reconstruct the interior of the room, but quite large artifacts remain, as seen below. Both outside the room

image

And inside the room

image

In comparison, the flower looks like this in "reality".

image

I have tried changing the number of points that I sample from the mesh. This affects the result, but I cannot get rid of the artifacts to any reasonable extent.
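
For reference, here is roughly how such voxel volumes can be generated with the library's mesh_to_voxels function (a minimal sketch, not my exact script; the file name is a placeholder):

import trimesh
import skimage.measure
from mesh_to_sdf import mesh_to_voxels

mesh = trimesh.load('scene.ply')  # placeholder path

# Virtual-scan surface points at 64^3 (only captures what the outside cameras see):
voxels_scan = mesh_to_voxels(mesh, voxel_resolution=64, surface_point_method='scan')

# Sampled surface points at 256^3 (reconstructs the interior, but with blob artifacts):
voxels_sample = mesh_to_voxels(mesh, voxel_resolution=256, surface_point_method='sample')

# Extract the zero level set for inspection.
vertices, faces, normals, _ = skimage.measure.marching_cubes_lewiner(voxels_sample, level=0)
trimesh.Trimesh(vertices=vertices, faces=faces, vertex_normals=normals).show()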

For your information, this is the result when I run the "sample_sdf_near_surface" function.

image

The thing is that all points falling outside the room are given a positive SDF value (just as if the camera were outside the room), and points inside the room are given blue values. I believe some points inside the room might be given red values, but it is a bit difficult to see. It is, however, clear that most points inside the room are blue, which is not what I would expect. I would instead want the camera inside the room, with the points in free space red and the points e.g. inside the table blue. I think you understand my problem.
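
The point sampling and the red/blue visualization follow the pattern from the README (a minimal sketch; the path and point count are placeholders):

import numpy as np
import trimesh
import pyrender
from mesh_to_sdf import sample_sdf_near_surface

mesh = trimesh.load('scene.ply')  # placeholder path
points, sdf = sample_sdf_near_surface(mesh, number_of_points=250000)

# Negative SDF (inside) in blue, positive SDF (outside / free space) in red.
colors = np.zeros(points.shape)
colors[sdf < 0, 2] = 1
colors[sdf > 0, 0] = 1

cloud = pyrender.Mesh.from_points(points, colors=colors)
scene = pyrender.Scene()
scene.add(cloud)
pyrender.Viewer(scene, use_raymond_lighting=True, point_size=2)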

To give you some more insight, here is what the mesh of e.g. the chair looks like when you zoom in close. It seems clear to me that it is watertight and that a volume is contained inside the chair.

image

I would be very grateful for any insights you might have on how I can tune the parameters better.

Cheers, Erik

marian42 commented 4 years ago

For the sample method, the mesh needs to be watertight. For the scan method, it doesn't need to be watertight; however, it will not capture surfaces that aren't visible to the virtual cameras. It looks like these rooms are surrounded by walls, so the scan method would only see the walls, as the cameras are placed outside the mesh. This could possibly be addressed by placing the cameras inside the room.

I'm currently downloading the dataset, but I don't know how big it is or how long it takes to download. Do you know if there is a way to download a single sample? I'll report back after I've had a try with a mesh from this dataset.

eriksandstroem commented 4 years ago

Thank you very much for taking the time to check this. I cannot express how grateful I am! I will make my mesh watertight first and then see how well the sampling method works, but it would also be very interesting to see how well the method could work if the cameras are placed inside the room.

I would be happy to send you just this file so you don't have to download the entire dataset! I will send it to your email in a bit, assuming your email is the one on your website?

Cheers,

marian42 commented 4 years ago

The email is correct, I got it. This project should be able to handle non-watertight meshes, so I want to figure out if I can make it work with these particular rooms.

~If manually editing the meshes is feasible (to make them watertight and have oriented normals), this should also allow you to get good SDFs.~

eriksandstroem commented 4 years ago

Hi Marian, I made the meshes watertight by using Poisson reconstruction in CloudCompare; see this link on how to do it: https://www.cloudcompare.org/doc/wiki/index.php?title=Poisson_SurfaceReconstruction(plugin)
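
In case anyone wants to script this step instead of using the CloudCompare GUI, something along these lines with Open3D's Poisson reconstruction should be roughly equivalent (a sketch only, not what I actually ran; the paths and parameters are placeholders):

import open3d as o3d

# Load the scene as a point cloud (placeholder path).
pcd = o3d.io.read_point_cloud('scene.ply')

# Poisson reconstruction needs consistently oriented normals.
pcd.estimate_normals()
pcd.orient_normals_consistent_tangent_plane(30)

# Screened Poisson reconstruction; a higher depth gives more detail.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=10)

o3d.io.write_triangle_mesh('scene_watertight.ply', mesh)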

After that, I could use your software and it gives me much better results than before (when using the sampling method). Here is an example screenshot of the flower again. image

It would, nonetheless, be interesting to find out if the results can be improved even more and if it is possible to use your scanning method.

Cheers,

marian42 commented 4 years ago

Just a quick update: the model seems to be watertight already. The artifacts are a problem inherent to the sampling-based approach; I'm not really sure what can be done about it (other than using the scanning method). Edit: In your case, it seems like smoothing the mesh a little bit did the trick. If you get these artifacts with the "sample" method, it might be worth playing with the normal_sample_count parameter with values from 1 to 20.
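
Something along these lines could be used to compare a few values (a rough sketch; the file names are placeholders):

import trimesh
import skimage.measure
from mesh_to_sdf import mesh_to_voxels

mesh = trimesh.load('room_watertight.ply')  # placeholder path

# Try a few values for how many nearby surface samples vote on the SDF sign.
for n in [1, 5, 11, 20]:
    voxels = mesh_to_voxels(mesh, voxel_resolution=256,
                            surface_point_method='sample',
                            normal_sample_count=n)
    vertices, faces, normals, _ = skimage.measure.marching_cubes_lewiner(voxels, level=0)
    trimesh.Trimesh(vertices=vertices, faces=faces, vertex_normals=normals).export(
        'reconstruction_n{}.ply'.format(n))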

I tried placing the virtual cameras inside the room and this seems to work. The tricky part will be figuring out the camera positions so that all the surfaces are covered. It will take some changes to the codebase to allow setting arbitrary camera positions. I'm a bit short on time right now, I'll let you know once I get to implement that.

eriksandstroem commented 4 years ago

Thanks a lot! Some questions:

  1. How did you manage to convince yourself that the model is already watertight? From what I see when I open the model in MeshLab, the mesh does not seem watertight at all. See for instance the flower:

image

  2. Next, I want to ask what benefit the "scanning" method has over the "sampling" method. I understand that the scanning method can handle non-watertight meshes, but is that the only benefit? The purpose of taking virtual scans with the "scanning" method is simply to sample points on the surface, but you already have points on the surface directly from the point cloud, so why bother with the virtual scans? I have not yet checked the implementation, so perhaps I will find the answer when I do, but it would help me understand the sampling method better if you can provide an answer.

  3. I tried increasing the normal_sample_count parameter to 19. I still observe two kinds of artifacts that I want to get rid of. The first one looks as follows:

image

This artifact is outside of the room and it comes from the fact that the Poisson-reconstructed mesh has a tiny closed component at exactly that location. See the black dot in the image below:

image

My hope is that I can pre-process the mesh a bit more before using your function so that such artifacts in the input are removed. Edit: I post-processed the Poisson mesh in MeshLab by removing isolated faces (there is a built-in filter for this) and then I was able to get rid of those artifacts (a scripted version of this kind of clean-up is sketched after this list).

The second kind of artifact looks like lines running across the entire mesh. This is most visible as vertical lines on the walls, but one can also spot them on the objects, e.g. on the pot of the flower.

image

I am not sure what causes these artifacts; perhaps the voxel grid is not aligned with the mesh? Do you have any insights? Edit: I find that these lines are most likely already present in the input mesh. They are not visible in the mesh itself, but I discovered that the point density on the mesh is higher exactly where the line artifacts appear in the final TSDF.
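
For completeness, here is the kind of check and clean-up I mean, sketched with trimesh (file names and the face-count threshold are placeholders; this is not what MeshLab does internally, just a scripted approximation):

import trimesh

mesh = trimesh.load('room_poisson.ply')  # placeholder path
print("watertight:", mesh.is_watertight)

# Split into connected components and drop tiny isolated pieces,
# similar in spirit to MeshLab's isolated-face removal filter.
components = mesh.split(only_watertight=False)
kept = [c for c in components if len(c.faces) > 100]  # placeholder threshold
cleaned = trimesh.util.concatenate(kept)
cleaned.export('room_cleaned.ply')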

Cheers,

marian42 commented 4 years ago

  1. You're right! The flower is not watertight. The rest of the mesh looked pretty watertight. Sorry!
  2. The difference between the two methods is how they determine the sign. The sampling method samples random points on the surface and determines the sign for a query point by taking the closest of these sample points (or the closest N points). For meshes with pointy parts, and for meshes that contain small triangles pointing in a different direction than the surrounding surface, there is a small chance that one sample point with a "bad" normal "stands out" a bit, resulting in a cone-shaped area around it with incorrect SDF signs. This is what you see in your second screenshot. To avoid this problem, I implemented the depth-buffer-based method, where the sign is determined using the depth buffers of the scans. It isn't quite as accurate (noisy SDF surfaces), but it avoids these bigger blob artifacts. It only works if the surfaces you're interested in are seen by the cameras. (A sketch of selecting the two sign methods follows after this list.)
  3. Could it be that the normals of that tiny part are inverted? If so, the artifact would be expected behaviour as the sampling method considers outside to be where the normals point.
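
For reference, the two sign methods can be selected through mesh_to_voxels roughly like this (a minimal sketch; the file name is a placeholder). The depth-buffer sign method needs the virtual scans, so it is combined with the scan-based surface points here:

import trimesh
from mesh_to_sdf import mesh_to_voxels

mesh = trimesh.load('room.ply')  # placeholder path

# Sign from the normals of the closest surface samples (prone to cone/blob artifacts).
voxels_normal = mesh_to_voxels(mesh, voxel_resolution=256,
                               surface_point_method='scan', sign_method='normal')

# Sign from the depth buffers of the virtual scans (needs the surfaces to be visible).
voxels_depth = mesh_to_voxels(mesh, voxel_resolution=256,
                              surface_point_method='scan', sign_method='depth')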

I'm still working on placing the cameras inside the room (it's really not difficult, sorry it's taking so long).

eriksandstroem commented 4 years ago

Thanks for your prompt response.

  1. You are right, the normals of that tiny part are pointing inwards.

Thanks for still looking into the depth-buffer implementation with the cameras placed inside the room. I do, however, feel that it is not trivial: how can you make sure that you place the cameras inside the room and only in free space inside the room, i.e. not inside objects? And how can you make sure that you observe all geometry?

Thanks!

marian42 commented 4 years ago

> how can you make sure that you place the cameras inside the room and only in free space inside the room, i.e. not inside objects? And how can you make sure that you observe all geometry?

I did it by manually selecting the camera positions. Of course, this isn't trivial to automate if you want to process the entire dataset.

By placing the cameras inside the room, I managed to get this voxel volume:

inside

However, due to holes in the mesh, some of the cameras can see the outside, resulting in these ray artifacts:

artifacts

I'm afraid that I don't really know a way to make this model work. If you want to play with the code that places cameras inside the scene, here it is. You need the latest master of this repository.

import math
import numpy as np
from mesh_to_sdf import *
from mesh_to_sdf.surface_point_cloud import *
from mesh_to_sdf.scan import *
import skimage
from skimage import measure
from mesh_to_sdf.utils import scale_to_unit_cube
import trimesh

scan_count = 100

mesh = trimesh.load('example/ground_truth_mesh.ply')
mesh = scale_to_unit_cube(mesh)

scans = []
N = 40            # number of camera positions on a circle inside the room
resolution = 400  # depth buffer resolution per scan

# Three cameras per circle position: one looking straight down (along -z),
# one looking toward the circle center and slightly down,
# and one looking toward the center and slightly up.
for i in range(N):
    a = 2.0 * math.pi * i / N
    on_circle = np.array((math.sin(a), math.cos(a), 0))

    pos = on_circle * 0.7 +  np.array((0, 0, 0.47))
    dir = np.array((0, 0, -1))
    camera_transform = get_camera_transform(pos, dir)
    scans.append(Scan(mesh,
        camera_transform=camera_transform,
        resolution=resolution,
        calculate_normals=True,
        fov=1.4,
        z_near=0.1,
        z_far=10
    ))

    pos = on_circle * 0.6 + np.array((0, 0, 0))
    camera_transform = get_camera_transform(pos, -on_circle + np.array((0, 0, -0.4)))
    scans.append(Scan(mesh,
        camera_transform=camera_transform,
        resolution=resolution,
        calculate_normals=True,
        fov=1.4
    ))

    pos = on_circle * 0.6 + np.array((0, 0, -0.2))
    camera_transform = get_camera_transform(pos, -on_circle + np.array((0, 0, 0.5)))
    scans.append(Scan(mesh,
        camera_transform=camera_transform,
        resolution=resolution,
        calculate_normals=True,
        fov=1.4
    ))

# One additional hand-placed camera.
dir = np.array((0, 1, 0.5))
pos = np.array((0, -0.64, -0.54))
camera_transform = get_camera_transform(pos, dir)
scans.append(Scan(mesh,
    camera_transform=camera_transform,
    resolution=resolution,
    calculate_normals=True,
    fov=1.4
))

# Save each scan as an image so the camera coverage can be inspected.
for i, scan in enumerate(scans):
    scan.save("test/scan_{:d}.png".format(i))

# Combine all scans into a single surface point cloud.
cloud = SurfacePointCloud(mesh,
    points=np.concatenate([scan.points for scan in scans], axis=0),
    normals=np.concatenate([scan.normals for scan in scans], axis=0),
    scans=scans
)

# Compute the SDF voxel volume; signs come from the scan depth buffers.
voxels = cloud.get_voxels(256, use_depth_buffer=True)

# Extract the zero level set with marching cubes and display it.
# (Newer scikit-image versions use measure.marching_cubes instead.)
vertices, faces, normals, _ = skimage.measure.marching_cubes_lewiner(voxels, level=0)
mesh = trimesh.Trimesh(vertices=vertices, faces=faces, vertex_normals=normals)
mesh.show()

eriksandstroem commented 4 years ago

Thanks for your efforts. I am, however, pleased with the results I get when I first make the mesh watertight (Poisson reconstruction) and then filter out isolated components. The sample method works fine then.

Thanks again for making the library!