nmwsharp / diffusion-net

PyTorch implementation of DiffusionNet for fast and robust learning on 3D surfaces like meshes or point clouds.
https://arxiv.org/abs/2012.00888
MIT License

Remeshing using DiffusionNet #19

Open GiBucci opened 2 years ago

GiBucci commented 2 years ago

Hi!

First of all, thank you for sharing DiffusionNet with us, great work!

I am trying to use DiffusionNet to remesh 3D models (or point clouds) so that the resulting mesh topology is identical to that of my template model.

Your example on sampling invariance seemed like exactly what I needed, but I can't make it work. The net trains and runs fine, but the prediction is not even close to what I need.

I understand that a net won't be perfect, but I expected an output close to a 1:1 correspondence; in reality only a small portion of the points is assigned. This means that even with heuristics I cannot approximate the positions of the new points that aren't linked by the net's prediction.

Can you please give me a hint on how you solved this problem? Is it a training problem? I am trying different training configurations, but nothing seems to change.

Or am I completely misunderstanding the usage?

Any hint on this problem is appreciated, and thank you again for your work and your consideration.

Have a nice day.

Giovanni

gattia commented 2 years ago

A random passerby here. Did you try the methods from functional correspondence to do this? The example is linked below.

https://github.com/nmwsharp/diffusion-net/tree/master/experiments/functional_correspondence

GiBucci commented 2 years ago

@gattia don't worry, any help is appreciated!

I am trying it at this very moment, but I can already see I'll probably run into the same problem.

Both the sampling-invariance and functional-correspondence methods return, as output, a correspondence between the target vertices and the original vertices.

This means that even if I can create some sort of association, or even a downsampling, it can happen that vertex 1, which was, say, on the shoulder of the original model, gets associated with a vertex on the leg of the target mesh.

In this way, when I try to exploit the original faces, the result is a mess of faces crossing the body without even a vague trace of the original topology.

At this point I still don't understand whether I have completely misunderstood the purpose of the network, or simply didn't explain my goal correctly.

If you have any idea on how to solve this kind of weird association, please let me know.

Feel free to also tell me that I got it completely wrong and I'm trying to do something unfeasible; at least I will know I am on the wrong path.

Thank you again in the meantime, and have a nice day.

nmwsharp commented 2 years ago

Hi! Glad to hear you're interested in this method!

I'm not sure I totally understand the setup for your problem; is this a correct understanding? You have a template mesh, and many similar-but-different meshes/point clouds which you would like to "remesh" to have the same mesh connectivity as the template. Presumably, you also have some examples of "ground-truth" inputs which are already aligned to the template. And you would like to train a DiffusionNet to find a correspondence between these new surfaces and the template, then use that correspondence to position the vertices of the template mesh to align with these inputs?

If so, DiffusionNet is indeed quite a strong method for correspondence problems like that, though you may need to be careful how you set up the problem. If learning is succeeding, you should be able to get maps which mostly put the vertices "near" the desired location. However, actually getting a smooth mesh as output is a bit harder: simply matching each template vertex to the "best" vertex on each shape will not be smooth, will map several vertices to the same place, etc. This is asking for more than just an accurate correspondence.
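To make that failure mode concrete, here is a minimal sketch of the naive "best vertex" matching (an outside illustration, not code from this repo; `feat_template` and `feat_target` are hypothetical per-vertex feature arrays predicted by a network such as DiffusionNet):

```python
# Naive "best vertex" matching: for each template vertex, pick the target
# vertex whose learned feature is closest. feat_template: (V1, C) and
# feat_target: (V2, C) are hypothetical per-vertex feature arrays.
import numpy as np
from scipy.spatial import cKDTree

def nearest_vertex_map(feat_template, feat_target):
    tree = cKDTree(feat_target)
    _, idx = tree.query(feat_template)  # idx[i] = target vertex matched to template vertex i
    return idx

# The result is typically many-to-one: len(np.unique(idx)) << len(idx), so
# several template vertices collapse onto the same target vertex, and the
# deformed template is not a smooth surface.
```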

(Also, as a side note, functional maps is generally a more powerful technique than vertex-labelling correspondence, so I would definitely encourage looking into that.)

If you want to push in the direction of actually getting smooth pointwise maps to deform a template, most importantly you will need to generate maps to arbitrary locations on the target surface, rather than just to vertices. You may want to look at "sub-vertex" accurate correspondences, or apply some kind of local refinement or smoothing. For our functional maps results in Table 4 of the paper, I believe we applied the ZoomOut technique to refine the maps, although it is not implemented in this repo. Hopefully that's a useful initial pointer!
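For reference, the core ZoomOut iteration is quite short. Here is a minimal numpy sketch under the usual spectral conventions (again, not code from this repo; `phi1`/`phi2` are assumed to be (V, K) Laplace-Beltrami eigenvector matrices of the two shapes, and `C` an initial small functional map taking spectral coefficients on shape 1 to shape 2):

```python
# Minimal ZoomOut-style spectral upsampling of a functional map (a sketch).
import numpy as np
from scipy.spatial import cKDTree

def zoomout(C, phi1, phi2, k_max):
    for k in range(C.shape[0], k_max):
        # Functional map -> pointwise map: phi2 @ C approximates phi1[T], so
        # match each shape-2 vertex to the nearest shape-1 vertex in the
        # k-dimensional spectral embedding.
        T = cKDTree(phi1[:, :k]).query(phi2[:, :k] @ C)[1]
        # Pointwise map -> functional map with one more eigenfunction.
        C = np.linalg.pinv(phi2[:, :k + 1]) @ phi1[T, :k + 1]
    # Final pointwise map: T[j] is the shape-1 vertex matched to shape-2 vertex j.
    T = cKDTree(phi1[:, :k_max]).query(phi2[:, :k_max] @ C)[1]
    return C, T
```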

GiBucci commented 2 years ago

Hi @nmwsharp, thank you so much for your consideration as well.

Yes, you framed the problem correctly; that's exactly what I am trying to do. The only problem is that, even when testing on FAUST with the same setup as the paper and obtaining basically the same scores, the points are not even close to the desired locations. I have already considered some further operations, e.g. how to manage vertices assigned to the same place and vertices that are not assigned at all, but it is all useless if even the points that do get associated are nowhere near the expected position.

I don't expect a perfect or smooth result, but I hoped it would at least preserve a human shape, which is not the case.

I am fairly new to the 3D world, so I thought vertex correspondence would be the best strategy. For example, I have no idea what you mean by mapping to arbitrary locations, and that's why your answer is important to me.

Now I know there are other kinds of mapping, and, as @gattia also suggested, I am having a better look at functional maps and how to use them for this purpose.

I'll also explore the ZoomOut technique.

I believe this is a great pointer, and exactly what I was looking for with my questions.

If any other suggestion comes to mind, I'll gladly take it; otherwise I think I have bothered you enough, and we can consider this "issue" closed.

Again, many thanks to both of you, and have a nice day.

gattia commented 2 years ago

@GiBucci I implemented an algorithm called 'FOCUSR' by Lombaert et al. (http://cim.mcgill.ca/~lombaert/PAMI-FOCUSR.pdf); my Python version is here: https://github.com/gattia/pyfocusr, and it does something similar to what you are talking about. It might be useful, though there are newer versions of that algorithm that perform better.

However, I do think DiffusionNet is probably better suited. One thing that might be of interest from that implementation is a final post-processing step to get smooth connectivity after the general registration. Essentially, Figure 5 in the above paper describes mean filtering using the graph structure to impose spatial regularity. This is done here, and the actual logic for the mean filtering is here.
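Roughly, the idea looks like this (a paraphrased sketch, not the actual pyfocusr code; `mapped_xyz` and `faces` are hypothetical names for the mapped coordinates and the template triangles):

```python
# Graph-based mean filtering of a correspondence: repeatedly replace each
# mapped vertex position with the average of itself and its one-ring
# neighbours on the template mesh, imposing spatial regularity.
import numpy as np

def graph_mean_filter(mapped_xyz, faces, iterations=10):
    neighbors = [set() for _ in range(len(mapped_xyz))]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    pos = np.asarray(mapped_xyz, dtype=float).copy()
    for _ in range(iterations):
        pos = np.array([(pos[i] + pos[list(nbrs)].sum(axis=0)) / (1 + len(nbrs))
                        for i, nbrs in enumerate(neighbors)])
    return pos
```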

I'm sure Nick would have a better opinion on whether or not this is a good post-processing step.

GiBucci commented 2 years ago

@gattia thank you for this piece of info; I'll look into it as soon as possible. I still have a lot to learn about the 3D world, and it is very hard to find helpful material, since everything is often either too theoretical or too specific.

@nmwsharp just another question for you: as I said, I was trying vertex labelling as a first attempt. The only problem is that, while in evaluation I obtain scores similar to the ones described in the paper, when I apply the model to the same point clouds from the test set it gives me a geodesic error ten times greater than the mean error in evaluation. Is that plausible? Can it happen? Or is there something wrong with the way I predict the labels? I am pretty sure I am using the same labelling method.
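For context, this is the error I mean (a sketch of the standard protocol, with illustrative names; `D_geo` is a hypothetical precomputed all-pairs geodesic distance matrix on the target shape):

```python
# Mean geodesic error between predicted and ground-truth target vertices,
# normalized by the square root of the surface area (standard benchmark style).
import numpy as np

def mean_geodesic_error(pred_idx, gt_idx, D_geo, surface_area):
    return (D_geo[pred_idx, gt_idx] / np.sqrt(surface_area)).mean()
```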

fire commented 2 years ago

I'm having beginner's trouble. I'm looking for a kind of remeshing that outputs the same mesh but with only manifold geometry.

The goal is to be able to use the universe of amazing manifold-only 3D processors.


nmwsharp commented 2 years ago

@GiBucci Glad to hear some of these pointers are useful, and I hope your journey into 3D algorithms is a fun and fruitful one!

If you are seeing different errors in evaluation than in training on the same data, something funky must be going on, but I'm not quite sure what. One possible culprit is normalization: in the DiffusionNet data loaders we always shift shapes to be centered at the origin and scale them to roughly occupy the unit-radius sphere. This is important, because some parameters (like diffusion time) implicitly encode a notion of scale and will not have the same effect when applied at different scales. XYZ coordinate features are also potentially sensitive to translation & rotation. Perhaps there is some normalization difference when you try your inputs as test data?
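Concretely, that normalization amounts to something like the following (a paraphrased sketch of what the loaders do, not a verbatim excerpt from the repo):

```python
import torch

def normalize_positions(verts):          # verts: (V, 3) float tensor
    verts = verts - verts.mean(dim=0)    # center at the origin
    scale = verts.norm(dim=1).max()      # radius of the bounding sphere
    return verts / scale                 # now roughly fills the unit sphere
```

The key point is to apply the exact same recipe to training and test inputs.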

nmwsharp commented 2 years ago

@fire if you are interested in general remeshing tools to produce manifold outputs, perhaps jumping straight to a "fancy" learning-based approach like DiffusionNet may be more trouble than it's worth for you. The learning-based approaches really excel when you have a dataset of "good" meshes which you are hoping to mimic, but even then there are a lot of details to get right :)

Perhaps check out this library, I have heard good things about it: https://github.com/hjwdzh/ManifoldPlus

fire commented 2 years ago

Here's a summary of the various options I reviewed.

Meshoptimizer is a mesh decimator. It works great; I proposed putting it into the Godot engine (a game engine).

ManifoldPlus isn't licensed for commercial use, and the earlier Manifold software didn't produce results as good.

I was looking at quadwild (https://github.com/nicopietroni/quadwild), but according to the internet its math solver is proprietary software costing 10,000+ USD, even though the code itself is GPL licensed.

https://github.com/wjakob/instant-meshes is a good application.

Blender has a method of removing non-manifolds in its 3D printing toolkit add-on. I was able to get satisfactory results, except for things like ribbon-like loop geometry.

The various game engines use: 1. Simplygon (commercial), 2. InstaLOD (commercial), or 3. voxelizing with OpenVDB (open source) and then remeshing. OpenVDB even allows vertex attribute transfer, like UVs. https://www.sidefx.com/docs/houdini/nodes/sop/remesh.html

https://github.com/gradientspace/geometry3Sharp had great results for remeshing, but it is in C# rather than C++.

nmwsharp commented 2 years ago

@fire Ah excellent, you're way ahead of me, that's a solid list!

Instant-meshes is awesome, but one thing to be aware of is that it does not guarantee manifold outputs, and there is no easy way to force it to produce them, if that is what you are seeking.

Another you might check out is the good old standard MeshLab. It has a huge suite of geometry processing & remeshing algorithms, and also allows them to be run programmatically in Python: https://pymeshlab.readthedocs.io/en/latest/ (there's also a separate, older scripting system).
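For instance, a remeshing pass can be just a few lines (a sketch; filter names vary between pymeshlab versions, so check the docs for your install):

```python
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("input.obj")
# One of MeshLab's remeshing filters, exposed as a method in recent pymeshlab
# releases; the exact name may differ in older versions.
ms.meshing_isotropic_explicit_remeshing()
ms.save_current_mesh("remeshed.obj")
```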