Thanks for your question.
"Remeshing" is something that is not very accurately defined, so every library implements it in a different way. In MeshLib we have at least two "remesh"s:
One that is made by converting a mesh into voxel representation and backward:
offsetMesh( *obj->mesh(), 0.0f, { .type = OffsetParameters::Type::Offset } );
It is rather slow but can cure the most problematic meshes.
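A rough sketch of that call in context (the voxelSize field and the exact return type of offsetMesh vary between MeshLib versions, so treat those details as assumptions; inputMesh stands for any MR::Mesh):
#include "MRMesh/MROffset.h"

MR::OffsetParameters params;
params.type = MR::OffsetParameters::Type::Offset; // keep the surface where it is
params.voxelSize = 0.5f; // assumed field: resolution of the intermediate voxel grid
// zero offset: the mesh is only converted to voxels and back, i.e. "remeshed"
auto remeshed = MR::offsetMesh( inputMesh, 0.0f, params );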
Another alternative, the one from your question, is faster: the remesh function. This function is actually a sequential invocation of subdivision of edges longer than 2 * settings.targetEdgeLen, with local re-triangulation that improves Delaunay properties, and decimation of edges shorter than settings.targetEdgeLen / 2.
So it is expected that edges with lengths between settings.targetEdgeLen / 2 and 2 * settings.targetEdgeLen can remain unchanged during remeshing; the most important thing for us was to eliminate too small and too large triangles. I think we can add more parameters in RemeshSettings to control these values. Or you can call subdivision and decimation directly without waiting for a MeshLib change. And please let us know if it does not work for you and you expect something more from remeshing.
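As a rough sketch of that last suggestion (calling the two stages directly), assuming subdivideMesh / decimateMesh from MRMeshSubdivide.h / MRMeshDecimate.h; the field semantics follow my reading of the headers, may differ between MeshLib versions, and this is not exactly what remesh does internally:
#include "MRMesh/MRMeshSubdivide.h"
#include "MRMesh/MRMeshDecimate.h"

void manualRemesh( MR::Mesh & mesh, float targetEdgeLen )
{
    // 1) split edges longer than 2 * targetEdgeLen, with local Delaunay flips
    MR::SubdivideSettings subs;
    subs.maxEdgeLen = 2 * targetEdgeLen;
    subs.maxEdgeSplits = 10'000'000; // effectively unlimited, for illustration
    MR::subdivideMesh( mesh, subs );

    // 2) collapse short edges until the shortest edge exceeds targetEdgeLen / 2
    MR::DecimateSettings decs;
    decs.strategy = MR::DecimateStrategy::ShortestEdgeFirst;
    decs.maxError = targetEdgeLen / 2;   // stop criterion for ShortestEdgeFirst (assumed semantics)
    decs.maxEdgeLen = 2 * targetEdgeLen; // do not create edges longer than this
    MR::decimateMesh( mesh, decs );
}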
Thanks for your comment @Fedr.
What I’m looking for (and what I think the other libraries are doing) is a relaxation step in between. Essentially, vertices are allowed to slide along the edges to iteratively reach a uniform target edge length.
Do you think such functionality could be added?
I think yes, we will add an optional relaxation step. And it seems that it would be better to add it at the very end to guarantee that uniform edge lengths are preserved.
We have added two new options in RemeshSettings:
- edgeLenUniformity. Default value is 0.5; if it is increased and approaches 1.0, remesh will more aggressively subdivide edges longer than targetEdgeLen and eliminate edges shorter than targetEdgeLen.
- finalRelaxIters. By default it is 0, but if you set it to 1 or 2, it will apply relaxation to the inner vertices in the region, thus improving uniformity as well.

Thanks @Fedr!
The edgeLenUniformity
setting does make a difference. It still seems to only work for subdivision, and doesn't really decimate the mesh with longer targetEdgeLen
settings.
https://github.com/MeshInspector/MeshLib/assets/49192999/61e2bc83-1727-4040-bd4f-5fb7ca9aefa4
Also, it would be nice to have some automatic option to detect and protect all boundaries and hard edges (with a dihedral angle greater than a user-specified value). Otherwise the relaxing step distorts the mesh quite heavily.
Indeed, the boundaries are moved currently during the relaxation if RemeshSettings::region
is not set. We will fix it.
And we will think about how to protect hard edges, most probably by reusing RemeshSettings::notFlippable
.
remesh()
was updated in master branch: all boundary vertices and the vertices incident to RemeshSettings::notFlippable
are protected from moving during relaxation.
One can use Mesh
's method
// finds all mesh edges where dihedral angle is distinct from planar PI angle on at least given value
[[nodiscard]] MRMESH_API UndirectedEdgeBitSet findCreaseEdges( float angleFromPlanar ) const;
to initialize RemeshSettings::notFlippable.
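For example (a minimal sketch following the call pattern that appears later in this thread; mesh is any MR::Mesh and the 0.7 rad threshold is just an illustrative value):
// protect sharp edges: their incident vertices will not move during relaxation
MR::UndirectedEdgeBitSet creases = mesh.findCreaseEdges( 0.7f ); // angle from planar, in radians

MR::RemeshSettings settings;
settings.targetEdgeLen = 1.0f;
settings.finalRelaxIters = 2;
settings.notFlippable = &creases; // RemeshSettings::notFlippable is a pointer to the bit set
MR::remesh( mesh, settings );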
Thanks for your prompt updates! Now boundary vertices seem to hold up much better during the relaxation step. Do notice, however, that the boundary gets degraded with increased edgeLenUniformity
values:
https://github.com/MeshInspector/MeshLib/assets/49192999/d5519716-0fb4-4381-be71-5515c3762a4a
Also, attempts to decimate the mesh by setting higher targetEdgeLen
still don't work. Only a few edges get collapsed but the edge length is way shorter than the target length:
https://github.com/MeshInspector/MeshLib/assets/49192999/2eae9fe2-c884-4819-99a8-c1ce9ada1720
I see. Probably both issues come from the recently added
if ( settings.edgeLenUniformity > 0.5f )
decs.stabilizer = settings.targetEdgeLen; // this increases uniformity of vertices appeared after edge collapse
lines in remesh()
.
I will verify it and fix tomorrow.
Indeed, after removing these lines the issue from your first video disappears. Please verify, the change is already in master branch.
And I was unable to reproduce the second issue (high targetEdgeLen
is ignored). If it is still present, please send us the mesh and exact settings.
I can confirm that the latest commit fixed the corrupt border issue. It now works as expected. The Uniformity
setting is a bit unintuitive, however. It doesn't seem to have any effect below 0.5, and increasing the value effectively changes the targetEdgeLen.
https://github.com/MeshInspector/MeshLib/assets/49192999/dcc551ed-ee54-455b-bea5-148e36fa6a6a
I would expect it to have no effect on targetEdgeLen
and only affect how uniform the mesh is. Also, going above 1.0 shouldn't have any effect either.
As for the other issue, it still persists. Higher targetEdgeLen
values are ignored as seen in the video above. Here is the file:
mesh.zip
I call the function with the following settings:
RemeshSettings settings = RemeshSettings();
settings.targetEdgeLen = targetLength;
settings.edgeLenUniformity = uniformity;
settings.finalRelaxIters = iterations;
settings.packMesh = true;
remesh( *mesh, settings );
Thanks for the explanation. I have just fixed in master two issues:
- edgeLenUniformity is not more than 1; only now the value is clamped if the user specifies a larger one.
- a 1 in the code that prevented higher targetEdgeLen values.

Please test how it works for you. If you specify a too-big targetEdgeLen value, then the mesh completely disappears. Probably it is worth changing as well.
Thanks for your prompt response!
It seems as if the most recent commit fixed the issue with too high targetEdgeLen
values, but a regression with a degrading border might have crept in. It is obvious with higher values, but also visible with lower ones.
https://github.com/MeshInspector/MeshLib/assets/49192999/5e09b8ee-f3c9-4c7e-9785-f930ee92a915
The edgeLenUniformity
is still unintuitive to me. I'd expect the algorithm to remesh geometry for all input values; currently it only kicks in around 0.3-0.4, and lower values don't do anything. I'd also expect higher values to generate more uniform meshes; currently 0.5 seems to be the most uniform:
Uniformity: 0.5, Target: 1.6
Uniformity: 1.0, Target: 1.6
In the above tests, the resulting edge lengths are much closer to the target value with Uniformity set to 0.5.
The border of the mesh can change during remeshing to allow increasing the desired length of boundary edges (previously it was not so visible due to the upper limit on the edge length). I think it can be made configurable.
And as to uniformity, it is a trickier thing. It is easy to create an absolutely uniform mesh in 2D. But in 3D the pursuit of uniformity comes only with deviation from the original mesh, which our algorithm tries to avoid. If high deviation is not an issue for you, we can increase uniformity.
My expectations based on experience with other libraries would be as follows:
- Have a Boolean flag defining whether the border should be preserved. If set, no vertex lying on the border should be allowed to move or be removed (unless it doesn't affect the border shape, i.e. the vertices lie on a straight line).
- Uniformity close to 0 means that the mesh is remeshed trying to approximate the target value, but the resulting lengths of individual edges deviate significantly. The closer to 1 this setting gets, the closer individual edge lengths are to the target value and the smaller the deviation between them. Increasing the number of relaxation steps should help reach better uniformity too.
- All edges not defined in RemeshSettings::notFlippable should be allowed to be altered and/or removed to aim for higher uniformity in the resulting mesh. The border would be handled by the above-mentioned Boolean flag (see the sketch below).
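To make that wish list concrete, a purely hypothetical settings struct (none of these fields exist in MeshLib as such; it only restates the behaviour described above):
struct DesiredRemeshSettings
{
    float targetEdgeLen  = 1.0f;  // desired edge length
    bool  preserveBorder = true;  // border vertices never move or disappear unless collinear
    float uniformity     = 0.5f;  // 0: edge lengths may deviate a lot; 1: edges close to target
    int   relaxIters     = 1;     // more relaxation iterations -> better uniformity
};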
FYI, here is one of the better quality results I've seen from a remeshing library:
https://github.com/MeshInspector/MeshLib/assets/49192999/c755568e-0273-440e-a659-bb377c817f5b
Notice how uniform the triangles are when the Preserve sharp edges flag is turned off. When it is on, you can clearly see how the sharp edges and the boundary are preserved while the remaining triangles still try to maintain high uniformity.
Yes, I see, very reasonable. We are currently working to preserve open mesh boundaries during remesh while still allowing deletion of some boundary edges.
We have just added new parameter maxBdShift
in RemeshSettings
. It will limit how much mesh boundary can be changed during remeshing.
If one sets
targetEdgeLen=20
edgeLenUniformity=0.5
maxBdShift=0.1
then for your data: (left - input, right - remeshed)
If it is ok with you, we will continue working on uniformity.
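In code, that setup would look roughly like this (same call pattern as the snippet earlier in the thread; mesh is the input MR::Mesh):
MR::RemeshSettings settings;
settings.targetEdgeLen = 20.0f;
settings.edgeLenUniformity = 0.5f;
settings.maxBdShift = 0.1f; // limits how far boundary vertices may move
MR::remesh( mesh, settings );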
Thanks @Fedr, it's looking much better after the recent changes:
https://github.com/MeshInspector/MeshLib/assets/49192999/19b0d9c6-b3c7-459b-8c00-223e1918f7c7
The boundary is definitely preserved now. I understand you still need to tweak uniformity so I won't address this, but the resulting edge length is sometimes quite off from the target. In the below example the marked edge is 10.5 meters while the target value is 8. Increasing the amount of relaxation steps doesn't have any effect here:
In this particular example, we cannot increase the edge length any more just by collapsing edges, because otherwise we would create too degenerate triangles (with too small angles). This setting is in Decimate (maxTriangleAspectRatio = 20) and it is not yet exposed in Remesh, but I guess nobody wants degenerate triangles in the output of remeshing. The other two libraries also make the edge length smaller when you request very big values, don't they?
Here is a comparison to the native Rhino TriRemesh
which produces the best mesh quality but is relatively slow. This is with preserve border turned on, but preservation of hard edges turned off. You can see how uniform the triangles are and how close to the target value they get:
https://github.com/MeshInspector/MeshLib/assets/49192999/9eac5098-fc84-437f-aaaf-3ca4495cdf7d
To be clear, in the case quoted above, the resulting edges are much longer than the target value. I'd like them to be more subdivided to better approximate the 8.0 meters set as target.
I see. I think that if we contract all possible edges up to the target length, it can produce edges much longer than the target. And the final relaxation can additionally increase their lengths. We will look at what is possible to do here.
Here are the first results from new remeshing method.
We hope to give access to it early next week.
And could you please attach the results from Rhino (both with small and large triangles)? It would be interesting to compare it against the original mesh.
This is looking really good! Very nice uniformity and distribution of triangles! I'm looking forward to taking it for a spin.
Attached you will find 6 meshes exported from Rhino. Top-down the target edge length is:
Thanks a lot for the data.
I compared your initial surface with Rhino's result at the maximal edge length (12).
Original:
Remeshed:
The shape of the remeshed surface and the initial shape are very different, and the difference reaches its maximum at the highest peaks of the original "landscape":
It looks like we have to accept this trade-off to reach real uniformity of the mesh. Previously, remeshing in MeshLib tried to preserve the shape of the original mesh as much as possible.
Agree that this trade-off needs to be made to maximize uniformity. I'm hoping that it will be possible to preserve hard edges with RemeshSettings::notFlippable
though. It will be up to the user to decide whether they want to preserve the border (maxBdShift
) and/or keep the hard edges.
Also, could you please show a comparison with target
set to 0.5? My guess is that it will follow the original much better.
Yes, for target=0.5
the difference is much smaller:
But it is still not zero, since the vertices of the refined mesh do not coincide with the peaks and ridges of the original mesh:
My understanding is that these edges could be fixed with RemeshSettings::notFlippable
. This way users would be able to control which edges to keep unchanged and which could be allowed to be moved during remeshing.
To provide more context, I'm adding a comparison of remesh results from Rhino. Left with preserve sharp edges ON
, right OFF
. Top-down target edge lengths are:
Perspective view with target 0.5 (red) and original mesh (black)
[EDIT] Here is the result when all original edges (green) were set to be preserved:
Thanks, I see. With preserve sharp edges ON
, Rhino indeed makes a close approximation, but the uniformity with a high target edge length is lost. We are working on improving uniformity in MeshLib now.
I'm looking forward to seeing the results! Yes, remeshing necessarily comes with a trade-off between preserving the original shape vs. increasing edge uniformity. As a user, I'd like to have control over this process and interactively test various settings to settle on the most appropriate one for a given case.
First results can be seen in the branch remesh/better-uniformity. The parameter edgeLenUniformity
is completely eliminated there.
For a large target edge length it does not work very well yet, but for small edges the result is rather good.
Input mesh (notFlippable
edges are shown in magenta):
Remesh result with the parameters:
targetEdgeLen = 1;
maxBdShift = 0.3;
finalRelaxIters = 10;
This is looking very good! I support the decision of eliminating the edgeLenUniformity
parameter. It's difficult for me to imagine a scenario where I wouldn't like the result to be uniform.
I'm curious how far you can push it with uniformity of longer edges. Reflecting on my above explorations in Rhino, it is quite a challenge to get a uniform mesh with long edges and strong input constraints.
Yes, with long edges, complete uniformity cannot be reached. We expect to get results similar to Rhino's.
Now it supports longer target edges. Of course, with too long edges, uniformity is compromised.
Input mesh (notFlippable
edges are shown in magenta):
Remesh result with the parameters:
targetEdgeLen = 3;
maxBdShift = 0.1;
finalRelaxIters = 10;
Please find it in master branch.
Thanks @Fedr. It works very well with both short & long edges. The results are comparable in uniformity to what I'm getting from Rhino but the algorithm runs much faster. Well done!
There is, however, a bug with certain combinations of targetEdgeLen
& angleFromPlanar
in the findCreaseEdges()
function.
It happens here:
I'm using the original mesh uploaded to this thread earlier with the following settings:
targetEdgeLen = 1.0;
maxBdShift = 0.0;
finalRelaxIters = 10;
angleFromPlanar = 0.6
And here is how I call it:
extern "C" __declspec( dllexport ) BoolResults RemeshMesh( Mesh * mesh, float targetLength, float shift, int iterations, float sharpAngle )
{
RemeshSettings settings = RemeshSettings();
settings.targetEdgeLen = targetLength;
settings.finalRelaxIters = iterations;
settings.maxBdShift = shift;
MR::UndirectedEdgeBitSet edgeBitSet = mesh->findCreaseEdges( sharpAngle );
settings.notFlippable = new MR::UndirectedEdgeBitSet( edgeBitSet );
settings.packMesh = true;
remesh( *mesh, settings );
}
Great that it works in most cases.
In this particular case, I was unable to reproduce the bug. The result produced with your settings is as follows:
There is a memory leak in your code:
MR::UndirectedEdgeBitSet edgeBitSet = mesh->findCreaseEdges( sharpAngle );
settings.notFlippable = new MR::UndirectedEdgeBitSet( edgeBitSet );
The correct way is to write:
MR::UndirectedEdgeBitSet edgeBitSet = mesh->findCreaseEdges( sharpAngle );
settings.notFlippable = &edgeBitSet;
If the bug still persists, please show the full call stack where it happens.
Thanks for pointing out the memory leak. I've changed it accordingly but still get the same exception.
It happens here: https://github.com/MeshInspector/MeshLib/blob/191dc21785c67fe90252fabe63789dedf29f736d/source/MRMesh/MRRegionBoundary.cpp#L397-L403
Here is the call stack:
ucrtbased.dll!00007ff975e1eaa5() Unknown No symbols loaded.
ucrtbased.dll!00007ff975e1e8c3() Unknown No symbols loaded.
ucrtbased.dll!00007ff975e2158f() Unknown No symbols loaded.
> MRMesh.dll!MR::Vector<MR::MeshTopology::HalfEdgeRecord,MR::Id<MR::EdgeTag>>::operator[](MR::Id<MR::EdgeTag> i={...}) Line 61 C++ Symbols loaded.
MRMesh.dll!MR::MeshTopology::org(MR::Id<MR::EdgeTag> he={...}) Line 62 C++ Symbols loaded.
MRMesh.dll!MR::getIncidentVerts_(const MR::MeshTopology & topology={...}, const MR::TaggedBitSet<MR::UndirectedEdgeTag> & edges={...}) Line 399 C++ Symbols loaded.
MRMesh.dll!MR::getIncidentVerts(const MR::MeshTopology & topology={...}, const MR::TaggedBitSet<MR::UndirectedEdgeTag> & edges={...}) Line 444 C++ Symbols loaded.
MRMesh.dll!MR::remesh(MR::Mesh & mesh={...}, const MR::RemeshSettings & settings={...}) Line 961 C++ Symbols loaded.
MRMesh.dll!RemeshMesh(MR::Mesh * mesh=0x00000224aca6aef0, float targetLength=1.00000000, float shift=0.00000000, int iterations=10, float sharpAngle=0.600000024) Line 175 C++ Symbols loaded.
Thanks, now I can reproduce it. The problem happens only when settings.packMesh = true
. You can set it to false for now, and we are working on a fix.
I can confirm that with settings.packMesh = false;
the remeshing step works.
It does, however, break my logic of copying data back to managed code. The following now generates an invalid mesh.
[EDIT] adding mesh->pack();
after the remeshing step fixes this.
struct BoolResults {
int* Faces;
int FacesLength;
float* Vertices;
int VerticesLength;
};
BoolResults result = BoolResults();
result.VerticesLength = mesh->topology.numValidVerts() * 3;
result.Vertices = new float[result.VerticesLength];
size_t i = 0;
for ( auto v : mesh->topology.getValidVerts() )
{
result.Vertices[i] = mesh->points[v].x;
result.Vertices[i + 1] = mesh->points[v].y;
result.Vertices[i + 2] = mesh->points[v].z;
i += 3;
}
result.FacesLength = mesh->topology.numValidFaces() * 3;
result.Faces = new int[result.FacesLength];
i = 0;
VertId v[3];
for ( FaceId f : mesh->topology.getFaceIds( nullptr ) )
{
mesh->topology.getTriVerts( f, v );
result.Faces[i] = ( uint32_t )v[0];
result.Faces[i + 1] = ( uint32_t )v[1];
result.Faces[i + 2] = ( uint32_t )v[2];
i += 3;
}
Yes, for non-packed meshes your code will not work, because here
size_t i = 0;
for ( auto v : mesh->topology.getValidVerts() )
{
result.Vertices[i] = mesh->points[v].x;
result.Vertices[i + 1] = mesh->points[v].y;
result.Vertices[i + 2] = mesh->points[v].z;
i += 3;
}
you skip invalid vertices, so the indices of valid vertices change.
And here
for ( FaceId f : mesh->topology.getFaceIds( nullptr ) )
{
mesh->topology.getTriVerts( f, v );
result.Faces[i] = ( uint32_t )v[0];
result.Faces[i + 1] = ( uint32_t )v[1];
result.Faces[i + 2] = ( uint32_t )v[2];
i += 3;
}
you assume that all vertex indices are unchanged. So it works only for packed meshes, where there are no invalid elements.
Possible solutions:
- call mesh.pack() after remesh (till the fix in remesh is done);
- or, in the for-loop, process all vertices (and not only valid ones), as in the sketch below.

Please check the fix in master branch. Now remesh shall properly support settings.packMesh = true.
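For reference, a minimal sketch of the second option above (it reuses the hypothetical BoolResults filling code from the earlier snippet and only changes the vertex loop; this is illustrative, not the master-branch fix):
// export every vertex slot, valid or not, so face indices stay consistent
// with the unpacked topology (invalid slots just carry unused coordinates)
result.VerticesLength = (int)mesh->points.size() * 3;
result.Vertices = new float[result.VerticesLength];
for ( size_t j = 0; j < mesh->points.size(); ++j )
{
    const auto & p = mesh->points[ MR::VertId( int( j ) ) ];
    result.Vertices[3 * j + 0] = p.x;
    result.Vertices[3 * j + 1] = p.y;
    result.Vertices[3 * j + 2] = p.z;
}
// the face loop can stay as it was: it visits only valid faces, and their
// vertex ids now directly index the array filled above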
Thanks for fixing it @Fedr, I can confirm that it now works with settings.packMesh = true
. The remeshing functionality is really awesome now!
MeshLib is at least 10x faster than the libraries I've been comparing it with. The resulting uniformity is at least as good if not better than what the other libraries produce:
https://github.com/MeshInspector/MeshLib/assets/49192999/ee3beb69-5602-436d-8333-a16730118937
Thanks a lot for such a fantastic package!
Thanks, we are very pleased to hear that!
Encouraged by the performance of mesh booleans and polyline-mesh intersections, I took the remeshing function for a spin: https://github.com/MeshInspector/MeshLib/blob/9fdcf2c5180513373bf7e9c5b61672f7f2535978/source/MRMesh/MRMeshDecimate.h#L211
Below is a comparison with Geometry Central and cinolib
https://github.com/MeshInspector/MeshLib/assets/49192999/48e09b2f-8740-4be0-8ad7-1de7ef60fffa
In the upper left corner you see the timings, and MeshLib is a clear winner here, especially with a higher number of remeshing iterations. There are, however, a few issues I can see with the results:
For reference, here is how I call this function: