Yeah, that sounds good. And actually, I think Delaunay cost should work too - we only care within a bounded region: Delaunay only matters for points closer to the edge midpoint than its half-length. So we should still be able to safely screen out more distant points with the BVH.
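To spell that out (a minimal sketch with hypothetical `Vec2`/`Rect` types, not manifold's actual API): a point can only affect an edge's Delaunay check if it lies inside the circle whose diameter is that edge, so the bounding box of that circle is a valid BVH query volume:

```cpp
// Minimal sketch of the bounded Delaunay screening: a point only matters
// for an edge's Delaunay check if it is closer to the edge midpoint than
// the edge's half-length, i.e. inside the circle with that edge as diameter.
#include <cmath>

struct Vec2 { double x, y; };
struct Rect { Vec2 min, max; };

// Bounding box of the circle with diameter (a, b); this is the box to
// hand to the BVH query, screening out all more distant points.
Rect DelaunayQueryBox(Vec2 a, Vec2 b) {
  Vec2 mid{(a.x + b.x) / 2, (a.y + b.y) / 2};
  double r = std::hypot(b.x - a.x, b.y - a.y) / 2;
  return {{mid.x - r, mid.y - r}, {mid.x + r, mid.y + r}};
}

// Exact follow-up test for the BVH candidates: p is inside the circle
// with diameter (a, b) iff the angle a-p-b is obtuse, i.e. iff
// (a - p) . (b - p) < 0 (equivalent to |p - mid| < half-length).
bool InDiametralCircle(Vec2 a, Vec2 b, Vec2 p) {
  return (a.x - p.x) * (b.x - p.x) + (a.y - p.y) * (b.y - p.y) < 0;
}
```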
btw wonder if it is possible to move some of the sort by morton code logic into the collider.
Or in its own header file. Still, it's important to sort the actual underlying data structures, since that gives us cache coherence. So the Morton codes are for more than just the collider.
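For illustration, a rough self-contained sketch of the idea (hypothetical names, not the collider's actual code): compute a 2D Morton code per point and reorder the underlying arrays by it, so spatially nearby elements also sit nearby in memory:

```cpp
#include <algorithm>
#include <cstdint>
#include <numeric>
#include <vector>

// Spread the lower 16 bits of x so there is a zero bit between each pair.
static uint32_t SpreadBits(uint32_t x) {
  x &= 0x0000FFFF;
  x = (x | (x << 8)) & 0x00FF00FF;
  x = (x | (x << 4)) & 0x0F0F0F0F;
  x = (x | (x << 2)) & 0x33333333;
  x = (x | (x << 1)) & 0x55555555;
  return x;
}

// 2D Morton code: interleave the bits of the quantized x and y coordinates.
static uint32_t MortonCode2D(uint32_t x, uint32_t y) {
  return SpreadBits(x) | (SpreadBits(y) << 1);
}

// Reorder `points` (and, in real code, every parallel array) by Morton
// code, so that spatial locality becomes memory locality.
template <typename Point>
void SortByMorton(std::vector<Point>& points,
                  const std::vector<uint32_t>& codes) {
  std::vector<size_t> perm(points.size());
  std::iota(perm.begin(), perm.end(), 0);
  std::sort(perm.begin(), perm.end(),
            [&](size_t a, size_t b) { return codes[a] < codes[b]; });
  std::vector<Point> sorted(points.size());
  for (size_t i = 0; i < perm.size(); ++i) sorted[i] = points[perm[i]];
  points.swap(sorted);
}
```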
I know triangulator optimization very well, but where is its source file?
Adding a comment from #751:
Optimizations related to the triangulator are interesting. I am open to adding fast paths, but I'd prefer a more general one (e.g. monotone triangulation, even though it doesn't handle epsilon), and one that goes directly into the triangulator rather than being limited to `extrude`.
This is starting to hurt for some OpenSCAD usage too. It's also surprisingly slow in debug mode:
^ not very scientific, just measuring OpenSCAD e2e processing time.
Yeah, debug mode adds a lot of extra checks - we should probably go through and see how many are necessary. And you do mean just `MANIFOLD_DEBUG`, right? Because actual C++ Debug mode is horrifically slow for some reason.
Yeah, we should get this done - it shouldn't even really be so difficult. What kind of example is this where you see the slowdown? Care to make a `TEST` of it?
The polygon in question is #831 - might be a tad big to add as a performance test
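For reference, a perf `TEST` along these lines could use a synthetic many-vertex polygon as a stand-in for the #831 one. This is only a sketch: the `Triangulate(polygons, epsilon)` entry point and the include path are assumptions about manifold's public polygon API:

```cpp
// Sketch of a perf test: triangulate a synthetic 60k-point polygon and
// report the wall-clock time. Triangulate() and the header path are
// assumptions based on manifold's public polygon API; adjust as needed.
#include <chrono>
#include <cmath>
#include <cstdio>

#include "gtest/gtest.h"
#include "manifold/polygon.h"  // assumed include path

TEST(Polygon, TriangulatorPerf) {
  using namespace manifold;
  const int n = 60000;  // roughly the point count of the woodgrain svg
  const double kTau = 6.28318530717958647692;
  SimplePolygon loop;
  for (int i = 0; i < n; ++i) {
    double t = kTau * i / n;
    double r = 1.0 + 0.1 * std::sin(50 * t);  // wavy, non-convex boundary
    loop.push_back({r * std::cos(t), r * std::sin(t)});
  }
  auto start = std::chrono::steady_clock::now();
  auto tris = Triangulate({loop}, /*epsilon=*/1e-9);
  auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                std::chrono::steady_clock::now() - start)
                .count();
  std::printf("triangulated %d points in %lld ms\n", n, (long long)ms);
  // A simple polygon with n vertices triangulates into n - 2 triangles.
  EXPECT_EQ(tris.size(), loop.size() - 2);
}
```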
Thx - looks very promising so far!
The triangulator takes `O(n^2)` time, which performs poorly as the number of points in a polygon increases. For example, for the woodgrain svg we had for #502, there are around 60k points in the polygon. The old triangulator (not as robust) takes 35ms to triangulate the 60k points, while our current triangulator takes around 11500ms (11.5s!). We should try to use a BVH to reduce the number of points to check, to avoid quadratic complexity in the general case (for degenerate cases the BVH can't help much, I guess).

- Make `Collider` generic and support 2D `Rect`. This is not necessary, but will probably help with performance and is not hard to do.
- Add a `Remove` method to `Collider` to support point removal. We don't have to care about rebalancing because the height is bounded (at least in terms of complexity it does not matter; not sure about that in practice, and rebalancing is costly).
- `DelaunayCost`: we can categorize ears into different categories (definitely valid, potentially valid, definitely invalid) according to the max cost over the query results; see the sketch below.

@elalish what do you think?
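To make the ear-categorization item above concrete, here is a hypothetical sketch under assumed types (`Vec2`, `Rect`, a naive stand-in `Bvh2D`, and a simplified `EarCost`); none of this is manifold's actual code, it just shows the shape of the valid/maybe/invalid classification driven by a box query:

```cpp
// Query a BVH with the candidate ear's bounding box and classify the ear
// from the worst cost among only the returned points, instead of testing
// all n vertices for every ear.
#include <algorithm>
#include <vector>

struct Vec2 { double x, y; };
struct Rect { Vec2 min, max; };

enum class EarStatus { Valid, Maybe, Invalid };

// Stand-in "BVH": a real one would prune by hierarchy; this naive version
// just filters all points by box overlap so the sketch runs end to end.
struct Bvh2D {
  std::vector<Vec2> pts;
  std::vector<int> Query(const Rect& box) const {
    std::vector<int> out;
    for (int i = 0; i < (int)pts.size(); ++i) {
      if (pts[i].x >= box.min.x && pts[i].x <= box.max.x &&
          pts[i].y >= box.min.y && pts[i].y <= box.max.y)
        out.push_back(i);
    }
    return out;
  }
};

// Cross product of (b - a) and (p - a): positive when p is left of a->b.
double Ccw(Vec2 a, Vec2 b, Vec2 p) {
  return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Simplified stand-in for a real ear cost: negative iff p lies strictly
// inside the CCW triangle (a, b, c).
double EarCost(Vec2 a, Vec2 b, Vec2 c, Vec2 p) {
  return std::max({-Ccw(a, b, p), -Ccw(b, c, p), -Ccw(c, a, p)});
}

// Classify an ear as definitely valid, potentially valid (some point is
// within eps of the decision boundary), or definitely invalid. A real
// version would also skip the ear's own three vertices.
EarStatus ClassifyEar(Vec2 a, Vec2 b, Vec2 c, const Bvh2D& bvh, double eps) {
  Rect box{{std::min({a.x, b.x, c.x}), std::min({a.y, b.y, c.y})},
           {std::max({a.x, b.x, c.x}), std::max({a.y, b.y, c.y})}};
  EarStatus status = EarStatus::Valid;
  for (int i : bvh.Query(box)) {
    double cost = EarCost(a, b, c, bvh.pts[i]);
    if (cost < -eps) return EarStatus::Invalid;  // a point inside the ear
    if (cost < eps) status = EarStatus::Maybe;   // too close to call
  }
  return status;
}
```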