Open nschloe opened 5 years ago
The algorithm works by iteratively inserting points whose position is chosen to best "break" a given element into smaller elements. That position is usually the circumcenter of the Delaunay ball. Thus, this new position unfortunately depends on the positions at step n-1.
However, there could be (at least) two ways to avoid computing all positions one by one:
The algorithm has multiple phases: initialization, refinement, optimization. The initial sampling of the domain is computed by shooting random rays and is meant to obtain a triangulation before the circumcenter-based refinement process kicks in. If you know you're going to have many vertices in your mesh, you could greatly increase the number of points used in the initial sampling, and compute all these points in a batch. Of course, this initial random triangulation will likely be far from honoring your constraints, and some refinement (and optimization) will still be needed.
At a given step of the refinement, it is likely that all simplices of the mesh must be refined, and thus it is useful to know their circumcenter, which will be the position of some future vertex being inserted. Thus, it might be possible to only compute these circumcenters - and cache them - in a batch once in a while, and when it is required by the refinement algorithm. Unfortunately, that would be quite a few very low level changes...
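To make the caching idea concrete, here is a toy sketch of batching circumcenter computations, using 2D triangles and the standard determinant formula for illustration. The `Point`/`Triangle` types and function names are made up for this sketch; CGAL's kernel types and cell data structures are different, and (as noted above) wiring this into the refinement code would be a low-level change.

```cpp
#include <vector>

// Toy 2D types for illustration; CGAL's actual kernel types differ.
struct Point { double x, y; };
struct Triangle { Point a, b, c; };

// Circumcenter of a single triangle (standard determinant formula).
Point circumcenter(const Triangle& t) {
    const double ax = t.a.x, ay = t.a.y;
    const double bx = t.b.x, by = t.b.y;
    const double cx = t.c.x, cy = t.c.y;
    const double d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by));
    const double ux = ((ax * ax + ay * ay) * (by - cy)
                     + (bx * bx + by * by) * (cy - ay)
                     + (cx * cx + cy * cy) * (ay - by)) / d;
    const double uy = ((ax * ax + ay * ay) * (cx - bx)
                     + (bx * bx + by * by) * (ax - cx)
                     + (cx * cx + cy * cy) * (bx - ax)) / d;
    return {ux, uy};
}

// Batch version: one pass over all cells, results cached in a flat vector
// that a refinement loop could consume instead of recomputing per cell.
std::vector<Point> circumcenters(const std::vector<Triangle>& cells) {
    std::vector<Point> centers;
    centers.reserve(cells.size());
    for (const auto& t : cells) centers.push_back(circumcenter(t));
    return centers;
}
```

The batch pass is trivially parallel, and the cached `centers` would only stay valid until the next insertion changes the triangulation, which is exactly why the caching has to happen "once in a while".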
@lrineau and @janetournois might have better ideas.
computed by shooting random rays
Since the rays are all independent, they could be computed at the same time.
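As a minimal sketch of that independence, here is one way such ray shooting against an implicit signed-distance function could look; the unit sphere is a stand-in domain, and `surface_hit`/`surface_hits` with their bisection loop are hypothetical, not CGAL's implementation:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// Stand-in signed-distance oracle: negative inside the unit sphere.
double sphere(const Vec3& p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0;
}

// Shoot one ray from an interior seed (the origin) along dir and locate
// the boundary crossing by bisection on the sign of the oracle.
double surface_hit(const Vec3& dir, double t_max = 10.0) {
    double lo = 0.0, hi = t_max;  // oracle negative at lo, positive at hi
    for (int i = 0; i < 60; ++i) {
        const double mid = 0.5 * (lo + hi);
        const Vec3 p{dir.x * mid, dir.y * mid, dir.z * mid};
        if (sphere(p) < 0.0) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);  // ray parameter of the boundary hit
}

// No ray depends on any other, so the whole batch can be evaluated in one
// sweep, in parallel, or handed to a vectorized oracle.
std::vector<double> surface_hits(const std::vector<Vec3>& dirs) {
    std::vector<double> ts;
    ts.reserve(dirs.size());
    for (const auto& d : dirs) ts.push_back(surface_hit(d));
    return ts;
}
```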
it is likely that all simplices of the mesh must be refined,
Yes, I assumed that there are operations which take place for all cells individually.
Unfortunately, that would be quite a few very low level changes...
I suppose. In the end, it should come down to looking for loops like
for (auto& cell : cells) {
  // evaluate domain.operator()(...)
}
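One shape such loops could be rewritten toward is a batched overload of the oracle. A hedged sketch, where `SphereDomain` is a made-up stand-in (CGAL's domain concepts only specify the pointwise form; the batched overload is the proposal here):

```cpp
#include <cmath>
#include <vector>

struct Point3 { double x, y, z; };

// Hypothetical domain offering both interfaces.
struct SphereDomain {
    // Pointwise oracle, as called today inside the refinement loops.
    double operator()(const Point3& p) const {
        return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0;
    }
    // Batched oracle: one call for many query points, so the implementation
    // is free to vectorize (BLAS, SIMD, GPU) internally.
    std::vector<double> operator()(const std::vector<Point3>& pts) const {
        std::vector<double> out;
        out.reserve(pts.size());
        for (const auto& p : pts) out.push_back((*this)(p));
        return out;
    }
};
```

The caller would first gather one query point per cell, then issue a single call over the whole vector, instead of one virtual call per cell.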
This came back to bite me again.
For me to understand, do you build the mesh as follows? Build a basic mesh and then refine iteratively, i.e.,
* picking a bad cell,
* inserting a node at the circumcenter (or some other point) of the cell,
* call `domain.operator()` to keep the new point inside,
* doing some other operations to keep the mesh conforming,
* and repeat.

Do you optimize after each step? Where are the most `operator()` calls spent?
For me to understand, do you build the mesh as follows? Build a basic mesh and then refine iteratively, i.e.,
* picking a bad cell,
* inserting a node at the circumcenter (or some other point) of the cell,
the new node does not have to be in the cell, but in its Delaunay ball, to make sure the cell disappears from the mesh
* call `domain.operator()` to keep the new point inside,
This kind of operator is called on newly created cells to keep the "restricted Delaunay triangulation" valid
* doing some other operations to keep the mesh conforming,
the point is inserted in the regular triangulation while keeping it valid, and the "domain status" is updated above. There are no extra-operations here
* and repeat.
Do you optimize after each step? Where are the most `operator()` calls spent?
Optimization is used only after the end of mesh refinement.
The `operator()` you mention (I guess you mean the one that updates the cells' status wrt the restricted Delaunay triangulation) is called a lot during optimization, because each cell is updated after each move. In global optimizers though, it is possible to compute all moves, apply them all, and then update the full restricted Delaunay triangulation all at once.
Thanks @janetournois for the reply.
The operator() you mention
This is the operator that, evaluated at a geometrical point x, returns the signed distance to the domain boundary. Did you understand it this way?
Yes, it is part of what we call the "oracle".
The operator() you mention (I guess you mean the one that updates the cells status wrt the restricted Delaunay triangulation) is called a lot during optimization, because each cell is updated after each move.
Okay great. If all cells are updated after each step, then at least the algorithm would allow for calling `operator()` with many points at once, right?
Theoretically yes, but IIRC the code does:

for each vertex v
  compute_move(v)
  apply_move(v)
  call oracle on all impacted cells (about 30)
To call the oracle on more cells at once, it would require changing the code to:

for each vertex v
  compute_move(v)
for each vertex v
  apply_move(v)
call oracle on all cells
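The difference between the two shapes is essentially Gauss-Seidel versus Jacobi iteration: compute every move from the old positions first, then apply them all, leaving a single point where one batched oracle call could be issued. A toy sketch of the second shape, with a 1D Laplacian smoother standing in for compute_move/apply_move (names and setting are illustrative, not CGAL's code):

```cpp
#include <vector>

// Jacobi-style restructuring: all moves are computed from the *old*
// positions before any position changes.
std::vector<double> smooth_all_at_once(const std::vector<double>& x) {
    const std::size_t n = x.size();
    // Phase 1: compute_move(v) for every vertex; nothing is applied yet.
    std::vector<double> moves(n, 0.0);
    for (std::size_t i = 1; i + 1 < n; ++i)
        moves[i] = 0.5 * (x[i - 1] + x[i + 1]) - x[i];
    // Phase 2: apply_move(v) for every vertex.
    std::vector<double> y = x;
    for (std::size_t i = 0; i < n; ++i) y[i] += moves[i];
    // Phase 3 (elsewhere): call the oracle once on all affected cells.
    return y;
}
```

The two shapes generally converge differently (Jacobi sweeps use stale neighbor positions), which may be part of why the existing code interleaves compute and apply per vertex.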
The oracle is called on geometric points x, so perhaps it really goes like
for each vertex v
  compute_move(v)
  apply_move(v)
  for adjacent_cell in adjacent_cells(v):  # (about 30)
    call oracle on point in adjacent_cell
It would already help to
for each vertex v
  compute_move(v)
  apply_move(v)
  call oracle on points in all adjacent_cells at once (about 30)
Better of course:
for each vertex v
  compute_move(v)
  apply_move(v)
  call oracle on all cells
The oracle is called on geometric points x, so perhaps it really goes like

for each vertex v
  compute_move(v)
  apply_move(v)
  for adjacent_cell in adjacent_cells(v):  # (about 30)
    call oracle on point in adjacent_cell

Note the oracle is called on the circumcenter of each cell.
It's not only the cells adjacent to v but all cells impacted by the move (remove(v) + insert(new_p)), namely the union of:
- the cells created by remove(v) (filling the cavity left, instead of incident_cells(v))
- the cells created by insert(new_p), i.e. incident_cells(new_v)
(both zones usually overlap, but they may not)

It would already help to

for each vertex v
  compute_move(v)
  apply_move(v)
  call oracle on points in all adjacent_cells at once (about 30)
Absolutely
Functions like Mesh_domain::create_implicit_mesh_domain are called with domain objects that have an `operator()`. This is an evaluation returning the signed distance of a point p to the domain boundary. For mesh generation, this operator is then called with point after point.

For many, if not all of my domains, I could compute the distance much faster if I wasn't forced to do it one after the other, but with many points at once. (Using higher-level BLAS routines, for example.)
Is it possible to make the mesh generation ask for many points at once?
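To make the request concrete, here is a hedged sketch of what one batched evaluation could look like on the caller's side, with the unit sphere standing in for the implicit function. The flat (n x 3) coordinate layout is an assumption, chosen because it is the contiguous layout a BLAS- or SIMD-backed implementation would want:

```cpp
#include <cmath>
#include <vector>

// Batched signed distance to the unit sphere over a contiguous (n x 3)
// coordinate array, instead of one virtual call per point. A real
// implementation could hand the flat array to vectorized/BLAS kernels.
std::vector<double> signed_distances(const std::vector<double>& xyz) {
    const std::size_t n = xyz.size() / 3;
    std::vector<double> d(n);
    for (std::size_t i = 0; i < n; ++i) {
        const double* p = &xyz[3 * i];
        d[i] = std::sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]) - 1.0;
    }
    return d;
}
```

A single call like `signed_distances(queries)` over all pending query points is the interface the issue is asking the mesh generator to support.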