I'll keep this open as I'm interested in using optimesh
in lieu of traditional Laplacian smoothing.
Also, the method I use stops based on a node movement tolerance so it normally converges in less than 20 iterations but your point is well-taken.
Also, if you could point me to where you solve this system with the multigrid method in optimesh, that'd be interesting to me.
I suppose you could pose these mesh optimization problems using Firedrake/FEniCS?
https://github.com/nschloe/optimesh/blob/master/optimesh/cpt.py#L55
I suppose you could pose these mesh optimization problems using Firedrake/FEniCS?
You just need any old SPD solver. If the equation system is small, you could get away with Gaussian elimination, too.
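A minimal sketch of what I mean (a stand-in SPD matrix, nothing optimesh-specific):

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve, cg

# Stand-in SPD system: a small shifted 1-D Laplacian.
n = 100
K = sparse.diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x_direct = spsolve(K, b)   # sparse direct solve; fine for small systems
x_cg, info = cg(K, b)      # conjugate gradients, the usual iterative choice for SPD systems
print(info, np.max(np.abs(x_direct - x_cg)))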
Cool, I will investigate.
So I'm getting around to investigating this in more detail.
I'm wondering which optimesh method should be used
by default at the end of SM. I'd like to make this more user-configurable. I realize mesh generation is different from mesh improvement, as you point out; with that said, it's my experience that end users just want high-quality meshes without thinking about it much.
Based on your README.md, it seems optimesh.cvt.quasi_newton_uniform_full
performs the best. However, this doesn't preserve mesh resolution distributions, right? What does the full
imply here?
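For context, I'd be invoking it roughly like this (argument order guessed from your README, so correct me if I'm wrong):

import numpy as np
import optimesh

# Build a small structured triangle mesh of the unit square, just to make the call concrete.
k = 5
x, y = np.meshgrid(np.linspace(0.0, 1.0, k), np.linspace(0.0, 1.0, k))
points = np.column_stack([x.ravel(), y.ravel()])
idx = np.arange(k * k).reshape(k, k)
a, b, c, d = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel(), idx[1:, 1:].ravel(), idx[1:, :-1].ravel()
cells = np.vstack([np.column_stack([a, b, c]), np.column_stack([a, c, d])])

# Argument order assumed from the README: (points, cells, tolerance, max number of steps).
points, cells = optimesh.cvt.quasi_newton_uniform_full(points, cells, 1.0e-5, 100)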
However, this doesn't preserve mesh resolution distributions, right?
Correct. It's possible to add this in principle, but I haven't gotten around to doing it.
What does the full imply here?
I should write that down sometime. Short version: When formulating the optimization problem, you'll have to solve a linear equation system in every step. That's expensive, so there are variants. Just taking the diagonal curiously results in Lloyd's method. You can also take just the 2x2 diagonal blocks. It performs better than Lloyd alone, and on par with overrelaxed Lloyd. Or you can take the "full" matrix. Disadvantage besides the density not being implemented yet: It's not stable, so if the mesh isn't presmoothed with Lloyd or block, it will break down.
Short version: When formulating the optimization problem, you'll have to solve a linear equation system in every step. That's expensive, so there are variants. Just taking the diagonal curiously results in Lloyd's method. You can also take just the 2x2 diagonal blocks. It performs better than Lloyd alone, and on par with overrelaxed Lloyd. Or you can take the "full" matrix. Disadvantage besides the density not being implemented yet: It's not stable, so if the mesh isn't presmoothed with Lloyd or block, it will break down.
Highly interesting how it generalizes. I suppose I could do
Disadvantage besides the density not being implemented yet: It's not stable, so if the mesh isn't presmoothed with Lloyd or block, it will break down.
I've had similar experiences with mesh improvement strategies when the pre-smoothed mesh isn't of sufficiently high quality.
I think what I should perhaps do is leave the Laplacian smoothing that I wrote in SM as a user-defined option (on by default) and then defer to optimesh
for other forms of mesh improvement. I'd prefer this rather than adding more code to the generation module.
I think what I should perhaps do is leave the Laplacian smoothing that I wrote in SM as a user-defined option (on
Note that you can improve the classical Laplace smoothing by looking at it as a fixed-point problem, and then simply solve an equation system Ax = b. IIRC I detailed this somewhere in another bug report. This equation system is symmetric + positive-definite so it can be solved very fast. I think a solve costs about as much as 20 matrix-vector products, so it's probably worth it. That is to be investigated perhaps.
Note that you can improve the classical Laplace smoothing by looking at it as a fixed-point problem, and then simply solve an equation system Ax = b. IIRC I detailed this somewhere in another bug report. This equation system is symmetric + positive-definite so it can be solved very fast. I think a solve costs about as much as 20 matrix-vector products, so it's probably worth it. That is to be investigated perhaps.
Yes, I recall where you mentioned that. I'll work on that.
(I - A) x = rhs where the RHS is 0 except for the boundary conditions. (The boundary node coordinates don't change.)
So for the RHS, it's the boundary coordinates in the right places and 0 everywhere else?
Right. You have multiple right-hand-sides, one for each spatial coordinate.
Okay, so the assumption here, when solving the smoothing as a fixed-point problem, is that the mesh is already high-quality?
From what I know about fixed-point problems, you're finding an argument that, when passed to the function, returns the same argument (in this case, the positions of the nodes)?
Okay, so the assumption here, when solving the smoothing as a fixed-point problem, is that the mesh is already high-quality?
Nah, I think this works every time.
From what I know about fixed-point problems, you're finding an argument that, when passed to the function, returns the same argument (in this case, the positions of the nodes)?
When doing Laplace "stepping", you go
x_{n+1} = A x_n
until it doesn't move too much anymore, right? The limit (if it exists) would be called a fixed point since it maps to itself, i.e.,
x = A x
right? (The limit always exists if the map is a so-called contraction.)
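A tiny 1-D toy to illustrate (not mesh code; here A has zeroed boundary rows and the boundary values sit on the right-hand side, matching the (I - A) x = rhs form from above):

import numpy as np

# 6 points on a line; the two endpoints are "boundary" and stay fixed.
n = 6
A = np.zeros((n, n))
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = 0.5   # interior points -> average of their neighbors

x = np.array([0.0, 3.0, 0.5, 4.0, 1.0, 5.0])
rhs = np.zeros(n)
rhs[[0, -1]] = x[[0, -1]]             # boundary values on the right-hand side

# Fixed-point iteration ("Laplace stepping"): x_{n+1} = A x_n + rhs
x_iter = x.copy()
for _ in range(200):
    x_iter = A @ x_iter + rhs

# Jump straight to the fixed point: (I - A) x = rhs
x_direct = np.linalg.solve(np.eye(n) - A, rhs)

print(np.allclose(x_iter, x_direct))  # True; both give the straight line 0..5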
Okay, I think I understand. Let me try to code it now. It should be fairly straightforward as I already have A
and know which vertices are on the boundary.
Hm, tried the following but getting spaghetti meshes.
import numpy as np
from scipy import sparse as spsparse
from scipy.sparse.linalg import spsolve

# get_boundary_vertices is defined elsewhere in SM.


def _sparse(Ix, J, S, shape=None, dtype=None):
    """
    Similar to MATLAB's SPARSE(I, J, S, ...)
    """
    # Advanced usage: allow J and S to be scalars.
    if np.isscalar(J):
        x = J
        J = np.empty(Ix.shape, dtype=int)
        J.fill(x)
    if np.isscalar(S):
        x = S
        S = np.empty(Ix.shape)
        S.fill(x)
    # Turn these into 1-d arrays for processing.
    S = S.flat
    II = Ix.flat
    J = J.flat
    return spsparse.coo_matrix((S, (II, J)), shape, dtype)


def laplacian2_fixed_point(vertices, entities):
    """Solve the Laplacian smoothing problem as a fixed-point problem,
    i.e., solve (I - A) x = rhs once instead of
    repeating x_{n+1} = A x_n.

    :param vertices: vertex coordinates of mesh
    :type vertices: numpy.ndarray[`float` x dim]
    :param entities: the mesh connectivity
    :type entities: numpy.ndarray[`int` x (dim+1)]

    :return: vertices: updated vertices of mesh
    :rtype: numpy.ndarray[`float` x dim]
    :return: entities: updated mesh connectivity
    :rtype: numpy.ndarray[`int` x (dim+1)]
    """
    if vertices.ndim != 2:
        raise NotImplementedError("Laplacian smoothing only works in 2D for now")
    n = len(vertices)
    matrix = _sparse(
        entities[:, [0, 0, 1, 1, 2, 2]],
        entities[:, [1, 2, 0, 2, 0, 1]],
        1,
        shape=(n, n),
    )
    bnd = get_boundary_vertices(entities)
    matrix = matrix.tocsr()
    # Apply Dirichlet conditions.
    # Set all Dirichlet rows to 0.
    for b in bnd:
        matrix.data[matrix.indptr[b] : matrix.indptr[b + 1]] = 0.0
    # Set the diagonal and RHS.
    d = matrix.diagonal()
    d[bnd] = 1.0
    matrix.setdiag(d)

    rhs = np.zeros((n, 2))
    rhs[bnd] = vertices[bnd]

    vertices_new = spsolve(matrix, rhs)
    return vertices_new, entities
Are the boundaries staying where they are at least? If yes, the issue is probably with
entities[:, [0, 0, 1, 1, 2, 2]],
entities[:, [1, 2, 0, 2, 0, 1]],
Are the boundaries staying where they are at least? If yes, the issue is probably with
Unfortunately negative. The boundaries indeed move around. Strange; when I plot these boundary indices, they appear correct.
Why do you suggest this is likely a problem with the building of the sparse matrix?
Unfortunately negative. The boundaries indeed move around.
That's good, the problem should be easy to find then. I'd make a small mesh with like 10 points and check by print()ing that the matrix and rhs are correct.
Cool, got it working. The matrix wasn't formed correctly, as you suspected.
https://github.com/krober10nd/SeismicMesh/pull/189/commits/c0056f15d10f0663df987d16908e91eacf3329d9
So one linear solve now.
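For the record, here's a rough sketch of what the fix amounts to (the actual change is in the linked commit): A needs to be the row-normalized adjacency so that A x really averages the neighbor coordinates, with identity rows at the boundary.

import numpy as np
from scipy import sparse as spsparse
from scipy.sparse.linalg import splu


def laplacian2_fixed_point_sketch(vertices, entities, bnd):
    """Hedged sketch, not the committed code: assemble (I - A) with A the
    row-normalized adjacency and solve once per coordinate."""
    n = len(vertices)
    # Adjacency from the triangle edges; duplicate entries are summed by coo_matrix.
    rows = entities[:, [0, 0, 1, 1, 2, 2]].ravel()
    cols = entities[:, [1, 2, 0, 2, 0, 1]].ravel()
    adj = spsparse.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n)).tocsr()
    # Row-normalize so that (A @ x) is the average of the neighbor coordinates.
    inv_deg = 1.0 / np.asarray(adj.sum(axis=1)).ravel()
    matrix = (spsparse.identity(n) - spsparse.diags(inv_deg) @ adj).tocsr()
    # Dirichlet rows: zero the row, 1 on the diagonal, boundary coordinates in the RHS.
    for b in bnd:
        matrix.data[matrix.indptr[b] : matrix.indptr[b + 1]] = 0.0
    d = matrix.diagonal()
    d[bnd] = 1.0
    matrix.setdiag(d)
    rhs = np.zeros_like(vertices)
    rhs[bnd] = vertices[bnd]
    lu = splu(matrix.tocsc())
    return np.column_stack([lu.solve(rhs[:, k]) for k in range(vertices.shape[1])])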
I guess multigrid would really be the best option to solve the system here? I'm not sure how scipy.sparse.linalg.spsolve() solves the system...
I guess multigrid would really be the best option to solve the system here?
You'd ideally solve with CG preconditioned with MG. The only MG package I know is PyAMG and I'm not sure how it performs against spsolve (which uses Gaussian elimination with pivoting IIRC). Would be interested to hear how it performs.
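Roughly like this (a sketch with a stand-in Poisson matrix from pyamg's gallery; aspreconditioner wraps the multigrid hierarchy as a preconditioner for scipy's CG):

import numpy as np
import pyamg
from pyamg.gallery import poisson
from scipy.sparse.linalg import cg

# Stand-in SPD matrix; swap in the smoothing matrix/rhs from above.
matrix = poisson((300, 300), format="csr")
rhs = np.ones(matrix.shape[0])

ml = pyamg.ruge_stuben_solver(matrix)
M = ml.aspreconditioner(cycle="V")   # one V-cycle per CG iteration
x, info = cg(matrix, rhs, M=M)
print(info)                          # 0 means CG converged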
Well, this is significantly faster (~50%) than spsolve for a ~1 million vertex mesh, and the gap widens with larger problem sizes.
# use AMG
ml = pyamg.ruge_stuben_solver(matrix)
vertices_new = np.column_stack([ml.solve(rhs[:, 0]), ml.solve(rhs[:, 1])])
Gaussian elimination with pivoting is O(n^3), I believe?
Pretty impressive how pyamg only uses numpy and scipy!
Let me say something about the Laplace smoothing in SM. This is not an issue per se, so I'll immediately close it.
I realize that Laplace improves the mesh quality, and it will do the same for all meshes. That includes meshes generated by other mesh generators, so you could also write a wrapper for gmsh that does what gmsh does plus a few Laplace steps. Is this now a better mesh generator? It's always a question of quality vs. generation time, so it's more a choice for the mesh generator's authors. If you add Laplace, you're saying "ah, it's slow anyway, so at least let the mesh quality be stellar." That's an understandable thing to do.
Some questions go in the same direction: Why Laplace smoothing and not some other method? Why 20 steps and not 50 or 3? Should this be user-configurable?
Also, classical Laplace is perhaps not the best choice of smoother that you can make. Instead of 20 steps you could do infinitely many: Note that when doing the update
x_{n+1} = A x_n
you could immediately jump to the fixed point of the contraction by solving
(I - A) x = rhs
where the RHS is 0 except for the boundary conditions. (The boundary node coordinates don't change.) This involves solving a symmetric positive-definite equation system, which can be done in O(n) (with multigrid, for example). More costly than 3 Laplace steps for sure, but maybe not more costly than 20. That's something to consider at least. (optimesh does that, too.)