@nschloe this is a potential future solution to improve parallel efficiency, but it results in some seams.
The method is analogous to solving two separate, decoupled meshing problems at the same time. At the end of each iteration set, the subdomains exchange "boundary conditions" so that the connectivity across the border becomes valid and the triangulation is Delaunay.
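A minimal sketch of the exchange idea, assuming an mpi4py setup with exactly two ranks split at x = 0; the helper `exchange_halo` and the halo `width` parameter are hypothetical and not the SeismicMesh API:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # assume exactly two ranks, one subdomain each


def exchange_halo(local_points, width):
    """Send points near the shared border to the neighboring rank and receive
    its points in return (hypothetical helper; the split is assumed at x = 0)."""
    neighbor = 1 - rank
    near_border = np.abs(local_points[:, 0]) < width
    received = comm.sendrecv(local_points[near_border], dest=neighbor, source=neighbor)
    # appending the neighbor's halo lets the local triangulation become valid
    # (and Delaunay) across the shared border
    return np.vstack([local_points, received])
```

Each subdomain iterates independently and only calls something like `exchange_halo` at the end of an iteration set, which is what produces the seams when the exchange is deferred for too long.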
Keeping the step size constant during sliver removal seems to perform worse than adapting it based on its success at reducing slivers.
Per the experiment with the code below, a step change gamma of 0.10 is the most successful at reducing slivers, so I think it should be the default.
```python
import math
from random import randint

import matplotlib.pyplot as plt
import numpy as np
from matplotlib import cm

import SeismicMesh

# dihedral angle bounds used to classify an element as a sliver
min_dh_bound = 10 * math.pi / 180
max_dh_bound = 180 * math.pi / 180

# color depends on step change
cmap = cm.get_cmap("tab20", 10)
colors = cmap.colors


def calc_dh_angles(points, cells):
    # return the indices of elements with an out-of-bounds dihedral angle
    dh_angles = SeismicMesh.geometry.calc_dihedral_angles(points, cells)
    out_of_bounds = np.argwhere(
        (dh_angles[:, 0] < min_dh_bound) | (dh_angles[:, 0] > max_dh_bound)
    )
    ele_nums = np.floor(out_of_bounds / 6).astype("int")
    ele_nums, ix = np.unique(ele_nums, return_index=True)
    return ele_nums


def box_with_refinement(h, gamma):
    cube = SeismicMesh.geometry.Cube((-1.0, 1.0, -1.0, 1.0, -1.0, 1.0))

    def edge_length(x):
        return h + 0.1 * np.sqrt(x[:, 0] ** 2 + x[:, 1] ** 2 + x[:, 2] ** 2)

    points, cells = SeismicMesh.generate_mesh(
        domain=cube, h0=h, edge_length=edge_length, verbose=0, seed=randint(0, 10)
    )
    points, cells = SeismicMesh.sliver_removal(
        domain=cube,
        points=points,
        h0=h,
        edge_length=edge_length,
        gamma=gamma,
        verbose=0,
    )
    return len(calc_dh_angles(points, cells))


# step change
gammas = np.linspace(1.0, 0.1, num=10)
NUM = 15
count = 0
# repeat x times
for _ in range(1):
    for ix, gamma in enumerate(gammas):
        hmin = np.logspace(-1.0, -2.0, num=NUM)
        # number of slivers as a function of the step
        number_of_slivers = np.zeros(NUM)
        for ixx, h in enumerate(hmin):
            number_of_slivers[ixx] = box_with_refinement(h, gamma)
            count += 1
            print(count)
        plt.plot(
            hmin,
            number_of_slivers,
            "o-",
            color=colors[ix],
            label=f"gamma = {gamma:.2f}",
        )
plt.xlabel("minimum edge length")
plt.ylabel("final number of slivers")
plt.legend()
plt.grid()
plt.show()
```
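The experiment above varies the step-change factor gamma passed to `SeismicMesh.sliver_removal`. For reference, here is a minimal sketch of one plausible interpretation of that factor, shrinking the perturbation step when a pass fails to reduce slivers and cautiously restoring it when it succeeds; the update rule, bounds, and helper name are assumptions, not the actual internals:

```python
def adapt_step(step, slivers_before, slivers_after, gamma=0.10, grow=1.5, max_step=1.0):
    """Hypothetical step-size update for the sliver-removal gradient ascent:
    shrink by `gamma` after an unsuccessful pass, grow again after a success."""
    if slivers_after < slivers_before:
        return min(step * grow, max_step)
    return max(step * gamma, 1e-3)
```

Under this reading, gamma = 1.0 corresponds to the constant-step case, while gamma = 0.1 backs off aggressively after an unsuccessful pass.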
Hm, looks like this results in some distortion of the boundary elements after all.
Associative containers are causing tons of cache misses --> plan to switch to a jagged array of vectors.
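A minimal NumPy sketch of the jagged-array idea (an offsets array plus one flat data array) applied to the vertex-to-element table, which also gives the linear-complexity neighbor lookup mentioned below; the layout and helper name are illustrative, not necessarily what the C++ code ends up using:

```python
import numpy as np


def vertex_to_elements(cells, num_points):
    """Build a jagged vertex->element table in O(number of cells) using two flat
    arrays; elements[offsets[v]:offsets[v + 1]] are the cells touching vertex v.
    Contiguous storage avoids the pointer chasing of a map of vectors."""
    cells = np.asarray(cells)
    counts = np.bincount(cells.ravel(), minlength=num_points)
    offsets = np.concatenate(([0], np.cumsum(counts)))
    elements = np.empty(offsets[-1], dtype=int)
    cursor = offsets[:-1].copy()
    for c, cell in enumerate(cells):
        for v in cell:
            elements[cursor[v]] = c
            cursor[v] += 1
    return offsets, elements
```

Querying the elements around vertex `v` is then a single contiguous slice, `elements[offsets[v]:offsets[v + 1]]`, instead of a hash or tree lookup.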
Okay, there are still quite a few cache misses (faults) in parallel compared to serial. I suspect this originates from how I'm using the pybind11 interface.
Performance in 2D is better for the BP2004 example (170k-vertex mesh), ignoring Laplacian smoothing.
- variable step length in the gradient ascent used to perform sliver removal
- more 3D minimum-quality tests
- a linear-complexity approach to find the elemental neighbors of each vertex (cf. the jagged-array sketch above)
- moving the vertex-to-entities mapping into C++
- moving the block and rectangle SDFs to C++ (a rectangle SDF sketch follows this list)
- modifications to the C++ code based on line profiling
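For the SDF item above, a minimal NumPy sketch of an axis-aligned rectangle signed-distance function (negative inside, positive outside); the exact form and signature in SeismicMesh may differ:

```python
import numpy as np


def rectangle_sdf(points, x1, x2, y1, y2):
    """Signed distance to the axis-aligned rectangle [x1, x2] x [y1, y2]."""
    p = np.asarray(points, dtype=float)
    # per-axis distance to the nearest pair of faces (negative when between them)
    dx = np.maximum(x1 - p[:, 0], p[:, 0] - x2)
    dy = np.maximum(y1 - p[:, 1], p[:, 1] - y2)
    inside = np.maximum(dx, dy)  # <= 0 only when the point is inside
    outside = np.hypot(np.maximum(dx, 0.0), np.maximum(dy, 0.0))
    return np.where((dx > 0.0) | (dy > 0.0), outside, inside)
```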
Only perform the halo vertex exchange during the first set of iterations and the last few. The longer the exchange is avoided, the more apparent the seams between subdomains become. Scaling becomes approximately linear for shorter exchange durations, at the expense of more seams.
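A minimal sketch of that schedule, assuming `max_iter` total iterations; the `head` and `tail` counts are hypothetical knobs for how long the exchange stays active:

```python
def should_exchange(iteration, max_iter, head=10, tail=5):
    """Exchange halo vertices only during the first `head` and last `tail`
    iterations; skipping the middle improves scaling at the cost of seams."""
    return iteration < head or iteration >= max_iter - tail
```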