KulaginVladimir opened this issue 4 months ago
I think to start with we could at least make sure that the values in the txt file are sorted by x.
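A minimal sketch of that sorting step, assuming the exported data is held as a NumPy array with x in the first column (the array and its values here are hypothetical, not FESTIM internals):

```python
import numpy as np

# Hypothetical unsorted data as it might land in the txt file: columns are (x, value)
data = np.array([
    [0.5, 2.0],
    [0.0, 1.0],
    [1.0, 3.0],
])

# Reorder rows by the x column before writing the file
sorted_data = data[data[:, 0].argsort()]
print(sorted_data[:, 0])  # x values now ascending: [0.  0.5 1. ]
```

`argsort` is stable, so duplicated x entries (e.g. at interfaces) keep their relative order.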
@RemDelaporteMathurin:

> If there are interfaces, since TXTExport is only used in 1D, project to DG but only keep the duplicates where there are interfaces (based on the materials border argument)
Based on your recent example, such filtering might be more complex.
Yes, for traps funny things could happen, but I believe we should still do this, as I don't expect it would drastically alter the profile.
@RemDelaporteMathurin:

> If there are interfaces, since TXTExport is only used in 1D, project to DG but only keep the duplicates where there are interfaces (based on the materials border argument)
If the vertices of a mesh do not match the interfaces (borders), shall we try to keep the duplicates near the interfaces?
Say:

```python
import festim as F
import numpy as np

model.materials = [
    F.Material(
        id=1,
        D_0=1,
        E_D=0,
        S_0=1,
        E_S=0,
        borders=[0, 0.5],
    ),
    F.Material(
        id=2,
        D_0=1,
        E_D=0,
        S_0=2,
        E_S=0,
        borders=[0.5, 1.0],
    ),
]
model.mesh = F.MeshFromVertices(np.linspace(0, 1, 30))
```
The closest vertices to the interface at x=0.5 are x=0.48275862 and x=0.51724138.
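Those two values can be checked directly from the mesh definition above (`np.linspace(0, 1, 30)` gives a spacing of 1/29, so no vertex falls exactly on 0.5):

```python
import numpy as np

vertices = np.linspace(0, 1, 30)
interface_x = 0.5

# The vertices bracketing the interface: largest below and smallest above
below = vertices[vertices < interface_x].max()
above = vertices[vertices > interface_x].min()
print(below, above)  # 0.4827586... 0.5172413...
```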
I'm not sure where fenics "thinks" the interface is. I guess it's the closest vertex. So in your example, we could do something along the lines of:
```python
for vertex in vertices:
    if not isclose(vertex, interface_x):
        # remove duplicate
```
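A runnable sketch of that idea, assuming the DG projection yields arrays where every interior vertex appears twice, and keeping duplicates only at material borders (`filter_duplicates` is a hypothetical helper, not FESTIM API):

```python
import numpy as np

def filter_duplicates(x, values, interfaces, tol=1e-12):
    """Keep duplicated x entries only at material interfaces.

    x, values: 1D arrays of DG-projected data (interior vertices duplicated).
    interfaces: x-coordinates of the material borders.
    """
    keep = np.ones(len(x), dtype=bool)
    seen = set()
    for i, xi in enumerate(x):
        at_interface = any(np.isclose(xi, itf, atol=tol) for itf in interfaces)
        if not at_interface:
            if xi in seen:
                keep[i] = False  # drop the duplicate away from interfaces
            seen.add(xi)
    return x[keep], values[keep]

# Toy data: solubility jump at the interface x = 0.5
x = np.array([0.0, 0.25, 0.25, 0.5, 0.5, 0.75, 0.75, 1.0])
v = np.array([1.0, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 2.0])
xf, vf = filter_duplicates(x, v, interfaces=[0.5])
print(xf)  # duplicate rows kept only at x = 0.5
```

When the mesh vertices don't coincide with the border (as in the `linspace(0, 1, 30)` example), `tol` would have to be widened to roughly half the cell size for the check to catch the vertex nearest the interface.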
I think the first point can be addressed in a separate issue.
As highlighted by @rekomodo on discourse, TXTExport produces duplicates in the output file, which may confuse users. Some suggestions from @RemDelaporteMathurin to improve this behaviour: