mfem / mfem

Lightweight, general, scalable C++ library for finite element methods
http://mfem.org
BSD 3-Clause "New" or "Revised" License

Element transforms & 4D mesh refinement #259

Closed kvoronin closed 6 years ago

kvoronin commented 7 years ago

Hello,

I am currently thinking of extending MFEM's meshing to 4D with respect to constructing a hierarchy of meshes and obtaining the interpolation matrices between the levels. I now have a 4D mesh with pentatopes (the 4D counterparts of tetrahedra), but to implement Update() for FiniteElementSpace I believe I need to set up CoarseFineTr in the 4D case. So, I'd like to ask about the variable "transform" in, say, the Tetrahedron class and Bisection(). As far as I understand, "transform" is defined during the element refinement process and then used in GetRefinementTransforms() to define the point matrices in CoarseFineTr. Am I right?

How are the different types of "transform" defined? How exactly is the longest edge of a tetrahedron used in this process? Could someone please explain why different types of tetrahedra are considered - Tetrahedron::TYPE_M, Tetrahedron::TYPE_A, ... - and what the idea behind them is?

I would be very grateful for any piece of advice.

Thanks, Kirill

jakubcerveny commented 7 years ago

Hi Kirill,

You are right. I believe the algorithm used for tet bisection comes from

Arnold, et al.: Locally Adapted Tetrahedral Meshes Using Bisection, SISC, 2000 http://epubs.siam.org/doi/abs/10.1137/S1064827597323373

The algorithm identifies several situations (TYPE_X) and determines how to bisect tets so that the angles in the mesh do not deteriorate. We don't implement the algorithm exactly, but what we have seems sufficient. I think there is a rule in MFEM that the first edge of any tet is always the longest edge, to simplify the algorithm.

I can dig into the code if you have any more questions.

Jakub


v-dobrev commented 7 years ago

When a tet mesh is marked for refinement with Mesh::MarkForRefinement(), the vertices of each tet are permuted so that the longest edge of the tet becomes the edge between vertices 0 and 1. It is important that the longest edge in each tet is chosen consistently in neighboring tets, so the algorithm in MFEM uses a global sort of all edges by length to determine the longest edge in each tet. The idea is that the edge between vertices 0 and 1 becomes the marked edge, i.e. the edge that will be bisected during refinement. Initially, this is the longest edge in the element (with equal-length edges ordered according to the global sort); later, however, the bisection algorithm may choose to mark an edge that is not the longest.

When a tet is bisected, its type (M, A, etc.) determines which edges in the two children become marked, as well as what types are assigned to them. The initial type of the tet is also determined based on the globally sorted edges - see the method Tetrahedron::MarkEdge() in tetrahedron.cpp for more details.
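To make the idea concrete, here is a minimal sketch of the per-element marking step - this is not MFEM's implementation (which really does sort all mesh edges globally), and all names in it are invented for illustration. It picks the tet edge that is largest in a globally consistent ordering (length first, global vertex indices as a tie-break) and permutes the vertices so that this edge becomes edge (0,1):

```cpp
// Illustrative sketch only -- not MFEM's code; types and names are made up.
#include <algorithm>
#include <array>
#include <cmath>
#include <tuple>
#include <vector>

struct Vertex { double x, y, z; };

static double EdgeLength(const Vertex &a, const Vertex &b)
{
   return std::hypot(std::hypot(a.x - b.x, a.y - b.y), a.z - b.z);
}

// Permute the 4 vertex indices of 'tet' so that its marked edge becomes the
// edge between local vertices 0 and 1.
void MarkTet(std::array<int, 4> &tet, const std::vector<Vertex> &verts)
{
   static const int edges[6][2] = {{0,1},{0,2},{0,3},{1,2},{1,3},{2,3}};

   // Ordering of edges: longer first; equal lengths are ordered by their
   // global vertex indices. Because the tie-break uses global indices, two
   // neighboring tets rank the edges of their shared face the same way
   // (the role played by the global edge sort described above).
   auto key = [&](int e)
   {
      int a = std::min(tet[edges[e][0]], tet[edges[e][1]]);
      int b = std::max(tet[edges[e][0]], tet[edges[e][1]]);
      return std::make_tuple(EdgeLength(verts[a], verts[b]), -a, -b);
   };

   int best = 0;
   for (int e = 1; e < 6; e++)
   {
      if (key(e) > key(best)) { best = e; }
   }

   // Move the marked edge into positions (0,1); the remaining two vertices
   // fill positions 2 and 3.
   int i0 = edges[best][0], i1 = edges[best][1];
   int rest[2], r = 0;
   for (int i = 0; i < 4; i++)
   {
      if (i != i0 && i != i1) { rest[r++] = i; }
   }
   std::array<int, 4> p = tet;
   tet = { p[i0], p[i1], p[rest[0]], p[rest[1]] };
}
```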

kvoronin commented 7 years ago

Thanks, @tzanio and @jakubcerveny! Do you know the reason for introducing the Pu and Pf types, i.e. the difference between them? More precisely, I am not sure why the Pf case is introduced. Does the mesh deteriorate without it?

v-dobrev commented 7 years ago

The Pu and Pf types come from the paper that Jakub linked above. My guess is that without the distinction of the Pu and Pf types you cannot write an algorithm that guarantees preservation of the quality of the generated tets.

Note that the algorithm implemented in MFEM is not exactly the algorithm from the paper, specifically when performing local refinement (i.e. when not all elements in the mesh are refined). In that case, one can observe some deterioration of the quality of the tets after consecutive local refinements. With uniform refinement (i.e. when all elements are refined), the quality of the tets may get worse only after the first two levels of uniform refinement. After that no new shapes of tets can be generated through uniform refinement.

The modification to the algorithm we use is the following: after refinement (including propagation of forced refinements) we replace the type of the Pf tets with type Pu.
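As a tiny sketch of what this post-refinement pass amounts to (with made-up type names and element list, not MFEM's actual enum or data structures):

```cpp
#include <vector>

// Illustrative stand-ins for the Pu/Pf/A/M types discussed above.
enum class TetType { PU, PF, A, M };

struct Tet { TetType type; };

// After refinement (including propagation of forced refinements),
// re-label every Pf tet as Pu.
void UnflagPfTets(std::vector<Tet> &elements)
{
   for (Tet &t : elements)
   {
      if (t.type == TetType::PF) { t.type = TetType::PU; }
   }
}
```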

kvoronin commented 7 years ago

@v-dobrev, may I ask what the purpose of the modification was? I think it pretty much affects the proof in Arnold's paper. @v-dobrev, @jakubcerveny, do you know a 4-dimensional (or n-dimensional) algorithm that has a proof of quality? I've found a recent short survey by Mitchell http://math.nist.gov/~WMitchell/papers/wfmICNAAM15full.pdf but it does not mention an algorithm that works for general simplicial meshes and has a proof for dim > 3.

v-dobrev commented 7 years ago

Yes, with the modification of the algorithm described above, there is no guarantee that the quality of the tets will not deteriorate.

The reason for the modification is to avoid the cases where more than 3 levels of bisection need to be performed in a single call to Mesh::LocalRefinement. To support that case, one needs a more flexible data structure than the DSTable class; e.g. DSTable does not allow insertion of newly created vertices and the corresponding newly created edges. It may be possible to use the data structures of class NCMesh; however, we have not explored that possibility yet.
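For illustration, this is the kind of dynamic edge-to-midpoint lookup that deeper bisection would require - it is only a sketch of the missing capability and does not reflect DSTable's actual interface:

```cpp
#include <algorithm>
#include <map>
#include <utility>

// Maps an edge (given by its two vertex indices) to the index of its midpoint
// vertex, creating the midpoint on demand. Unlike a table built once from the
// initial mesh, this allows edges between newly created vertices to be added
// during the refinement itself.
class EdgeMidpointMap
{
   std::map<std::pair<int,int>, int> mid_; // (min(v0,v1), max(v0,v1)) -> midpoint

public:
   // 'new_vertex' is a user-supplied callback (e.g. a lambda) that appends the
   // averaged coordinates of vertices a and b to the vertex array and returns
   // the index of the new vertex.
   template <typename NewVertexFn>
   int GetOrCreateMidpoint(int a, int b, NewVertexFn new_vertex)
   {
      std::pair<int,int> key = std::minmax(a, b);
      auto it = mid_.find(key);
      if (it != mid_.end()) { return it->second; }
      int v = new_vertex(a, b);
      mid_[key] = v;
      return v;
   }
};
```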

I do not know of algorithms for local refinement of simplices in four or higher dimensions. Maybe @kennyweiss, @marneum, or @jakubcerveny know more and can point you in the right direction?

kvoronin commented 7 years ago

Thanks, @v-dobrev.

Could you explain a bit how these 3 levels of bisection can happen in the current implementation? I thought there would already be trouble with newly created edges if, inside LocalRefinement, a second-level bisection tried to check the middle array for the tetrahedra created during the first level of bisection. Is this wrong?

v-dobrev commented 7 years ago

The algorithm actually guarantees that the first three levels of bisection will not try to refine any of the original edges more than once. Also, if you do exactly three levels of bisection, then all 6 edges of the tet will be refined - split in 2.

kvoronin commented 7 years ago

I thought that if we do three bisection levels for a given tetrahedron, we get 1->2->4 tetrahedra and a total of 7 edge refinements, so one of the refined edges would necessarily be either a new edge that was not added to middle[] or a second refinement of one of the original 6 edges of the tetrahedron. Or is it the case that at each bisection level only one tetrahedron is divided?

v-dobrev commented 7 years ago

What I mean is: each arrow denotes one level of bisection, so we get 1->2->4->8 elements at the end. Also, some of the 2nd and 3rd level bisections will use the same edge as the refinement edge in two different elements, so you bisect two elements but only one edge.
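To spell out the arithmetic: three levels amount to 1 + 2 + 4 = 7 individual bisections, while the tet has only 6 edges, so exactly one original edge has to serve as the refinement edge for two different elements - two element bisections, but only one new edge midpoint - which is consistent with every original edge being split exactly once.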

kvoronin commented 7 years ago

Thanks!

May I also ask how the refinement is propagated? E.g., when the red refinement is done and one process learns that, say, one edge of its shared face was bisected by another process in the neighboring tetrahedron but was not bisected by the current process, then this edge is marked (at the end of the do{}while(need refinement) loop in ParMesh::LocalRefinement) and a call to Bisection follows in the next iteration of this do-while loop. But Bisection always splits the refinement edge, which connects the first two vertices after the appropriate vertex reordering and is in general defined by the marking rules. And we need to bisect the particular edge indicated by the neighboring process. How does it turn out that the refinement edge defined by the process locally (before the call to LocalRefinement) is the same as the edge obtained through the MPI communication with the neighbor?

v-dobrev commented 7 years ago

The tets are marked consistently across processors - as inherited from the serial mesh before the parallel partitioning. The consistently marked tets guarantee that a face between any two tets will be refined the same way from both sides. Essentially, the process of marking a tet specifies how each of its faces will be refined.

kvoronin commented 7 years ago

Why then does one need to exchange refined_edge[5][3] (a temporary array in LocalRefinement for the tetrahedral case), which stores all possible combinations of edges bisected in the shared face? I mean, isn't it enough to know only that the shared face was refined, but not exactly how?

v-dobrev commented 7 years ago

If performing uniform refinement, yes, there will be no need to exchange any information. However, in the case of local refinement you need to know which of the 5 possible cases of face refinement was actually performed on the other side of a shared face. The 5 cases are as described in the comment before the implementation of ParMesh::GetFaceSplittings.

kvoronin commented 7 years ago

Thanks, I was thinking only of the full octasection when I asked the previous question. How do you ensure that after 3 levels of bisection all edges of the initial tetrahedron are split once and only once? I mean, what was the design idea?

v-dobrev commented 7 years ago

This property follows from the algorithm in the paper.

kvoronin commented 7 years ago

Is it correct to say that in the implemented algorithm for tetrahedrons the new edge (created during the first 3 levels of bisection) is never split? Is it true that during green refinement the 3D algorithm just looks at the faces of the neighbors and then does bisection after bisection until the faces of the considered tetrahedron have the same refined edges?

v-dobrev commented 7 years ago

Is it correct to say that in the implemented algorithm for tetrahedrons the new edge (created during the first 3 levels of bisection) is never split?

Yes, each of the 6 edges of a tet can be split at most 1 time when using up to 3 levels of bisection.

Is it true that during green refinement the 3D algorithm just looks at the faces of the neighbors and then does bisection after bisection until the faces of the considered tetrahedron have the same refined edges?

During green refinement, every tetrahedron is checked to see whether it "needs refinement" by calling the method Tetrahedron::NeedRefinement, and if it does, the element is bisected once. The method NeedRefinement returns true if any of the element's edges has been refined. When a tet is bisected, it is replaced (in the list of elements) by one of its children and the other child is appended at the end of the element list. That way, the children will be checked for refinement in the next loop over the elements. If no elements "need refinement", the green refinement step is done.
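A schematic sketch of this closure loop (simplified stand-in types, not MFEM's actual classes):

```cpp
#include <cstddef>
#include <vector>

struct Tet
{
   bool needs_refinement; // stand-in for Tetrahedron::NeedRefinement(), which
                          // checks whether any edge of the tet has been refined
};

// Stand-in for the bisection step: elements[i] is replaced by one of its
// children and the other child is appended at the end of the element list.
static void Bisect(std::vector<Tet> &elements, std::size_t i)
{
   Tet child1 = {false}, child2 = {false}; // in the real code, whether a child
                                           // needs refinement depends on which
                                           // of its edges are already refined
   elements[i] = child1;
   elements.push_back(child2);
}

// Keep looping over the (growing) element list and bisecting elements that
// need refinement; stop when a full pass finds nothing left to refine.
void GreenRefinementClosure(std::vector<Tet> &elements)
{
   bool need_refinement = true;
   while (need_refinement)
   {
      need_refinement = false;
      for (std::size_t i = 0; i < elements.size(); i++)
      {
         if (elements[i].needs_refinement)
         {
            Bisect(elements, i);
            need_refinement = true;
         }
      }
   }
}

int main()
{
   std::vector<Tet> mesh = {{true}, {false}, {true}}; // toy element list
   GreenRefinementClosure(mesh);
   return 0;
}
```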

kvoronin commented 7 years ago

Thanks! As far as I understand, what is needed in 4D is essentially a set of rules (for certain types of 4-simplices) which, after 4 levels of bisection, decompose the original 4-simplex into 16 subsimplices in a compatible way and such that every initial edge is split only once. The compatibility can be ensured if every shared face is always refined in the same way by the neighboring subsimplices. It still seems quite complicated, mainly because in the 4D case there is not one, but three edges of the 4-simplex opposite to a given edge.

What do you think of the idea of doing a computational brute force over all possible rules to find a working combination? I am thinking of marking one edge in each tetrahedral face of the 4-simplex, so that there is a refinement edge (shared by three of the faces) and two marked edges for the remaining two faces (tetrahedra). And, looking at the 3D case, where there are flagged and unflagged tetrahedra, I will need to introduce the possibility of flags into the rules, e.g. making rules of the form (type, color) -> (type_subsimplex1, color1) + (type_subsimplex2, color2) which define the types and colors of the children. I understand this is a kind of open question, but with your experience, do you think such a brute force can lead to a solution?

marneum commented 7 years ago

Hi Kirill,

in the work https://link.springer.com/article/10.1007/s00791-012-0174-z we have given all possible decompositions of a pentatope into 16 smaller pentatopes ("all" in the sense that one new node is introduced at each edge). The idea is to fix one of the three edges of the boundary tetrahedron and look for a decomposition of the pentatope. With this strategy, "only" 87 decompositions are possible. Twelve of them are the ones that can be obtained by the algorithm of Freudenthal (we called them cyclic, since the graph of the fixed boundary edges forms a cyclic graph), and we also found 75 other decompositions. But only for the cyclic decompositions is it ensured that the refinements will not degenerate, see https://link.springer.com/article/10.1007/s002110050475.

For adaptive refinement in 4D I would recommend using a non-conforming mesh and adding constraints for conformity. That way the already implemented red refinement, which guarantees that the elements do not degenerate, can be used, and the conformity constraints can be eliminated algebraically in the usual way.

I hope this helps a little bit.

Best Martin.

kvoronin commented 7 years ago

Thanks for the idea, @marneum! Can you please explain (or give a reference for) what you mean by eliminating conformity constraints? Do you mean eliminating them algebraically in the local matrices of the neighbors of the red-refined pentatopes, so that first a local matrix for the neighbor is assembled as if it were red-refined, and then the additional dofs at the edge midpoints are eliminated by saying that they are just averages of the dofs at the edge vertices?

marneum commented 7 years ago

Yes, exactly - I mean eliminating them algebraically, as is already done in MFEM for non-conforming meshes. Of course this will also require some effort, but I think it is the easiest option.
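For a lowest-order example of what this elimination looks like (a generic sketch of hanging-node elimination, not a description of MFEM's exact operators): collect the unconstrained ("true") dofs in $u_c$, write the full dof vector as $u = P u_c$, where the row of $P$ for a constrained edge-midpoint dof averages the two dofs at the ends of the parent edge, and solve the reduced system:

$$
u = P\,u_c, \qquad P^T A P\,u_c = P^T b, \qquad
u_{\mathrm{mid}} = \tfrac{1}{2}\bigl(u_{v_0} + u_{v_1}\bigr).
$$

For higher-order spaces the idea is the same, except that the rows of $P$ contain the interpolation weights of the coarse-side shape functions instead of simple averages.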

Best Martin.

kvoronin commented 6 years ago

@tzanio,

May I ask again about the modification of the paper's algorithm in MFEM, i.e. the unflagging of Pf tetrahedra? I have developed an algorithm to some extent in 4D, but unlike in the 3D case, there is a loop A1->A2->P1->P2->A1, and type P (with exactly the same definition as in 3D) shouldn't appear both at the beginning and at the second step of the refinement. So, it seems that one needs to replace both P1 and P2 (the equivalents of Pu and Pf) by other types. This will require not only changing the flag, but essentially replacing one of the marked edges in such a way that the markers remain compatible. Do you think this is feasible?

KazemKamran commented 6 years ago

Hi Kirill,

Regarding mesh adaptation, I would like to add that, in addition to what is already implemented in MFEM, we have integrated, through the CEED project, fully automated 3D mesh adaptation in MFEM. The package is called PUMI and you can find more information about it on the CEED website and in the references therein: http://ceed.exascaleproject.org/pumi/

In simple words, given a size field, for conforming tetrahedral meshes the PUMI APIs provide the necessary refinement/derefinement to produce a quality mesh that respects the size field. If you think it might be useful for what you are doing, let us know and we may be able to help.

Thank you Kazem

tzanio commented 6 years ago

@kvoronin wrote:

This will require not only changing the flag, but essentially replacing one of the marked edges in such a way that the markers remain compatible. Do you think this is feasible?

I think it is feasible, but will probably be challenging (in particular with respect to debugging :))

tzanio commented 6 years ago

Closing this issue now, @kvoronin feel free to re-open if you have additional questions.

kvoronin commented 6 years ago

@v-dobrev,

Are there any guarantees that the refine-to-conformity (closure) algorithm will not end up uniformly refining the entire mesh? In the paper mentioned above (as I understand it) they provide only this type of bound, which seems impractical.

Btw, I think the paper's authors also unmark the Pf types after each refinement step. Otherwise I don't see how their proof works if, at the start of the second refinement process, one of the tetrahedra that was refined to a Pf type as a neighbor during the first refinement process is marked for refinement (e.g. if we enlarge the group of tetrahedra marked to be refined for the second process).

v-dobrev commented 6 years ago

I do not know of any algorithmic guarantees about locality - just what you see in the paper.

I think if you un-mark Pf types you lose the guarantee for shape quality preservation.