libMesh / libmesh


Proper Ghosted Elements #1028

Closed friedmud closed 6 years ago

friedmud commented 8 years ago

This is a follow-on from the discussion in #977 . I'm going to repost most of my comment from there:

We need the ability to just flag elements (any element) as needing to be "ghosted" locally... and libMesh should just treat that element as if it's semilocal. It should never get removed by ParallelMesh and its dofs should always be ghosted in all relevant ghosted vectors.

Basically: I want the ability to arbitrarily mark elements as needing to be treated just like the "one layer" of halo elements we always have.

This issue is not just relevant to periodic boundaries... it happens anytime you have a "gap" in the mesh and you want to keep elements on both sides of the gap for whatever reason.

For instance, in MOOSE this happens because of contact. Because of that we have quite a bit of code to try to force libMesh into not deleting elements and ghosting those extra degrees of freedom. To deal with this we have a few things:

  1. ghosted_boundaries. This is an input file parameter that a user can supply that will cause any elements connected to those sidesets to be ghosted to all processors (definitely non-optimal and heavy handed but it works for periodic and for contact boundaries). You can see the code that does the ghosting here:

    https://github.com/idaholab/moose/blob/devel/framework/src/mesh/MooseMesh.C#L2116

  2. A couple of years ago I added libMesh::DistributedMesh::add_extra_ghost_elem(), which at least tries to mark elements so that they won't be removed (a usage sketch appears below this list). That function is here: https://github.com/libMesh/libmesh/blob/master/src/mesh/parallel_mesh.C#L1415

    And you can see the extra code that avoids removing the elems flagged this way here:

    https://github.com/libMesh/libmesh/blob/master/src/mesh/mesh_communication.C#L1349

  3. We do a LOT of work in MOOSE to try to make sure the dofs on these "extra ghost elems" end up in ghosted vectors. It starts by keeping track of them here: https://github.com/idaholab/moose/blob/devel/framework/src/base/FEProblem.C#L913

    And then adding any dofs connected to those elements to the send_list here:

    https://github.com/idaholab/moose/blob/devel/framework/src/base/SystemBase.C#L377

  4. We have to do several EquationSystems::reinit() calls because of ghosted elements... we have to do one for adaptivity... and then another one after we re-adjust the ghosted elements (to get the vector ghosting right). You can see that here: https://github.com/idaholab/moose/blob/devel/framework/src/base/FEProblem.C#L3761 (Notice that above that is an _eq.reinit(), which is the original reinit() call... and then the geometric search system might find more ghosted elements, so we need to reinit() again to deal with them.)

There is even more stuff in MOOSE to try to deal with "extra ghosted elems" but I think I've made my point: we need a unified way of dealing with this issue at the libMesh level.
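
For reference, here is a rough usage sketch of the add_extra_ghost_elem() workaround from item 2; the sideset-based selection criterion and the flag_contact_elements name are just illustrative, not MOOSE or libMesh code:

```cpp
#include "libmesh/boundary_info.h"
#include "libmesh/distributed_mesh.h"
#include "libmesh/elem.h"

using namespace libMesh;

// Flag every active element touching a chosen sideset so that
// delete_remote_elements() never removes it from this processor.
void flag_contact_elements(DistributedMesh & mesh, boundary_id_type contact_bc)
{
  for (auto & elem : mesh.active_element_ptr_range())
    for (auto side : elem->side_index_range())
      if (mesh.get_boundary_info().has_boundary_id(elem, side, contact_bc))
        mesh.add_extra_ghost_elem(elem);
}
```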

In addition to the MOOSE stuff I have a new application where I want more than one layer of ghosted "halo" elements. I would like to be able to specify an arbitrary depth... and even if libMesh couldn't do that automatically... if I could run through the mesh and do that search myself and flag all the extra elements that need to be ghosted that would be a godsend.

Here's what I would like: one single place where each processor can register elements that need to be ghosted locally... and then everything else is automatically taken care of... they won't be removed by delete_remote_elems() and their dofs will automatically get added to the send list when vectors are reinitialized etc...

We need to get away from the idea that the only ghosting we want to do is a single layer of "bordering" elements. The idea of a "ghosted element" needs to be a "first class" idea in libMesh...

friedmud commented 8 years ago

Note: some discussion on this is happening over in a MOOSE issue: https://github.com/idaholab/moose/issues/7355#issuecomment-233452265

roystgnr commented 8 years ago

Getting some of our in-person discussion and my subsequent thoughts in writing:

We can think of three levels of "element dependencies". An element K1 has a coupling dependency on K2 if the dofs on K1 might (modulo the coupling matrix) need sparsity pattern entries for dofs on K2. An element K1 has an algebraic dependency on K2 if a processor which owns K1 might need to examine the solution dof values on K2. An element K1 has a geometric dependency on K2 if a processor which owns K1 might need to examine the geometry of K2. For any element K, we could call the set of coupling-ghosted ("coupled") elements C(K), the set of solution-ghosted ("evaluable") elements E(K), and the set of geometry-ghosted ("ghosted") elements G(K).

It should be safe to assume that, for any element K, membership in C(K) implies membership in E(K), which implies membership in G(K). These will be one-way implications in some problems and equality relations in others, as discussed in https://github.com/idaholab/moose/issues/7355.

We can think of G and E as operators on sets of elements in the obvious way: G({K}) is the union of G(Ki) over all Ki in {K}. For a partition where a processor has a set of local elements {L}, we need the DistributedMesh to retain copies of all elements in G({L}), we need the DofMap send_list to include all non-local dof indices with support on elements in E({L}), and we need the DofMap sparsity pattern construction to include pairs of dof indices (as limited by the CouplingMatrix) on (Li, Cj) for each Li in {L} and Cj in C(Li).
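
Restated in set notation (just a paraphrase of the definitions above):

```latex
C(K) \subseteq E(K) \subseteq G(K),
\qquad
G(\{L\}) = \bigcup_{L_i \in \{L\}} G(L_i)
```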

In the long run, it may make sense for E to be a per-variable definition - e.g. if we have two boundaries which are periodic in only one variable, we will never need to evaluate the other variables.

We may want the user to omit from their implementations relations which we already have enough information to understand implicitly. For instance, K is obviously in C(K), so the user should never bother telling us so. We may have a PeriodicBoundary, a hanging node constraint equation, or a user-defined constraint equation which creates a dependency between two elements; if so then we don't need the user to also tell us about that relation. (We do probably want PeriodicBoundary to use the new API for G internally, to fix #977)

We should allow multiple functors to be set simultaneously, both for the user's convenience and to handle cases like the above where library code sets some relations but the user may want to set others. When we actually evaluate e.g. G(K) we will evaluate G1(K) union G2(K) union...
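
A minimal sketch of that union rule; the element-id type and functor signature here are hypothetical placeholders, not a proposed libMesh interface:

```cpp
#include <functional>
#include <set>
#include <vector>

// Each registered functor maps an element id to the set of element ids it
// needs ghosted; the effective relation is the union over all functors:
// G(K) = G1(K) union G2(K) union ...
using ElemId = unsigned long;
using GhostFunctor = std::function<std::set<ElemId>(ElemId)>;

std::set<ElemId> evaluate_G(const std::vector<GhostFunctor> & functors, ElemId k)
{
  std::set<ElemId> result;
  for (const auto & g : functors)
    {
      const std::set<ElemId> gk = g(k);
      result.insert(gk.begin(), gk.end());
    }
  return result;
}
```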

C, E, and G are all likely to change throughout the course of a simulation - e.g. as the mesh moves in a contact problem, some pairs of elements will approach each other and other pairs of elements will retreat from each other. Evaluating these relations may thus require a global communication operation in many applications. We need to support this, but hopefully without forcing a global communication operation in applications where the relations either don't change or only change in obvious ways with AMR.

Evaluating C on a single element is going to be necessary in some cases (computing a sparsity pattern), but any of these relations may be much more efficient to evaluate when the evaluation is done in bulk. Because the library can't know a priori how to do the evaluations efficiently, we should require the user to do caching if/when appropriate.

The user is going to have to be able to cache something, so we should try to query something for which the cache stays valid as long as possible, so unique_id would be the obvious thing to query. However, we don't currently have the ability for O(1) element lookup by unique_id, unique_id is currently an optional feature, and as per #977 we do have a non-optional feature whose correctness on DistributedMesh is going to depend on this API. On the other hand, we do already have one capability, restarts on slit and/or overset meshes, which depends on unique_id; adding another capability like PeriodicBoundary-on-DistributedMesh which has such a dependency wouldn't be the end of the world, especially since the latter case could be easily tested for at runtime; throwing an error at simulation start is much nicer than bursting into tears when writing a restart file.

roystgnr commented 8 years ago

So, places where this API would need to be called:

MeshCommunication::delete_remote_elements would need to query G({L})

DofMap::add_neighbors_to_send_list would need to query E({L})

MeshCommunication::redistribute would, when processor Pi is sending a set {R} of elements to be redistributed to processor Pj, also want to send the associated ghost elements G({R}). In simulations where G is constant this will be enough to get DistributedMesh partitioning correct. In simulations where G can change we probably want to punt to the application to pick up any remaining elements which now need to be ghosted on Pj but which weren't previously semilocal on either Pi or Pj. I can't yet think of any way to do that in the library which wouldn't require significant all-to-all communication, whereas the application will have some a priori knowledge of what restrictions it can assume about its new ghosting requirements.

It would be really nice if MeshCommunication::gather_neighboring_elements would really be gather_ghosted_elements, but the gather_neighboring_elements precondition is so weak (remote neighbor links may still be NULL!) that we couldn't safely run a gather_ghosted_elements routine until after gather_neighboring_elements was done, so if we ever implement gather_ghosted_elements it should be a separate new routine.

SparsityPattern::Build::operator() would need to query C({L})

I think that's it?

Pinging @pbauman, because I just realized that he might be interested in this for his Fluid-Structure-Interaction problems, or at least might be able to point out cases where I've overlooked something that comes up in FSI.

roystgnr commented 8 years ago

Things we could refactor and/or fix using this API:

Once PeriodicBoundary can supply a functor for G, we can start using it safely on distributed meshes. However, with the design above, unless we can implement an efficient gather_ghosted_elements, we would have to have some gather_periodic_elements operation called when periodic boundaries are first added to an already-distributed mesh.

We can start any mesh with a default "GhostPointNeighbors" functor for G to retain the default behavior in MeshCommunication::redistribute(), but which would allow users to remove that functor if they didn't want to bother ghosting any elements which they don't need to evaluate.

We can start any system with a default EvaluateSideNeighbors functor for E to retain the standard behavior in DofMap::add_neighbors_to_send_list. It would be easy to make this recursive, though, so users could get multiple layers of ghosted elements by removing the default EvaluateSideNeighbors(1) functor and attaching a new EvaluateSideNeighbors(N) functor in its place.
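
A rough sketch of the N-layer expansion such an EvaluateSideNeighbors(N)-style functor would perform; the class name is the hypothetical one from this discussion and the traversal below is only illustrative:

```cpp
#include <set>

#include "libmesh/elem.h"

using namespace libMesh;

// Collect all elements within `levels` face-neighbor hops of `elem` via a
// breadth-first expansion; an EvaluateSideNeighbors(N) functor would return
// roughly this set for each local element.
std::set<const Elem *> side_neighbors_up_to(const Elem * elem, unsigned int levels)
{
  std::set<const Elem *> visited = {elem};
  std::set<const Elem *> frontier = {elem};
  for (unsigned int l = 0; l != levels; ++l)
    {
      std::set<const Elem *> next;
      for (const Elem * e : frontier)
        for (auto s : e->side_index_range())
          if (const Elem * neigh = e->neighbor_ptr(s))
            if (visited.insert(neigh).second)
              next.insert(neigh);
      frontier.swap(next);
    }
  return visited;
}
```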

For C the default functor list would be empty (i.e. implicitly C(Ki) == {Ki}) except in the use_coupled_neighbor_dofs()==true case, in which case C would be EvaluateSideNeighbors(1). In either case the user could override to enable wider stencils.

roystgnr commented 8 years ago

So what should be the API?

IMHO the only version of each functor should take a range of elements. We're going to reuse the C functors when evaluating E and G, so even though we could define a version of C which takes a single element, we shouldn't; we should just call it with a one-element range when necessary.

In the range case, we expect to do lots of duplicate insertions. std::unordered_set is probably what we want to return our results here. We could cut down on memory usage by allowing users to provide a predicate and ignoring results which match the predicate - e.g. when evaluating G({L}), a predicate of pid<Elem> could be used to ignore anything already in {L}. That would mean a virtual function call per attempted-insertion, though, so I doubt we'd save any CPU, so the added complication to save temporary memory usage isn't worth it.

The big question is how to handle variable-dependency. Users may only care about a subset of variables in distant evaluable elements, so we could imagine defining E_v(K) for each variable number v, in which case E(K) under our previous definition is the union of E_v(K) forall v, and this is what would be included in G(K) when deciding what to ghost, but our send_list would only include the subset of variables we need, so communication and memory use would be much reduced. Alternatively, we could define E'(K) to be a set of ordered pairs of elements and variable-number-sets, from which a consistent E(K) would be derived by ignoring the variable-number-sets; this is equivalent mathematically but implies very different APIs.

For C(K), there are similar issues: e.g. we may want some discontinuous variables to couple only within their own element but other discontinuous variables to couple in a DG/FVM way. This could extend to more than one variable index: i.e. a dof for variable v1 in element K1 would depend on a dof for variable v2 in element K2 iff K2 is in C_v1_v2(K1). This would induce a consistent E_v(K) = union of C_w_v(K) forall variable indices w. Again, the alternative here is for C'(K) to be a set of ordered pairs of elements and variable-number-pair-sets.

I think that what we want is to have the API require users to supply E' and C', and return variable-set-junk by pointer. That way, in the common case where you care about all variables and couple to all variables, all the user needs to supply for variable-number-sets and variable-number-pair-sets is a NULL (which, as in other libMesh APIs, will be interpreted and documented to mean the full set). In the common case where you want coupling to ghosted elements to match coupling within elements, you can reuse your DofMap::_coupling_matrix. Even in the less common cases, the user can store common variable-number-sets and variable-number-pair-sets within their functor object itself, and if our API has the user return an unordered_map with keys==elements and values==pointers-to-variable-set-junk, then setting up a few of those matrices and setting lots of those pointers is cheap. We could even use a CouplingMatrix* for the value type. Implementations of G or variable-independent implementations of E' and C' could leave the map values set to NULL; variable-dependent implementations of E' could store consistent rows in O(N), and implementations of C' could use the full matrix.
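
Spelled out as an interface sketch - the name and signature here are just one way to realize the paragraph above, with a NULL CouplingMatrix* meaning "all variables couple":

```cpp
#include <unordered_map>

#include "libmesh/coupling_matrix.h"
#include "libmesh/mesh_base.h"

using namespace libMesh;

// Sketch: given a range of elements, report which other elements are needed
// and, per element, which variables couple.  A NULL CouplingMatrix* value
// would be interpreted as "all variables" / "full coupling".
struct GhostingFunctorSketch
{
  typedef std::unordered_map<const Elem *, const CouplingMatrix *> map_type;

  virtual void operator()(const MeshBase::const_element_iterator & range_begin,
                          const MeshBase::const_element_iterator & range_end,
                          processor_id_type p,
                          map_type & coupled_elements) = 0;

  virtual ~GhostingFunctorSketch() {}
};
```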

roystgnr commented 8 years ago

Where do we keep these functors?

We want C functors to be reusable when code needs E, and those both can be stored in a DofMap and handed to its SparsityPattern::Build, so that's fine. But we also want C and E functors to be reusable when code needs G. Mesh code needs G. The Mesh doesn't currently know the DofMap exists, and in fact it might be associated with several DofMaps, and making it know about them is awful modularity, so we can't have Mesh request functors stored in DofMap. Do we keep all these functors in the Mesh, and DofMap queries them from there? We'd need three different containers to keep them sorted, and calling Mesh::add_coupling_functor() sure feels like awful modularity; Mesh shouldn't know about sparsity.

We could keep them in the DofMap and have DofMap also add them to a single ghosting_functor container in Mesh. If DofMap adds clones, then we can't easily handle deleting functor clones from Mesh when the corresponding originals are deleted from DofMap. If DofMap adds pointers, that works, but API consistency would also require users to add pointers, which would require users to manage functor memory, which is less nifty than the clone-based strategy pattern we use for functors elsewhere, but which seems to be the least awful alternative.
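
A minimal sketch of that ownership scheme (all names hypothetical): the user keeps the functor object alive, and both containers store the same raw pointer, so removing a functor from the DofMap's list can also remove the same pointer from the Mesh's list.

```cpp
#include <vector>

struct GhostingFunctor;  // the functor interface sketched above

// Hypothetical containers only, not the actual libMesh classes.
struct MeshSketch
{
  std::vector<GhostingFunctor *> ghosting_functors;  // single shared container
};

struct DofMapSketch
{
  MeshSketch & mesh;
  std::vector<GhostingFunctor *> coupling_functors;  // for sparsity/send_list

  void add_coupling_functor(GhostingFunctor & f)
  {
    coupling_functors.push_back(&f);       // DofMap keeps a pointer...
    mesh.ghosting_functors.push_back(&f);  // ...and shares it with the Mesh
  }
};
```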

roystgnr commented 8 years ago

What should these functors return to identify elements?

In all the use cases above, the functor is not expected or needed to return currently-remote elements, so even Elem* is an option. For caching, the best value to cache is clearly unique_id() - then the cache will remain valid through repartitioning and will need only relatively minor updates after mesh refinement or movement.

When doing range evaluations we can cheaply do another loop over all elements, so testing unique_id() would be an option in the DistributedMesh and send_list code. But that would break our refactoring opportunities (each would become dependent on --enable-unique_id, which is acceptable for PeriodicBoundary on a distributed mesh but is not acceptable for, say, DG or KellyErrorEstimator). We'd also need to do lookups when building the sparsity pattern, where we don't want our existing loop over all elements to become two nested loops over all elements. So if we have to do unique_id->Elem* lookups of cached results anyway, we might as well leave that to the layer doing the caching.

What we'd end up using in the SparsityPattern::Build code is Elem*, so we might as well have the functors return Elem*, rather than return elem->id() (which might have already required a conversion from cached unique_id) and convert that to Elem* (which is O(log(N/N_proc)) on DistributedMesh).

That said, I ought to put the unique_id-based caching code into libMesh, while we're at it. I'm not sure if anyone but MOOSE will use it for a while, but the data structure ought to be unordered_multimap<unique_id_type, unique_id_type>, and there's nothing MOOSE-specific about that. That would encourage all users of extended ghosted elements to decide between "I can figure out my ghosting on the fly" vs "I want to figure out my ghosting ahead of time" based on CPU/memory/ease tradeoffs, without putting our thumb on the scale by making caching less easy than it could be.

roystgnr commented 8 years ago

I think those six chapters are it for the design, but I'm already realizing errata.

Cache data structure should be unordered_multimap<unique_id_type, std::pair<unique_id_type, CouplingMatrix*> > - it ought to be usable in variable-dependent ghosting cases too.
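
As a declaration, that cache would look roughly like the following; ownership of the CouplingMatrix pointers is left unspecified here:

```cpp
#include <unordered_map>
#include <utility>

#include "libmesh/coupling_matrix.h"
#include "libmesh/id_types.h"

using namespace libMesh;

// For each local element (keyed by unique_id), store the unique_ids of the
// elements it needs ghosted, plus an optional per-pair CouplingMatrix
// (NULL meaning all variables couple).
typedef std::unordered_multimap<unique_id_type,
                                std::pair<unique_id_type, CouplingMatrix *>>
  ghosting_cache_type;
```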

roystgnr commented 8 years ago

And a complication for the cache - we want it distributed, not serialized. We need each processor to store those entries whose key is a unique_id which belongs to an element local to that processor. Since we've already figured out that a Mesh will hold pointers to all relevant ghosting functors, we could call a GhostingFunctor::redistribute() method from within MeshCommunication::redistribute() and call a GhostingFunctor::delete_remote_elements() method from within MeshCommunication::delete_remote_elements(), and thereby give caching functors an opportunity to hand off their caches.

roystgnr commented 8 years ago

And a detail I've left unsaid: anywhere I've been discussing an "element", what I really mean is an "active element". We'll continue to ghost inactive elements iff they have a semilocal descendant.

roystgnr commented 8 years ago

IMHO this solves the question of how we specify and handle the various types of ghosted elements, but it leaves us with problems when we want to query whether an element is ghosted in some way.

1: API breakage: For geometrically ghosted elements, anything which has access to both an Elem and its Mesh can use the Mesh's GhostingFunctor pointers. This means that we can have a MeshBase return iterators over geometrically semilocal elements, but the current Elem::is_semilocal() API is unsupportable.

2: Inefficiency: To determine an element's ghost status in O(1) we might try finding that element's would-be-ghosts, seeing if any of those are local, and assuming that the ghost relationship is symmetric. That in general will not be a well-founded assumption; imagine stencils which only query data "upstream", or boundary layer calculations which only query data away from the wall. So we can't even support an O(1) Elem::is_semilocal(const DofMap&, const Mesh&) type API, and we can't even generate an efficient semilocal iterator range without, say, batch-computing the range, and caching the result with a weak_ptr in the Mesh plus a shared_ptr in every iterator.

3: Weird API: We probably care more often about whether an element is evaluable than whether the element is geometrically ghosted, and that requires information which we're only putting in DofMap, and that means that any coupling or algebraic queries or iterator range APIs are going to need both a DofMap and Mesh. I guess this technically isn't a problem but it's not a typical libMesh design.

4: Design questions: Do we need variable-dependent queries and/or iterator ranges? How would we want those to work? One query to return true iff all specified variables are evaluable, another to return true iff some specified variables are evaluable? Do we need queries and/or iterator ranges for coupled ghosting specifically, or are algebraic ghosting and geometric ghosting good enough?

pbauman commented 8 years ago

Pinging @pbauman, because I just realized that he might be interested in this for his Fluid-Structure-Interaction problems, or at least might be able to point out cases where I've overlooked something that comes up in FSI.

Nothing sticks out. I, too, haven't been able to come up with a way to avoid the all-to-all (even in the context of my FSI problem) in the DistributedMesh case; in the SerialMesh case, you can at least loop over all (subdomain-restricted) elements and then cache only the ones that belong to the current processor. We ought to be able to pack everything up so you only need to do the all-to-all once, but still.

Do we need queries and/or iterator ranges for coupled ghosting specifically,

Yes. For example, I may have overlapping elements split across processors, but I need to access them and have their dofs included in the sparsity pattern. (I'm working on this code right now for my specific use case.)

roystgnr commented 8 years ago

With this new system you wouldn't need to augment the sparsity pattern - your provided C() would return the set of elements that you overlap, and the DofMap would construct the sparsity entries accordingly.

dknez commented 8 years ago

Just to be clear: We could eliminate all the "augment sparsity pattern" stuff in libMesh (e.g. the usage in misc_ex9)? That would be great, since it can be tricky to correctly augment the sparsity pattern.

roystgnr commented 8 years ago

Oh, I definitely don't plan to eliminate the augment interface; surely there are more things in heaven and earth than are dreamt of in my new API. But it looks like about half the code in augment_sparsity_on_interface.C could be eliminated if you're just required to specify coupling between elements rather than between dofs.

And in that particular example, where the mesh isn't moving and where we're reading from a serialized file, the new API should also allow us to start using ParallelMesh again with no new code.

dknez commented 8 years ago

Sounds great!

roystgnr commented 8 years ago

we can't even generate an efficient semilocal iterator range without, say, batch-computing the range, and caching the result with a weak_ptr in the Mesh plus a shared_ptr in every iterator.

Actually, forget the weak_ptr/shared_ptr stuff. That would require C++11 or boost, and it's not even the most efficient way to do things, because it would lead to duplicate precomputations when someone uses and then destructs those range iterators more than once without changing the mesh in between. What we should do is just store, in the Mesh for geometrically ghosted ranges or in the DofMap for algebraically ghosted ranges, a vector of elements in the range. The vector gets precomputed whenever it's uninitialized and anyone asks for a range, then it gets cleared whenever we prepare_for_use() on the Mesh or reinit() on the DofMap.
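
A minimal sketch of that lazily built range cache (names hypothetical; the compute callback stands in for the batch evaluation of the ghosting functors):

```cpp
#include <vector>

#include "libmesh/elem.h"

using namespace libMesh;

// Hypothetical cache of a ghosted-element range: built on first request,
// cleared whenever the Mesh or DofMap changes (prepare_for_use()/reinit()).
struct GhostedRangeCache
{
  std::vector<const Elem *> elements;
  bool valid = false;

  void invalidate() { elements.clear(); valid = false; }

  template <typename ComputeRange>
  const std::vector<const Elem *> & get(ComputeRange && compute)
  {
    if (!valid)
      {
        elements = compute();  // batch-compute once per mesh/DofMap state
        valid = true;
      }
    return elements;
  }
};
```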

roystgnr commented 8 years ago

I'm not going to implement this now, but I do have an idea for doing a general gather_ghosted_elements without O(N_proc^2) communication. We can come up with an "a priori" partitioning of space, something as simple as taking the bounding box and dividing it into N_proc chunks; then each processor whose local elements overlap chunk p communicates that fact to processor p, and each processor whose ghosting requirements might include chunk p requests the list of potentially-candidate processors from processor p. Do the same for boundary ids and subdomain ids.

This only works if you can confidently say "all my ghost elements will either be in a certain area or will belong to a certain subdomain or boundary", though. An API like that wouldn't break for users who couldn't meet those requirements (they could simply say "my 'certain area' is the entire bounding box") but it would revert to all-to-all.
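
A sketch of the kind of a priori chunk assignment that implies - purely illustrative, using a 1-D slab decomposition of the global bounding box along x:

```cpp
#include <algorithm>

#include "libmesh/bounding_box.h"
#include "libmesh/id_types.h"
#include "libmesh/point.h"

using namespace libMesh;

// Map a point to the index of the a priori chunk it falls in.  Each processor
// would tell chunk-owner p which chunks its local elements touch, and would
// ask p for the candidate processors its ghosting requirements might involve.
processor_id_type chunk_of(const Point & p, const BoundingBox & box,
                           processor_id_type n_procs)
{
  const Real width = (box.max()(0) - box.min()(0)) / n_procs;
  const long idx = static_cast<long>((p(0) - box.min()(0)) / width);
  return static_cast<processor_id_type>
    (std::max(0L, std::min(static_cast<long>(n_procs) - 1, idx)));
}
```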

permcody commented 8 years ago

I think the algorithms should exist even if they are slow. We'll only call them when we need them and we can maintain the caches somewhere.

roystgnr commented 8 years ago

Would it be reasonable to have Elem::is_semilocal(const Mesh&) and Elem::is_evaluable(const DofMap&, const Mesh&)? I'm looking at keeping cached ranges in the Mesh and DofMap, and if we can refer to those then we can at least get O(log(N_ghosted)) rather than O(N_local) results.

roystgnr commented 8 years ago

You'd still want to consider keeping your own caches in MOOSE, though, so you could use unordered_set and get O(1) queries.

permcody commented 8 years ago

For my use cases, absolutely.

permcody commented 8 years ago

1: API breakage:

No problem here

2: Inefficiency:

This is fine with me too.

3: Weird API:

I don't think it's all that weird. Evaluating a variable requires dofs and an equation system. It's already beyond the geometry at that point.

4: Design questions: Do we need variable-dependent queries and/or iterator ranges? How would we want those to work? One query to return true iff all specified variables are evaluable, another to return true iff some specified variables are evaluable? Do we need queries and/or iterator ranges for coupled ghosting specifically, or are algebraic ghosting and geometric ghosting good enough?

If we are going to only "ghost" some variables and not others then we certainly need to be able to query that information. I'm not super picky about the interface as long as it works for each variable. The bottom line is we need the ability to ask about whether we can access a DOF before we actually access the DOF.

roystgnr commented 6 years ago

Closing this; IMHO it's resolved by the ghosting functor APIs.

There's definitely still work we can do in this vein:

But if those become priorities we can make new tickets.

dschwen commented 6 years ago

@roystgnr can I get a periodic boundary to be ghosted now?

https://github.com/libMesh/libmesh/blob/master/include/base/ghosting_functor.h#L75-L76

mentions a "PeriodicGhostingFunctor", but that is nowhere to be found in libmesh. The default ghosting does not seem to add ghost elements across periodic boundaries.

Edit: Let me double check. The ghosted neighbors might just not be where I expect them in space...

Edit2: Nope. I'm running a MOOSE problem with a 2x2x2 GridPartitioner, and the ghosted elements (iterated over by mesh.ghost_elements_begin()/..end()) only appear at the internal processor boundaries, not at the outer periodic side sets of the domain. What am I missing?

roystgnr commented 6 years ago

Are you adding the PeriodicBoundaries before the mesh gets distributed? Doing that properly required modifying two of our own examples, and last I recall the same problem was an issue in MOOSE.

We were going to postpone mesh distribution as a workaround: disable deleting remote elements at mesh creation, then re-enable it and perform the deletion as a later action. But I don't recall anyone implementing that. I can do it myself, but there'll be a bit of overhead while I reacquaint myself with the action system.

dschwen commented 6 years ago

The ghost element set I iterate over looks the same whether I use replicated or distributed mesh.

dschwen commented 6 years ago

I'm still puzzled by the doc comment I linked to above "although a PeriodicBoundary use will cause the library to create its own PeriodicGhostingFunctor for internal use". I cannot find such a PeriodicGhostingFunctor (perhaps the comment is outdated and that functionality is implemented elsewhere).

roystgnr commented 6 years ago

Ah, I get it now. The ghost_elements_foo() problem is a fault of an atavistic method and iterator using it, not of the actual ghosting. They basically loop over elements-that-would-have-been-ghosted before we had any control over ghosting, and I need to deprecate them in the comments and libmesh_deprecated() them in the code.

If you want to iterate over all geometrically ghosted elements, the way to do that is to iterate over all elements - "geometrically ghosted" just means "we don't own it but we can't delete it".

If you want to iterate over all algebraically ghosted elements (where "we don't own it but we have data for all its dofs") then try mesh.evaluable_elements_begin()/end().
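
For reference, a usage sketch of that loop, assuming the evaluable-element iterators take the DofMap whose ghosting defines "evaluable":

```cpp
#include "libmesh/dof_map.h"
#include "libmesh/elem.h"
#include "libmesh/mesh_base.h"

using namespace libMesh;

// Visit every element whose dofs are all available locally, whether it is
// owned by this processor or algebraically ghosted.
void visit_evaluable(const MeshBase & mesh, const DofMap & dof_map)
{
  for (auto it = mesh.evaluable_elements_begin(dof_map),
            end = mesh.evaluable_elements_end(dof_map);
       it != end; ++it)
    {
      const Elem * elem = *it;
      // ... examine elem's local dof values here ...
      libmesh_ignore(elem);
    }
}
```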

We don't currently have iterators over all coupled elements (where "we don't own it but we expect Jacobian terms connecting its dofs to dofs on an element we do own"), and I'm afraid there wouldn't be an efficient way to add them, so I'm hoping you don't need them.

That doc is an atavism; I ended up adding periodic boundary support to DefaultCoupling. I'll fix it, thanks for catching it and sorry about the mistake.

permcody commented 6 years ago

Just to complete the discussion here: we have the ability in MOOSE to postpone deletion of remote elements until after distributing. It's built into the "RelationshipManager" design and works properly with pre-splitting too. Sounds like we won't need to do anything special here, though.


dschwen commented 6 years ago

Ok, thanks. So my scenario would be iterating over all elements. What I conceptually want is to iterate over all point neighbor elements of the current processor, including point neighbors across PBCs. In a large-scale parallel run, all elements could be a huge set, and re-performing the point neighbor test for each element sounds expensive and redundant (because the ghosting functor should have already performed that job). I still seem to be missing a key piece here.

roystgnr commented 6 years ago

Well, bear in mind that we don't actually ghost point neighbors anymore by default, only when directed to by the user. IIRC I added PointNeighborCoupling at Derek's request so I'd assume it's got a snazzy MOOSE interface by now but it's not the default in libMesh and I don't think he was going to make it the default in MOOSE either.

But if you are setting a PointNeighborCoupling object as an algebraic ghosting functor, then looping over evaluable elements sounds like it's probably what you want.

EXCEPT:

It's actually possible to have an evaluable element unintentionally. Imagine a Cartesian mesh with 5x3 quads:

PPEPP
LPPPL
LLLLL

Where L indicates a local element, and P is a point neighbor. Element E isn't a point neighbor of a local element, but if you're only using C1 first-order basis functions for every variable, E is an evaluable element, because all of its DoFs are shared with algebraically ghosted elements. If E is geometrically ghosted (whether because you're using a DistributedMesh with a wider geometric ghosting set or you're using a ReplicatedMesh) then E will show up whenever you try to loop over evaluable elements, despite not being a point neighbor. Depending on what you're looping to accomplish, this might be a non-issue or it might force you to construct point neighbor sets on every evaluable element anyway.

dschwen commented 6 years ago

The Evaluable multi predicate adds

    this->_predicates.push_back(new pid<T>(my_pid));

Which is exactly what I do not want. It seems there is no out-of-the-box solution. This reply is just to record that (correct me if I'm wrong); this should not turn into a personal support thread. I'll open an issue if I work this out.

roystgnr commented 6 years ago

That... looks like a major, ridiculous bug in Evaluable. I'm about to be AFK but I'll look at it again tonight.

dschwen commented 6 years ago

Ok, if I change this to

this->_predicates.push_back(new not_pid<T>(my_pid));

I almost get the elements I want (except I only get face neighbors, even if I call params.registerRelationshipManagers("ElementPointNeighbors"); in my object) :-/

roystgnr commented 6 years ago

In addition to the change you've made (which isn't appropriate for Evaluable but which is exactly the behavior I should put into ghost_elements_* rather than just deprecating that, thank you), could you try:

Change ElementPointNeighbors.h to make it inherit from AlgebraicRelationshipManager instead of GeometricRelationshipManager, and change ElementPointNeighbors.C to use attachAlgebraicFunctorHelper instead of attachGeometricFunctorHelper?

That's not a good permanent change either, since there may be other users who really do want only geometrically and not algebraically ghosted point neighbors; but if you're in the latter camp then MOOSE ought to get a new RelationshipManager to handle that case, and we can test the idea quickly by tweaking the existing one.

permcody commented 6 years ago

I'll be happy to help when I get back with the RelationshipManager side of things. I'm on travel through Thursday. Roy's quick hack should work, but we'll want to make a Geometric and Algebraic version of this for the reasons outlined.


dschwen commented 6 years ago

(what I actually did was add a new iterator not_local_evaluable_elements)

I made the suggested change to ElementPointNeighbors and it results in a segfault and in dbg:

Assertion `_problem' failed
Problem pointer is NULL
at /Users/schwd/Programs/moose/framework/src/relationshipmanagers/AlgebraicRelationshipManager.C, line 35

No surprise, as it gets initialized to nullptr. Changing the initialization to

_problem(parameters.getCheckedPointerParam<FEProblemBase *>("_fe_problem_base"))

leads to a different segfault because the nonlinear system might not be set up yet (?).

On a different note: if I switch to a replicated mesh, the RelationshipManager does not get added at all (the point coupling is only set if (_app.isSplitMesh() || _mesh.isDistributedMesh())). In those cases I don't get point neighbors either with the above modified ghost element iterator :-(

roystgnr commented 6 years ago

That final behavior is a bug too. Adding geometric ghosting functors to a ReplicatedMesh may be meaningless (which IMHO means "add them anyway" just to keep the code as simple as possible, since there's no real performance penalty, but whatever), but algebraic ghosting functors or coupling functors should certainly still be added to a ReplicatedMesh too.

dschwen commented 6 years ago

The "bug" is not in MOOSE that way. It is a holdover form ElementPointNeighbors being a GeometricRM. I can simply remove that if in the modified version. However this big issue is still that the _problem pointer is not being fed a meaningful value.

My changes are at https://github.com/dschwen/moose/commit/900f9e105

MOOSE passes this in as parameters (the 0xbebebebebebebebe pointers are obviously bogus):

Name     Type    Value
---------------------
 _aux_sys        AuxiliarySystem*        0xbebebebebebebebe
 _eigen_problem  EigenProblem*   0xbebebebebebebebe
 _executioner    Transient*      0xbebebebebebebebe
 _fe_problem     FEProblem*      0xbebebebebebebebe
 _fe_problem_base        FEProblemBase*  0xbebebebebebebebe
 _moose_app      MooseApp*       0x61e000000c98
 _moose_base     std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >   RelationshipManager
 _nl_sys         SystemBase*     0xbebebebebebebebe
 _object_name    std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >   green1_ElementPointNeighbors_rm
 _subproblem     SubProblem*     0xbebebebebebebebe
 _sys    SystemBase*     0xbebebebebebebebe
 _tid    unsigned int    0
 attach_geometric_early  bool    1
 control_tags    std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >   UserObjects
 enable  bool    1
 mesh    MooseMesh*      0x61b000007e98
 rm_type         Moose::RelationshipManagerType  ERROR

permcody commented 6 years ago

I'm happy to take a look at the MOOSE side when I get back.
