lindsayad opened 11 months ago
@roystgnr I kind of shot from the hip on this, so please edit anything that is technically inaccurate!
This is right on the money for the description of the problem, and I think we're moving in the right direction for the solution; the only thing I'd note is that @GiudGiud pointed out the possibility of using a single subdomain for every junction `NodeElem`. Using that idea, we wouldn't run into whatever high-subdomain-count performance issues MOOSE is having, plus we'd only have to define a single variable to get independent values at every junction. That sounds ideal, assuming every junction needs an identical (or at least identical-cardinality) set of variables.
The one hiccup I fear (being completely ignorant of the application code and physics here) is that, depending on how complicated the junction contributions to the residual are, coding the kernels might be more complicated with a `NodeElem` instead of a `SCALAR`. E.g. if you have 3 `Edge` elements meeting at a single node, right now it's easy to write terms on each incorporating both the `Edge` `LAGRANGE` values and the global `SCALAR` values, but if you move to a `NodeElem`, suddenly only that `NodeElem` will have handy access to the junction variables, and it won't have access to all the edge variables; it'll only have access to their shared value at the shared node. That can't be enough, can it? We surely need at least the incoming gradients along each edge?
There is an easy fix (in the `Edge` kernel, call `dof_number()` on the junction `Node`, pull out the junction variable's indices from the `Node`, and then their values from the solution `NumericVector` by hand), but it's an ugly fix, definitely something that should at least be well encapsulated rather than ever typed out twice.
I believe as it stands now the junctions are geometrically disconnected from the channels that they conceptually join but I could very well be wrong about that. @joshuahansel @licharlot ?
If there's a geometric disconnect then we've got to use a hack anyway, so no sweat about the hack maybe being slightly uglier ... but something else has occurred to me: we also need to hand something to `DofMap::add_coupling_functor()` to ensure that the `NodeElem` DoFs are coupled to everything they need to be coupled to. Even if we evaluate a residual on a channel element by manually dragging a value from a `NodeElem`, we're not going to have an automatically added matrix sparsity entry for everything in the Jacobian of that. If the `NodeElem` shared a node with the incoming channels then we'd get sparsity entries for derivatives between all the DoFs at the junction node, but there'd be no entry for any connection between the `NodeElem`-specific DoFs and the channel DoFs at edge nodes away from the junction unless we add it ... and if there's a geometric disconnect then not only do we need a coupling functor for the distant connections, we need one just to make sure that ghosted vectors and distributed meshes have everything they need.
Yes, we'll need to make sure to write the proper `GhostingFunctor`/`RelationshipManager` for this.
> the junctions are geometrically disconnected from the channels that they conceptually join

Right, they are not necessarily at the end of a channel, for instance; they may be anywhere in space.
This wasn't meant to be closed yet.
Reason
Scalar variables as libMesh considers them are global variables (well, they can be subdomain-restricted) that introduce a dense row and a dense column in the system matrix. Take for instance setting a mean-zero pressure in a cavity problem to remove the nullspace: the scalar Lagrange multiplier variable's residual is a function of every pressure dof in the cavity, and every pressure residual is a function of the Lagrange multiplier dof. libMesh constructs its sparsity pattern anticipating a coupling structure like this, which can be unnecessarily expensive when the actual coupling is not that dense.
Design

Consider an alternative like a Lagrange variable restricted to a `NodeElem` associated with the junction.

Impact

Potentially much faster sparsity build times. See https://github.com/idaholab/moose/discussions/25555 for the impetus of this issue.