idaholab / moose

Multiphysics Object Oriented Simulation Environment
https://www.mooseframework.org

New mortar implementation #13080

Closed: lindsayad closed this issue 5 months ago

lindsayad commented 5 years ago

Reason

There is some doubt as to whether the current mortar formulation is conservative. Moreover, according to @andrsd, the current implementation does not work in parallel, nor is it thread-safe.

Design

The new mortar implementation will be conservative, work in parallel, work with distributed mesh via the new RelationshipManager system, and be capable of on-the-fly mortar mesh generation.

Impact

Allow mortar to be used in applications such as mechanical and thermal contact for BISON.

Road Map

tophmatthews commented 5 years ago

Speaking from ignorance...will this naturally handle cases of gap to contact to pull away? (no contact -> mechanical contact -> no contact)

lindsayad commented 5 years ago

Absolutely. This is handled through the definition of the residual, and as already demonstrated in MechanicalContactConstraint there are many different formulations for handling this, e.g. penalty, augmented Lagrange, etc. In conjunction with mortar, a Lagrange multiplier formulation is most commonly used, and that is the approach I would prefer. The (normal) Lagrange multiplier would represent the normal contact force. I would formulate the Lagrange multiplier residual in the yet-to-be-created MortarMechanicalContactConstraint as a nonlinear complementarity problem, which is a fancy name for applying the Karush-Kuhn-Tucker conditions. The residual would look something like this:

// Residual for the normal contact Lagrange multiplier using the
// Fischer-Burmeister NCP function: the term in parentheses is zero
// if and only if gap >= 0, lambda >= 0, and gap * lambda = 0.
template <ComputeStage compute_stage>
ADResidual
MortarMechanicalContactConstraint<compute_stage>::computeQpResidual()
{
  return _test[_i][_qp] * (_gap_distance[_qp] + _lambda[_qp] -
                           std::sqrt(_gap_distance[_qp] * _gap_distance[_qp] +
                                     _lambda[_qp] * _lambda[_qp]));
}

This residual enforces the constraint that, at each point on the interface, either the gap distance or the contact force is zero, and that both quantities are always >= 0.
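For reference, a sketch of the conditions this residual is meant to enforce, writing g for the gap distance and lambda for the contact force (sign convention as in the snippet above):

$$ g \ge 0, \qquad \lambda \ge 0, \qquad g\,\lambda = 0 $$

The Fischer-Burmeister NCP function collapses these three conditions into the single equation

$$ \phi(g,\lambda) = g + \lambda - \sqrt{g^2 + \lambda^2} = 0, $$

which holds if and only if the KKT conditions above hold.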

lindsayad commented 5 years ago

@andrsd From what I can gather looking at your journal files (best example is here) for the "old" mortar method, it looks like you manually generated what @jwpeterson refers to as mortar segments. That is, you manually laid down a node corresponding to each master and slave node projection along the interface, and then manually laid down edges, etc. The effect of this is that each integral along a mortar segment has contributions from at most one slave and one master element, which is exactly how @jwpeterson's automatic mortar segment generation is designed.

There is a slight difference in that for the "old" mortar method, the number of Lagrange multiplier DOFs is equal to the number of nodes in the "mortar segment mesh", whereas for the "new" method the number of DOFs is equal only to the number of nodes on the "slave interface". Thus the number of LM DOFs for the "old" case will always be larger than for the "new" case. But because we handle integration carefully in the "new" method, I believe we maintain the same integral conservation.
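For concreteness, in both methods the LM weakly enforces interface continuity through (roughly) the mortar condition

$$ \int_{\Gamma_c} \mu_h \left( u_h^{\text{slave}} - u_h^{\text{master}} \right) ds = 0 \qquad \forall\, \mu_h \in M_h, $$

where M_h is the discrete Lagrange multiplier space; the methods differ mainly in how M_h is built (on the mortar segment mesh for the "old" method, directly on the slave interface for the "new" one).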

I ran a convergence study for diffusion with solution continuity at the interface and a body force completing the method of manufactured solutions. The L2 norm of the primal variable error is essentially identical between the old and new mortar methods. See this figure where the x-axis represents log(h) and the y-axis represents log(error):

[Figure: old vs. new mortar method convergence, log(h) vs. log(error)]
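On a log-log plot like this, the observed convergence rate is just the slope of each curve; equivalently, for two successive mesh sizes h_1 > h_2 with errors e_1 and e_2,

$$ \text{rate} \approx \frac{\log(e_1 / e_2)}{\log(h_1 / h_2)}. $$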

Both methods converge the primal variable solution at the optimal rate for this P2-P1 discretization. My conclusion is that mathematically speaking, the old method is perfectly sound!

I guess perhaps the greatest motivation for me to continue forward with the new method is that it's capable of generating the mortar segment meshes on the fly. This will be critical for displaced problems, where the pairing of master and slave elements will change over the course of the simulation, meaning that the mortar segment mesh will have to be dynamically re-generated. @andrsd @jwpeterson any thoughts on this?

jwpeterson commented 5 years ago

@andrsd @jwpeterson any thoughts on this?

I would be curious to see at what rate the flux converged for you. In my testing, the primal variables always converged optimally, but the convergence rate of the flux could vary quite a bit. For a P2-P1 discretization, I think theoretically you should be able to obtain O(h^2) convergence in the flux variable measured in the L2 norm (there may be a section on a priori error estimates in the Tech Report that discusses this a bit). Of course it all depends on how accurate one really needs the flux to be... if it represents a physical quantity then you'd like to be sure it's accurate, but if it's just an artificial variable introduced to enforce continuity, then as long as the primal variables are unaffected you don't really care, I guess.

A related question is whether it's possible for your implementation to use piecewise constants (CONSTANT, MONOMIAL) for the flux. I generally found that P1-P0 was more stable and accurate, but I'm not sure whether that would hold for P2-P0, or whether in that case P2-P1 is superior.

Was there any difference between h_master and h_slave in the test you tried? In my testing I found that this could have a big influence on the flux solution and whether or not any unstable modes were triggered. It also made a fairly big difference whether the slave side was coarser than the master side or vice-versa. I think the former usually ended up being more accurate.

lindsayad commented 5 years ago

I would be curious to see at what rate the flux converged for you.

Well in this test, it appears not to converge at all!

[Figure: old vs. new mortar method convergence]

I guess I don't understand how there is seemingly no feedback from the lack of convergence of the flux solution into the convergence rate of the primal solution, since the Lagrange multiplier values feed into the primal residuals along the interface.

A related question is whether it's possible for your implementation to use piecewise constants (CONST, MONOMIALS) for the flux. I generally found that P1-P0 was more stable and accurate, but I'm not sure if that would hold for P2-P0, or if in that case P2-P1 is superior.

I'm assuming that you used stabilization for those convergence studies? Based on your report and the LBB condition, those discretizations should be fundamentally unstable, right, since h_lambda = h_primal? In my convergence study here I have not used any stabilization.

jwpeterson commented 5 years ago

Well in this test, it appears not to converge at all!

Something's definitely wrong there... maybe the wrong reference value? lambda = -k grad(u).n, so maybe a sign error or something.

I guess I don't understand how there seems to be no feedback from the lack of convergence of the flux solution into the convergence rate of the primal solution since the lagrange multiplier variable values will feed into the primal residuals along the interface.

In this particular case I would guess the flux really is converging and there's just a bug in the computation of the error? In general, the only thing the a priori estimates predict is the error in the various combined norms, so there is wiggle room for the flux variable to converge at a higher rate and we do see this (e.g. O(h^1.5) for constant monomials) in some problems.

I'm assuming that you used stabilization for those convergence studies? Because based on your report and the LBB condition those should be fundamentally unstable right because h_lambda = h_primal? In my convergence study here I have not used any stabilization.

Yes, h_lambda = h_primal always because we aren't introducing a special mesh to discretize the Lagrange multiplier variable, it is discretized directly on the slave side. And technically all the Pn-Pm formulations, for any (m,n), are unstable because of this, although some seem to be worse than others.
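For anyone following along, the stability condition being referenced here is the discrete inf-sup (LBB) condition, which for a mixed formulation like this reads schematically

$$ \inf_{\mu_h \in M_h} \ \sup_{v_h \in V_h} \ \frac{\int_{\Gamma_c} \mu_h \, [v_h] \, ds}{\|\mu_h\|_{M} \, \|v_h\|_{V}} \ \ge \ \beta > 0, $$

where [v_h] is the jump of the primal variable across the interface and beta must be independent of h; discretizing the LM directly on the slave side means this cannot be guaranteed uniformly, which is the source of the (mild) instability discussed above.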

lindsayad commented 5 years ago

Yes, h_lambda = h_primal always because we aren't introducing a special mesh to discretize the Lagrange multiplier variable, it is discretized directly on the slave side. And technically all the Pn-Pm formulations, for any (m,n), are unstable because of this, although some seem to be worse than others.

That's really interesting that the stability condition depends essentially entirely on the mesh.

I'm going to look at what's going on in the error computation...

jwpeterson commented 5 years ago

Another reason to look into adding the stabilization is the iterative solver performance. Under mesh refinement I found that the number of GMRES iterations was much higher in the unstabilized cases and often varied erratically, while with stabilization the iterative solvers performed much more predictably. Again, probably due to the zero diagonal block in the primal equation...

lindsayad commented 5 years ago

Sure enough, I had an error in my parsed expression for the exact lambda solution 🙄 Looks like superconvergence for lambda here. Interestingly, the convergence rate is the same between old and new, but the new method appears to have a lower "C" constant.

[Figure: old vs. new mortar method convergence for lambda]

jwpeterson commented 5 years ago

Cool! These results are in agreement with the theory that predicts O(h^(p+3/2)) for the flux, since p=1 in your case and you see approximately O(h^2.5) convergence. I'm not sure what would cause the smaller error constant for the new method relative to the old; it looks like the error is almost a factor of 10 lower (assuming these are plots of log10(error)), which is better than the other way around 😄 Perhaps the two methods are not exactly equivalent after all.

jwpeterson commented 5 years ago

BTW if you plot the flux you should see that it has node-to-node oscillations and this demonstrates the instability, but the amplitude decays under refinement and it doesn't affect the rate of convergence in L2. If you add stabilization to this case I would still expect a convergence rate of O(h^2.5) but an even better error constant than either of these two methods.

lindsayad commented 5 years ago

Some more plots

P2P0:

[Figure: old vs. new mortar method convergence, P2P0]

P1P1:

[Figure: old vs. new mortar method convergence, P1P1]

P1P0:

[Figure: old vs. new mortar method convergence, P1P0]

lindsayad commented 5 years ago

In general, the only thing the a priori estimates predict is the error in the various combined norms, so there is wiggle room for the flux variable to converge at a higher rate and we do see this (e.g. O(h^1.5) for constant monomials) in some problems.

I guess it also leaves wiggle room for the flux variable to converge at a lower rate? I'm referring to the P2P0 case, where lambda only converges at a first-order rate in the L2 norm. That's worse than the convergence rate for P1P0!

I need some help with equation 67 in your technical report @jwpeterson. Given that I only look at Sobolev spaces a couple of times a year, there are still many things I don't understand. How can I compute an H^(k+1) norm of u (with k >= 1) when I've restricted the trial functions that compose u to the H^1 space? IIRC the exponent indicates the number of weak derivatives we must be able to take for a function to belong to that space.

lindsayad commented 5 years ago

Perhaps the two methods are not exactly equivalent after all.

Given the P1P1 and P1P0 results above, I think we can definitely say the methods are not equivalent and that overall the new method appears superior...

jwpeterson commented 5 years ago

I think we can definitely say the methods are not equivalent and that overall the new method appears superior...

Agreed, below are some comments on the plots:

P2P0: I would expect lambda to converge at O(h^1.5) in this case; I'm not sure why it doesn't. Technically this disagrees with the a priori estimates, but I'm not sure how big a deal that is... it might be worth looking into if you specifically cared about it, but I'd say overall these results seem fine, e.g. it's good that the primal variable still converges at the optimal (cubic) rate.

P1P1: In this case the primal variable should converge at O(h^2), so the "new y" case looks correct to me. "old y" appears to be converging at a higher rate, but I don't think that's correct. Also the error is higher, so I think something in the old mortar implementation might not be quite right in this case. We also see O(h^1.5) superconvergence for the new approach to computing the flux, but not O(h^2.5), as I believe it is limited by the order of the primal variable. This result is consistent with the theory. I'd say overall the new approach is a clear winner for this case.

P1P0: It seems like something is definitely wrong with the old formulation in this case; it's possible it just has a bug, because the flux doesn't converge at all! The new formulation's flux is again superconvergent, and in my opinion this formulation gives you the most bang for your buck, since you achieve high-order convergence for the flux using the cheapest possible CONSTANT, MONOMIAL discretization. I also found that this one was generally more stable than P1P1 in my testing.

jwpeterson commented 5 years ago

I need some help with equation 67 in your technical report @jwpeterson.

The norms on the rhs of (67) are norms of the true solution "u" and "true" flux lambda, so you don't really need to compute them. The idea is that the true solution u must be in at least H^2 if you want the finite element solution to converge to it at O(h) in the combined norm. If the true solution u has some singularity and thus is not in H^2, you won't get the "optimal" rate of convergence.

Another way to think of it is: the smoother the true solution is, the higher-order polynomials you can use to approximate it. For example, if u were in H^3, then the combined norm can converge at O(h^2) if you use quadratic elements (this is the case you tested). Since u is probably in C^{\infty} for the test problem you are solving, you could effectively use any p and get O(h^p) convergence in this norm. Of course you are limited to p=2 if you are using Lagrange elements in libmesh :)
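To make the regularity requirement concrete, a standard a priori estimate of this flavor (not the exact statement of (67), just its generic shape) looks like

$$ \|u - u_h\|_{H^1(\Omega)} \ \le \ C \, h^{\min(p,\,k)} \, \|u\|_{H^{k+1}(\Omega)}, $$

valid whenever the true solution satisfies u in H^{k+1}(Omega); the norm on the right involves only the exact solution, so it never has to be computed, it only has to be finite.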

lindsayad commented 5 years ago

The norms on the rhs of (67) are norms of the true solution "u" and "true" flux lambda, so you don't really need to compute them. The idea is that the true solution u must be in at least H^2 if you want the finite element solution to converge to it at O(h) in the combined norm. If the true solution u has some singularity and thus is not in H^2, you won't get the "optimal" rate of convergence.

This was very helpful, thanks.

Technically this disagrees with the a priori estimates,

Still trying to work on my understanding here. Does it really disagree? Doesn't equation 67 simply say the combined norm will be bounded from above by the RHS? So if I use the mesh-dependent norm of equation 66 without any stabilization, then I just have the H1 norm of the primal variable. For the P2P0 case, if I plot the convergence of the H1 norm of u for a certain sequence of h's, I get quadratic convergence.

Now I've actually gone ahead and computed the H3 and H1 norms of the primal and Lagrange multiplier exact solutions. With that data I plotted the RHS of equation 67 as a function of h:

[Figure: RHS of equation 67 as a function of h]

So for sufficiently large values of h, the upper bound on the error tracks with the first term, i.e. the h^2 term, while for sufficiently small values of h, the upper bound on the error tracks with the h^1.5 term. No surprise there... I could have just taken a critical (key word there) look at the equation to see that.

I computed the smallest h size for the most refined mesh and it is equal to about 5e-3 which from the plot suggests I should be approaching the transition region from h^2 convergence to h^1.5 convergence. So maybe if I refine a bit more I can see that...

jwpeterson commented 5 years ago

We expect O(h^1.5) convergence in the combined norm in the P2P0 case as h->0 (see Table 1 in the report). From your plot I thought it looked like you were going to get O(h) convergence since the flux alone was converging at O(h)...

jwpeterson commented 5 years ago

Actually... I think I take back my comments about P2P0. If you are not using stabilization then I think the estimate (67) simply doesn't apply, since (66) simply isn't a relevant norm when delta=0. The only other estimate that would be close is (65) but it only applies to equal-order approximations. So it's entirely possible that your results for P2P0 are correct/don't indicate a bug. There may be some other reason we don't obtain a "superconvergent" (extra half power of h) flux in this case, but it just isn't covered by these estimates. Sorry for any confusion...

lindsayad commented 5 years ago

Fair enough. I think I'll end my numerical experimentation then. Based on the overall results, I think the un-stabilized implementation is probably programmed correctly. I'm sure I'll add back the stabilization that you put in at some point, but the reality is that for most of the physics I'm interested in, there will be on-diagonal contributions from lambda.

lindsayad commented 5 years ago

I have to say that mortar is pretty cool. Here's an example with our new system: Dirichlet 1 on the left; a mortar equal-value constraint on the right; diffusion with a uniform source. The mesh is slightly finer on the RHS:

[Figures: 2D color plot and 1D line plot along the x-axis for the periodic mortar example]

lindsayad commented 5 years ago

I've got a conundrum... I often need to reinit three elements at a time when calculating, for example, the Lagrange multiplier residual: the lower-dimensional slave element that the LM variable lives on, the higher-dimensional slave interior parent that the primal variable lives on, and finally the higher-dimensional master interior parent that the primal variable also lives on. I can do this manually in the MortarConstraint class like I do now, where I conduct the element looping and call reinit directly on the FEBase objects I'm using (LM, slave primal, master primal). However, when I start adding multiple constraints on one mortar interface, for example for mechanical contact where I need to apply the contact force to both x and y displacement variables, it is a waste to do the element loop for each constraint. I would instead prefer to do FE and variable object reinit'ing through our Assembly and MooseVariableFE objects, like is done for our other constraints.

The problem with doing this is that with the given structure of Assembly and MooseVariableFE I can't do this for multiple elements. If I did not have the LM variable I could get by using Assembly::reinit and Assembly::reinitNeighbor which will build for me two different variable solutions, e.g. I could build the primal slave and primal master local solutions simultaneously. But the LM variable throws a wrench in the works....

Let's say I reinit on the lower-dimensional element first to build the LM solution: this will set _current_elem in Assembly and correspondingly set _element in MooseVariableFE. Then when we prepare the variable dof indices, we'll get the right thing for the LM variable but not for the primal variable that lives on the interior. And vice versa... if we reinit on the higher-dimensional element, we'll have the right thing for the primal variable but not for the LM variable. When the dof indices are wrong, the sizes of the local residuals and Jacobians will also be wrong, and the locally constructed variable solutions will be wrong. In the worst-case scenario, perhaps we reinit the interior element last and then the LM dof indices and local solution sizes are zero!

So what's the right solution here? Add another element, e.g. _current_elem, _current_neighbor_elem, and _current_lower_d_elem? I don't really like that... we already essentially have duplicate code to handle computing element and neighbor, and I don't really want to add 50% more code that will look the same. Of course, maybe all this code could get combined into a single method (a rough sketch of what I mean is below). I could continue to do element looping and direct FE reinit'ing in my MortarConstraints and just not worry about the redundant work, but that doesn't seem that smart!
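For illustration, here's roughly what I'm picturing as a single combined entry point; everything here is a hypothetical sketch (the member names mirror the existing element/neighbor ones, but none of this exists in Assembly today):

#include "libmesh/elem.h"

using libMesh::Elem;

// Hypothetical sketch only: a single reinit entry point that tracks all three
// elements involved in a mortar constraint, so that each variable's dof indices
// and local solution are prepared against the element it actually lives on.
class MortarAssemblySketch
{
public:
  void reinitMortar(const Elem * lower_d_elem, // lower-dimensional slave elem (LM variable)
                    const Elem * slave_ip,     // slave interior parent (primal variable)
                    const Elem * master_ip)    // master interior parent (primal variable)
  {
    _current_lower_d_elem = lower_d_elem;
    _current_elem = slave_ip;
    _current_neighbor_elem = master_ip;
    // ...prepare each variable's dof indices / local vectors here, keyed off the
    // element the variable is defined on rather than whichever was reinit'd last...
  }

private:
  const Elem * _current_elem = nullptr;
  const Elem * _current_neighbor_elem = nullptr;
  const Elem * _current_lower_d_elem = nullptr;
};

The point is just that the three element pointers get set together, so no single reinit call can leave the LM or primal dof indices stale.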

Maybe the big picture man @friedmud has an idea?

friedmud commented 5 years ago

My first thought is really along the lines of what you already brought up: to provide a reinit() that reinits all three.  I'm not worried too much about the code duplication - we can always fix that later.

I'll continue to think about it...

Derek

lindsayad commented 5 years ago

@jwpeterson Have you read about folks using "mortar" to add flux contributions (e.g. to the primal variable residuals) while enforcing the constraint equation at nodes? This paper and all subsequent papers from that group do that for mechanical contact. I've experimented with that in my mechanical contact tests with MOOSE, and I actually achieve slightly better nonlinear solver convergence while also achieving zero penetration at the nodes (which makes sense, obviously, since the constraints are enforced at the nodes).

I'm curious what the convergence rate of my approximate solution would be (my "finite element" convergence rate, as opposed to nonlinear solver convergence performance) for, say, the solution continuity problem if I tried this kind of mixed strategy...

jwpeterson commented 5 years ago

I have not heard of this, but it sounds interesting. Do you think it would work well in MOOSE given what you know about the way Constraints are currently handled? Do you still need to form mortar segments in order to compute the primal variable residual contributions correctly? What is the resulting algebraic structure of the Jacobian matrix (e.g. is it still a saddle point problem)?

I'm curious how the convergence rate of my approximate solution would be

I doubt there'd be much effect on FE convergence rates, i.e. they would probably remain optimal. Do they not really address this issue in their papers? Maybe it is too complicated to prove mathematically...

lindsayad commented 5 years ago

Below is a summary of some convergence/timing results I've obtained for different algorithms. The first two columns indicate whether the variable's residual is applied via Mortar or NodeFace constraints. The third column indicates the type of nonlinear complementarity (NCP) function I use to enforce the KKT conditions of mechanical contact (the two NCP functions are sketched just after the table).

| Lagrange multiplier | Displacement | NCP function | Time (arbitrary units) | Time steps | Nonlinear iterations |
| --- | --- | --- | --- | --- | --- |
| Nodal | Mortar | Min | 14.401 | 40 | 104 |
| Nodal | Mortar | FB | 17.752 | 40 | 135 |
| Nodal | Nodal | Min | 5.438 | 41 | 104 |
| Nodal | Nodal | FB | 6.770 | 41 | 149 |
| Mortar | Mortar | Min | 14.454 | 40 | 106 |
| Mortar | Mortar | FB | 19.027 | 40 | 136 |
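For reference, my reading of the Min/FB labels above: both are standard NCP functions that collapse the complementarity conditions a >= 0, b >= 0, ab = 0 into a single residual equation phi(a, b) = 0,

$$ \phi_{\min}(a, b) = \min(a, b), \qquad \phi_{\mathrm{FB}}(a, b) = a + b - \sqrt{a^2 + b^2}, $$

where FB is the Fischer-Burmeister function used in the residual snippet earlier in this thread.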

Notes:

 5 Nonlinear |R| = 4.007951e-04
    |residual|_2 of individual variables:
                  disp_x:    0.000399808
                  disp_y:    2.75599e-05
                  normal_lm: 5.52166e-06

The number of nodes in contact is 11

      0 Linear |R| = 4.007951e-04
      1 Linear |R| = 1.287307e-04
      2 Linear |R| = 8.423398e-06
      3 Linear |R| = 1.046825e-07
      4 Linear |R| = 8.017310e-09
      5 Linear |R| = 3.053040e-10
  Linear solve converged due to CONVERGED_RTOL iterations 5
 6 Nonlinear |R| = 4.432193e-04
    |residual|_2 of individual variables:
                  disp_x:    0.000396694
                  disp_y:    0.00019545
                  normal_lm: 2.96013e-05

The number of nodes in contact is 11

      0 Linear |R| = 4.432193e-04
      1 Linear |R| = 1.355935e-04
      2 Linear |R| = 1.216010e-05
      3 Linear |R| = 6.386952e-07
      4 Linear |R| = 2.235594e-08
      5 Linear |R| = 2.884193e-10
  Linear solve converged due to CONVERGED_RTOL iterations 5
 7 Nonlinear |R| = 4.008045e-04
    |residual|_2 of individual variables:
                  disp_x:    0.000399816
                  disp_y:    2.76329e-05
                  normal_lm: 5.29313e-06

The number of nodes in contact is 11

      0 Linear |R| = 4.008045e-04
      1 Linear |R| = 1.287272e-04
      2 Linear |R| = 8.423081e-06
      3 Linear |R| = 1.047782e-07
      4 Linear |R| = 8.054781e-09
      5 Linear |R| = 3.046073e-10
  Linear solve converged due to CONVERGED_RTOL iterations 5
 8 Nonlinear |R| = 4.432194e-04

jwpeterson commented 5 years ago

Hm... it may be that the problem is not difficult enough to justify the extra expense of computing the mortar contributions? So in this problem you would just use the Nodal approach, but maybe in more complicated applications it would only converge reliably using mortar?

Have you spent any time profiling where the extra time in the Mortar cases is spent? There might be some low-hanging fruit for optimization.

lindsayad commented 5 years ago

Do you still need to form mortar segments in order to compute the primal variable residual contributions correctly?

Yes, I guess I don't see how to get those integrals done correctly without using mortar.

What is the resulting algebraic structure of the Jacobian matrix (e.g. is it still a saddle point problem)?

Yes, it's still a saddle point problem.
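i.e., schematically (a rough sketch, ordering the primal displacement dofs first and the LM dofs second), the Jacobian keeps the familiar block form

$$ J \sim \begin{pmatrix} A & B \\ C & 0 \end{pmatrix}, $$

where A couples the displacements to themselves and B, C carry the LM/primal coupling; the (near-)zero diagonal block in the LM rows is what gives the system its saddle point character. As noted elsewhere in this thread, some of the contact formulations do contribute on the lambda diagonal, but the overall block structure is still essentially this.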

Do they not really address this issue in their papers?

No, they don't do any a priori or a posteriori error analysis. And the first paper was from Wohlmuth, so that's actually a little surprising to me because she seems like a math wiz.

Hm... it may be that the problem is not difficult enough to justify the extra expense of computing the mortar contributions? So in this problem you would just use the Nodal approach, but maybe in more complicated applications it would only converge reliably using mortar?

Have you spent any time profiling where the extra time in the Mortar cases is spent? There might be some low-hanging fruit for optimization.

I think that a robust nonlinear solve could be a huge win for the more complicated contact problems in which I've seen this ping-ponging behavior time and again, even if it's 3x slower per iteration. I agree that there could very well be some low-hanging fruit. I have not done any profiling yet; I most certainly will once I start running larger problems.

lindsayad commented 4 years ago

@jwpeterson Just for fun I did a convergence study where I enforce continuity through a node-face (also known as node-to-segment (NTS)) geometric discretization. It looks like the method is indeed convergent, but it's just a bit suboptimal. The orange curve is mortar with a first-order Lagrange basis for the primal variable and a constant monomial for the Lagrange multiplier; the blue curve is the node-face discretization with first-order Lagrange for the primal variable (no need for an explicit Lagrange multiplier variable).

[Figure: L2 error convergence, mortar (orange) vs. node-face (blue)]

jwpeterson commented 4 years ago

OK, interesting. I guess this is for the non-conforming mesh "solution continuity enforcement" case where there is no gap? You might also check the global H1 error since it's easy to do in MOOSE... I would expect it to be O(h) for the mortar and something sub-linear for NTS?

Also the mortar should automatically give you a convergent (possibly higher-order accurate) representation of the flux across the non-conforming interface. Not sure if there is anything analogous possible in NTS other than differentiating the FE solution which typically loses some accuracy... how important that is of course depends on the application.

lindsayad commented 4 years ago

OK, interesting. I guess this is for the non-conforming mesh "solution continuity enforcement" case where there is no gap?

Correct.

You might also check the global H1 error since it's easy to do in MOOSE... I would expect it to be O(h) for the mortar and something sub-linear for NTS?

Exactly what we see...

[Figure: H1 error convergence, mortar vs. node-face]

Also the mortar should automatically give you a convergent (possibly higher-order accurate) representation of the flux across the non-conforming interface.

Yea this is what I've always seen with our mortar implementation.

Not sure if there is anything analogous possible in NTS other than differentiating the FE solution which typically loses some accuracy... how important that is of course depends on the application.

Yea I wouldn't know any other way to do it than doing that differentiation along the boundary.

lindsayad commented 4 years ago

A final plot that now includes a hybrid scheme in which the primal residuals are computed via mortar integrals but the LM residual is determined pointwise at the slave nodes. It's interesting that the primal convergence rate is exactly the same; however, the LM (i.e. flux) convergence rate is much higher for the pure mortar scheme, e.g. O(h^3/2) vs. O(h^1/2) when using a first-order Lagrange basis for the LM variable. Figure and table summary below:

[Figure: convergence comparison of the mortar, hybrid, and node-face schemes]

Convergence summary

| Scheme | L2 primal | H1 primal | L2 LM |
| --- | --- | --- | --- |
| Mortar P1P1 | 2.0 | 1.0 | 1.5 |
| Mortar P1P0 | 2.0 | 1.0 | 1.4 |
| Hybrid | 2.0 | 1.0 | 0.5 |
| Node-face | 1.7 | 0.7 | N/A |

jwpeterson commented 4 years ago

the LM residual is determined pointwise on the slave nodes.

So in the hybrid method, do you still have the choice of either P0 or P1 for the LM variable? Is the hybrid method easier to implement in higher dimensions?

lindsayad commented 4 years ago

So in the hybrid method, do you still have the choice of either P0 or P1 for the LM variable?

No, you have to go with at least P1 for the LM variable.

Is the hybrid method easier to implement in higher dimensions?

Nah, you still have to build the mortar segments for assembling the primal residuals and Jacobians. But 3D mortar was determined to be a top priority for me this FY, so I'll be starting on that shortly. If you have any references you think I should read for that, I'm all ears! I know it's going to be brutal...

lindsayad commented 5 months ago

IMO this is sufficiently done. We can create issues for sub-tasks as needed.