ORNL-CEES / DataTransferKit

A library for multiphysics solution transfer. ARCHIVED
https://datatransferkit.readthedocs.io/en/dtk-3.0/
BSD 3-Clause "New" or "Revised" License

L2projection may lose accuracy for nonlinear transformations #90

Closed aprokop closed 7 years ago

aprokop commented 8 years ago

The code in question is in the file packages/Operators/src/SharedDomain/DTK_L2ProjectionOperator.cpp. During mass matrix assembly it does this:

        // Fill the mass matrix.
        range_cardinality = range_support_ids.size();
        for ( int ni = 0; ni < range_cardinality; ++ni )
        {
            mm_values.assign( range_cardinality, 0.0 );
            for ( int p = 0; p < num_ip; ++p )
            {
                temp =
                    range_entity_measure * int_weights[p] * shape_evals[p][ni];
                for ( int nj = 0; nj < range_cardinality; ++nj )
                {
                    mm_values[nj] += temp * shape_evals[p][nj];
                }
            }
            mass_matrix->insertGlobalValues( range_support_ids[ni],
                                             range_support_ids(),
                                             mm_values() );
        }

The question is: why does the code not evaluate the Jacobian at every integration point, and instead simply multiplies by range_entity_measure?

Even if it is correct, it would be a good idea to have a unit test for quadratics confirming that it works as expected.

The plan:

sslattery commented 7 years ago

One thing I am thinking about here is that the transformations for HEX8 are still nonlinear by definition - they just happen to be first order.

I think we should first assess the difference between the code above and using the properly evaluated Jacobian determinant for the transformation of the integrand. If that gives good results, then we should move on to the second-order functions.
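
To make the comparison concrete, here is a 1D toy sketch of the two quadrature schemes (hypothetical illustrative code, not DTK's API), under the assumption that the current code effectively scales normalized reference weights by a constant element measure:

```python
import math

# 2-point Gauss-Legendre rule on the reference element [-1, 1]
GAUSS2 = [(-1 / math.sqrt(3), 1.0), (1 / math.sqrt(3), 1.0)]

# A nonaffine 1D "element": x(xi) = xi + A*xi^2, Jacobian J(xi) = 1 + 2*A*xi
A = 0.3
x_map = lambda xi: xi + A * xi ** 2
jac = lambda xi: 1.0 + 2.0 * A * xi
MEASURE = 2.0       # element length: integral of J(xi) over [-1, 1]
REF_VOLUME = 2.0    # length of the reference element

def integrate_measure_scaled(f):
    # "Old" scheme: a constant factor (measure / reference volume)
    # stands in for the per-point Jacobian determinant.
    return sum(MEASURE * (w / REF_VOLUME) * f(x_map(xi)) for xi, w in GAUSS2)

def integrate_jacobian(f):
    # "New" scheme: evaluate |J| at every integration point.
    return sum(w * jac(xi) * f(x_map(xi)) for xi, w in GAUSS2)

# f(x) = 1: both schemes return the element measure, 2.0
print(integrate_measure_scaled(lambda x: 1.0))  # 2.0
print(integrate_jacobian(lambda x: 1.0))        # 2.0
# f(x) = x: exact integral over the element is 2*A = 0.6;
# only the Jacobian-weighted scheme recovers it.
print(integrate_measure_scaled(lambda x: x))    # 0.2 (wrong)
print(integrate_jacobian(lambda x: x))          # 0.6 (exact)
```

For an affine element the Jacobian is constant and the two schemes coincide exactly; the gap opens only when the reference-to-physical map is nonaffine, which is precisely the question raised above.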

aprokop commented 7 years ago

Agreed. Updated the checklist.

sslattery commented 7 years ago

This could possibly be a bug, so I want to understand this for future users of the version 2 L2 projection. In version 3 this will not be an issue, as we will have full access to the discretization.

aprokop commented 7 years ago

OK, so I now have a standalone example to investigate this. At the moment, I have an integrateFieldIncorrect() variant that took the code from the L2 projection mass matrix assembly. I now need to write a correct version of the integration using Jacobians. @sslattery suggested using IntrepidCell for that, but at the moment I don't understand how to properly initialize it.

sslattery commented 7 years ago

Did you take a look at the IntrepidCell unit test?

aprokop commented 7 years ago

Apparently, "addresses #" is not one of the keywords that are picked up. Directly linking to #119.

dalg24 commented 7 years ago

If you meant for Waffle.io to pick up that your PR addresses an existing issue, you need to include #<issue number> in the branch name. For instance, you could have called your branch l2-#119.

Rombur commented 7 years ago

https://github.com/waffleio/waffle.io/wiki/FAQs#branch-moving

aprokop commented 7 years ago

Brief summary of the conversation with @sslattery, will fill in more details later:

We may fix this through evaluateGradient(). In Intrepid, the Jacobian entries are computed as J_{ij} = \sum_k (q_k)_i (basis_grad(k))_j, where q_k are the node coordinates in the physical frame, and basis_grad(k) are the gradients of the basis functions. The sum is over all nodes/basis functions (the cardinality).

In ShapeFunction we have evaluateGradient(), which gives access to the basis_grads. Thus, we could fix the L2 projection by doing the following:

  1. Get support IDs of the cell
  2. Get physical coordinates of the support IDs
  3. Evaluate gradients at reference integration point
  4. Compute the Jacobian by summing over the nodes and basis gradients
  5. Compute the determinant of the Jacobian

The main assumption this relies on is that support dofs and nodes are the same, which should be true for regular FEM. The CVFEM case is less clear at this point.
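
Steps 2-5 can be sketched for a HEX8 element as follows (a minimal standalone sketch, assuming the standard trilinear basis on the [-1,1]^3 reference hex and the usual sign-based node ordering; illustrative code, not DTK's API):

```python
# Reference-node sign patterns for the 8 trilinear HEX8 basis functions
SIGNS = [(-1, -1, -1), (1, -1, -1), (1, 1, -1), (-1, 1, -1),
         (-1, -1, 1), (1, -1, 1), (1, 1, 1), (-1, 1, 1)]

def grad_basis(xi, eta, zeta):
    """Gradients of the 8 trilinear shape functions at a reference point."""
    return [(sx * (1 + sy * eta) * (1 + sz * zeta) / 8.0,
             sy * (1 + sx * xi) * (1 + sz * zeta) / 8.0,
             sz * (1 + sx * xi) * (1 + sy * eta) / 8.0)
            for sx, sy, sz in SIGNS]

def jacobian_det(nodes, ref_pt):
    """J_ij = sum_k (q_k)_i dN_k/dxi_j over the 8 nodes; return det(J)."""
    g = grad_basis(*ref_pt)
    J = [[sum(nodes[k][i] * g[k][j] for k in range(8)) for j in range(3)]
         for i in range(3)]
    return (J[0][0] * (J[1][1] * J[2][2] - J[1][2] * J[2][1])
          - J[0][1] * (J[1][0] * J[2][2] - J[1][2] * J[2][0])
          + J[0][2] * (J[1][0] * J[2][1] - J[1][1] * J[2][0]))

# Unit cube [0,1]^3: the map is x = (xi+1)/2 per coordinate,
# so J = I/2 everywhere and det(J) = 1/8.
unit_cube = [tuple((s + 1) / 2.0 for s in sgn) for sgn in SIGNS]
print(jacobian_det(unit_cube, (0.0, 0.0, 0.0)))  # 0.125
```

For an affine element det(J) is the same at every reference point; a perturbed hex would give a point-dependent determinant, which is exactly what the quadrature above needs per integration point.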

The plan:

aprokop commented 7 years ago

A few questions regarding the design of a common Jacobian/determinant interface:

The jacobian and jacobian_determinant are going to live inside a Jacobian(?) class.

The question remains: what should the Jacobian class be initialized with? Alternatives:

sslattery commented 7 years ago

Addressed partially by #119

aprokop commented 7 years ago

Running @sslattery's example (a 3D Exodus example with a ball embedded in a cube) produced no difference between the old and new L2 projection approaches.

Digging further: for hexahedra, since the integration weights are all equal, the difference should come from `abs(entity_measure - \sum_{ip} |J(x_ip)| w_ip)`. These differences in the provided mesh were on the order of 1e-16 (except for cell 0, where it was 1.3e-1).

Trying to figure out what's going on, I went back to the tstIntegral test in #119. There the differences for perturbed meshes were on the order of 1e-3 to 1e-4. Looking further, that may be a red herring. Specifically, we have two different integration orders in place: one for integrating a function, and one for the mesh geometry and things like calculating the entity measure.

The ReferenceHexMesh internally uses IntrepidCellLocalMap, which for a reason I don't understand hardcodes the cell order to 1 (see, for instance, line 59 in IntrepidCellLocalMap.cpp). Thus, if I use a different integration order in the test (I was using 3), that creates a difference between the two. The reason is that the test integration order is explicitly passed to the ReferenceHex's IntegrationRule, which calls Intrepid's integration rule with that order.

So, I think the described problem hides the real problem. Theoretically, for a hexahedron, as long as all integration weights are the same there should be no difference between the old and new methods for f(x) = 1. We should only start seeing a difference for non-constant functions.
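
The geometry/function cubature mismatch can be illustrated with a 1D sketch (hypothetical, not DTK code): when the Jacobian of the element map is not constant, an order-1 rule for the measure disagrees with a higher-order rule, even before any function is integrated:

```python
import math

A = 0.1
# Element map x(xi) = xi + A*xi^3 on [-1, 1]; Jacobian J(xi) = 1 + 3*A*xi^2.
jac = lambda xi: 1.0 + 3.0 * A * xi ** 2

# Exact measure = integral of J over [-1, 1] = 2 + 2*A.
measure_order1 = 2.0 * jac(0.0)                                  # misses the A term
measure_order2 = jac(-1 / math.sqrt(3)) + jac(1 / math.sqrt(3))  # exact (J is quadratic)

print(measure_order1)  # 2.0
print(measure_order2)  # 2.2
```

If the measure comes from a hardcoded order-1 rule while the function integral uses an order-3 rule, this kind of discrepancy shows up even for f(x) = 1, which is consistent with the differences described above.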

After changing the integration order to 1 in the test, I got the following results:

perturbation DTK old DTK new Intrepid
0.00 7.390e-13 7.390e-13 2.062e+04
0.10 9.090e-04 9.090e-04 2.073e+04
0.20 7.272e-03 7.272e-03 2.072e+04
0.30 2.454e-02 2.454e-02 2.071e+04
0.40 5.818e-02 5.818e-02 2.059e+04

As expected, old and new DTK integration behave the same for f(x) = 1 when the integration order matches the underlying cell's integration order. It is unclear to me why the Intrepid integration misbehaves at this point.

aprokop commented 7 years ago

> After changing the integration order to 1 in the test, I got the following results: ... It is unclear to me why the Intrepid integration misbehaves at this point.

Looking further into this, it seems like a bug (in Intrepid or somewhere else) when using first-order integration on ReferenceHexMesh. The integral for each of the first 64 cells is 8 times larger than it is supposed to be, and after 64 cells they are all kind of messed up. The 2nd integration order works fine.

Not going to pursue this further at the moment. The next line of thinking is to switch the hardcoded 1 to 2 in ReferenceHexLocalMap and switch the integration order from 3 to 2 in tstIntegral. The expected results: DTK old and new are the same for f(x)=1 even for distorted meshes, but DTK new is better for non-constant f.

aprokop commented 7 years ago

The results are roughly what we expect (if we ignore the Intrepid column; I don't know what's going on there yet):

f(x) = 1

perturbation DTK old DTK new Intrepid
0.00 1.327e-11 1.327e-11 7.674e-13
0.10 2.842e-14 1.137e-13 1.990e-13
0.20 1.705e-13 3.979e-13 2.842e-14
0.30 5.400e-13 2.842e-14 5.684e-14
0.40 2.842e-14 4.263e-13 1.137e-13

f(x) = 9.3 x + 2.2 y + 3.4 z + 1.0

perturbation DTK old DTK new Intrepid
0.00 2.728e-12 2.728e-12 4.547e-12
0.10 4.294e-02 1.819e-12 7.349e-02
0.20 1.723e-01 1.000e-11 2.843e-01
0.30 3.868e-01 1.182e-11 6.110e-01
0.40 6.822e-01 7.276e-12 1.023e+00

aprokop commented 7 years ago

I think I fixed something in #127. Now, using integration_order = 1 gives exactly the same integrals for DTK old and DTK new for all functions, as expected, since the measure and the function integral are evaluated using the same cubature rules.

aprokop commented 7 years ago

DTK old: compute entity measure using hardcoded cubature (order=1), integrate using measure
DTK trs: compute entity measure using function cubature, integrate using measure
DTK new: integrate using Jacobians

Summary:

  1. f(x) = 1 => DTK trs and DTK new produce similar results as expected (they both compute measure using the same cubature)
  2. For f(x) != 1, DTK old error > DTK trs error > DTK new error
  3. Using integration order = 1 produces the same error for all integrals, as the cubature rules coincide.

The difference between DTK new and DTK trs is the one we are interested in. The difference between DTK trs and DTK old is caused by the implementation of HexReferenceMesh and IntrepidCell.

f(x) = 1

integration order = 1

pert DTK old DTK trs DTK new
0.00 7.390e-13 7.390e-13 7.390e-13
0.10 9.090e-04 9.090e-04 9.090e-04
0.20 7.272e-03 7.272e-03 7.272e-03
0.30 2.454e-02 2.454e-02 2.454e-02
0.40 5.818e-02 5.818e-02 5.818e-02

integration order = 2

pert DTK old DTK trs DTK new
0.00 1.327e-11 1.327e-11 1.327e-11
0.10 9.090e-04 2.842e-14 1.137e-13
0.20 7.272e-03 1.705e-13 3.979e-13
0.30 2.454e-02 5.400e-13 2.842e-14
0.40 5.818e-02 2.842e-14 4.263e-13

f(x) = 9.3 x + 2.2 y + 3.4 z + 1

integration order = 1

pert DTK old DTK trs DTK new
0.00 2.728e-12 2.728e-12 2.728e-12
0.10 2.234e-03 2.234e-03 2.234e-03
0.20 1.280e-01 1.280e-01 1.280e-01
0.30 5.582e-01 5.582e-01 5.582e-01
0.40 1.477e+00 1.477e+00 1.477e+00

integration order = 2

pert DTK old DTK trs DTK new
0.00 3.638e-12 2.728e-12 2.728e-12
0.10 2.234e-03 4.294e-02 1.819e-12
0.20 1.280e-01 1.723e-01 1.000e-11
0.30 5.582e-01 3.868e-01 1.182e-11
0.40 1.477e+00 6.822e-01 7.276e-12

f(x) = x y

integration order = 1

pert DTK old DTK trs DTK new
0.00 1.137e-13 1.137e-13 1.137e-13
0.10 3.687e-03 3.687e-03 3.687e-03
0.20 6.184e-04 6.184e-04 6.184e-04
0.30 3.077e-02 3.077e-02 3.077e-02
0.40 1.125e-01 1.125e-01 1.125e-01

integration order = 2

pert DTK old DTK trs DTK new
0.00 5.684e-13 2.274e-13 2.274e-13
0.10 1.382e-03 4.011e-04 8.189e-07
0.20 3.689e-03 3.126e-03 1.361e-05
0.30 3.658e-02 2.848e-03 7.149e-05
0.40 1.192e-01 6.406e-03 2.341e-04

sslattery commented 7 years ago

For f(x) = xy, would integration order 3 be the right one to use?

aprokop commented 7 years ago

I tried both integration orders 2 and 3, and they produced exactly the same results. I think that when you specify integration order 2, the produced cubature is accurate up to degree 2*2-1 = 3.
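
That matches the standard Gauss-Legendre property: an n-point rule integrates polynomials up to degree 2n-1 exactly. A quick standalone check with the 2-point rule:

```python
import math

# 2-point Gauss-Legendre rule on [-1, 1]: nodes +-1/sqrt(3), both weights 1
NODES = [-1 / math.sqrt(3), 1 / math.sqrt(3)]
quad = lambda f: sum(f(x) for x in NODES)

print(quad(lambda x: x ** 2))  # 2/3 -- exact (degree 2)
print(quad(lambda x: x ** 3))  # 0   -- exact (degree 3 = 2*2-1)
print(quad(lambda x: x ** 4))  # 2/9 -- exact value is 2/5, so degree 4 fails
```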

sslattery commented 7 years ago

OK - so it's not very accurate in that case then

dalg24 commented 7 years ago

No, finite elements just won't capture (x, y) -> f(x, y) = x y if you use tensor product of polynomials as shape function basis.

aprokop commented 7 years ago

Remaining tasks:

aprokop commented 7 years ago

I see no difference between the original and fixed versions of the L2 projection when running the STK interpolation example with perturbed nodes. The example reads in a [-5,5]^3 mesh (provided by @sslattery) and randomly perturbs the nodes (in the same manner in both source and target meshes). I think the perturbation is done correctly, as the values change after some later (unchanged) code fills in a field.

I don't understand why nothing changes. The problem should be similar to the standalone Integral example. The STK example uses 2nd order for the function integration, so the results should be similar to the 4th table in comment https://github.com/ORNL-CEES/DataTransferKit/issues/90#issuecomment-258515643.

I see a few things:

The last one should not matter much, as the function in question, f(x) = |x| + |y| + |z| + 1, is close to linear. The first two may matter.

sslattery commented 7 years ago

It might be because the mesh is the same

aprokop commented 7 years ago

The same meshes do not seem to explain the problem. Meshes perturbed in different ways (seeded with a different srand value, not perturbing the boundary) result in similar errors in both the old and new approaches.

aprokop commented 7 years ago

Here are the updated results from the integral test when also integrating a field generated by a test function (fDTK means integrating a field rather than an exact function). The main difference is a significant increase in the approximation error for a nonlinear field, as the interpolation error now comes into play.

One thing I'm still not sure about is why DTK trs and fDTK trs behave very similarly for a distorted field and a linear function.

Overall, the results are as expected.

f(x) = 1

integration order = 1

pert DTK old DTK trs DTK new fDTK old fDTK trs fDTK new
0.00 7.390e-13 7.390e-13 7.390e-13 7.390e-13 7.390e-13 7.390e-13
0.10 9.090e-04 9.090e-04 9.090e-04 9.090e-04 9.090e-04 9.090e-04
0.20 7.272e-03 7.272e-03 7.272e-03 7.272e-03 7.272e-03 7.272e-03
0.30 2.454e-02 2.454e-02 2.454e-02 2.454e-02 2.454e-02 2.454e-02
0.40 5.818e-02 5.818e-02 5.818e-02 5.818e-02 5.818e-02 5.818e-02

integration order = 2

pert DTK old DTK trs DTK new fDTK old fDTK trs fDTK new
0.00 1.327e-11 1.327e-11 1.327e-11 1.327e-11 1.327e-11 1.327e-11
0.10 9.090e-04 2.842e-14 1.137e-13 9.090e-04 2.842e-14 1.137e-13
0.20 7.272e-03 1.705e-13 3.979e-13 7.272e-03 1.705e-13 3.695e-13
0.30 2.454e-02 5.400e-13 2.842e-14 2.454e-02 5.400e-13 2.842e-14
0.40 5.818e-02 2.842e-14 4.263e-13 5.818e-02 2.842e-14 4.263e-13

f(x) = 9.3 x + 2.2 y + 3.4 z + 1

integration order = 1

pert DTK old DTK trs DTK new fDTK old fDTK trs fDTK new
0.00 2.728e-12 2.728e-12 2.728e-12 2.728e-12 2.728e-12 2.728e-12
0.10 2.234e-03 2.234e-03 2.234e-03 2.234e-03 2.234e-03 2.234e-03
0.20 1.280e-01 1.280e-01 1.280e-01 1.280e-01 1.280e-01 1.280e-01
0.30 5.582e-01 5.582e-01 5.582e-01 5.582e-01 5.582e-01 5.582e-01
0.40 1.477e+00 1.477e+00 1.477e+00 1.477e+00 1.477e+00 1.477e+00

integration order = 2

pert DTK old DTK trs DTK new fDTK old fDTK trs fDTK new
0.00 3.638e-12 2.728e-12 2.728e-12 2.728e-12 2.728e-12 2.728e-12
0.10 2.234e-03 4.294e-02 1.819e-12 2.234e-03 4.294e-02 1.819e-12
0.20 1.280e-01 1.723e-01 1.000e-11 1.280e-01 1.723e-01 1.000e-11
0.30 5.582e-01 3.868e-01 1.182e-11 5.582e-01 3.868e-01 1.091e-11
0.40 1.477e+00 6.822e-01 7.276e-12 1.477e+00 6.822e-01 7.276e-12

f(x) = x y

integration order = 1

pert DTK old DTK trs DTK new fDTK old fDTK trs fDTK new
0.00 1.137e-13 1.137e-13 1.137e-13 1.137e-13 1.137e-13 1.137e-13
0.10 3.687e-03 3.687e-03 3.687e-03 3.284e-03 3.284e-03 3.284e-03
0.20 6.184e-04 6.184e-04 6.184e-04 1.273e-02 1.273e-02 1.273e-02
0.30 3.077e-02 3.077e-02 3.077e-02 4.972e-02 4.972e-02 4.972e-02
0.40 1.125e-01 1.125e-01 1.125e-01 1.364e-01 1.364e-01 1.364e-01

integration order = 2

pert DTK old DTK trs DTK new fDTK old fDTK trs fDTK new
0.00 5.684e-13 2.274e-13 2.274e-13 5.684e-13 3.411e-13 2.274e-13
0.10 1.382e-03 4.011e-04 8.189e-07 3.284e-03 4.258e-03 4.950e-03
0.20 3.689e-03 3.126e-03 1.361e-05 1.273e-02 5.891e-03 1.022e-02
0.30 3.658e-02 2.848e-03 7.149e-05 4.972e-02 1.031e-02 1.597e-02
0.40 1.192e-01 6.406e-03 2.341e-04 1.364e-01 2.394e-02 2.271e-02

aprokop commented 7 years ago

What is the error of integrating a piecewise function over an interval? Does this error go down with higher-order cubatures? For example, integrating f(x) = |x| on [-1,1] using Gaussian quadrature of increasing order results in this:

ans =  1
ans =  0.15470
ans =  0.13934
ans =  0.042535
ans =  0.055150
ans =  0.019894
ans =  0.029461
ans =  0.011528
ans =  0.018310
ans =  0.0075217
ans =  0.012477
ans =  0.0052944
ans =  0.0090461
ans =  0.0039287
ans =  0.0068585
ans =  0.0030310
ans =  0.0053784
ans =  0.0024095
ans =  0.0043306
ans =  0.0019613

So, the error does not consistently go down. I would assume that it is actually 0th-order accurate. The situation is going to be even worse for a function with more non-smooth points, like f(x) = | 1 - 2|x||

ans =  1
ans =  0.69060
ans =  0.49910
ans =  0.080086
ans =  0.027479
ans =  0.018218
ans =  0.064605
ans =  0.058439
ans =  0.063010
ans =  0.011352
ans =  0.0054172
ans =  0.0044476
ans =  0.020585
ans =  0.020227
ans =  0.023419
ans =  0.0042835
ans =  0.0022271
ans =  0.0019561
ans =  0.0099887
ans =  0.010158

I think this is what we actually encounter when doing the assembly matrix integration. Unless we use common grid refinement, we can't resolve the error, and there is no point in going beyond 1st order accurate cubatures anyway.

I wonder if that's why we can't see the difference when the meshes don't match. If the meshes match, we should be able to see the difference when using high-order functions, as the piecewise component of the error is going to be missing.
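
The |x| experiment above is easy to reproduce. Here is a standalone sketch (tabulated Gauss-Legendre rules, not the script used above) that recovers the first few error values for the integral of |x| on [-1,1], whose exact value is 1:

```python
import math

# Gauss-Legendre nodes/weights on [-1, 1], tabulated for orders 1-4
RULES = {
    1: [(0.0, 2.0)],
    2: [(-1 / math.sqrt(3), 1.0), (1 / math.sqrt(3), 1.0)],
    3: [(-math.sqrt(3 / 5), 5 / 9), (0.0, 8 / 9), (math.sqrt(3 / 5), 5 / 9)],
    4: [(-0.8611363115940526, 0.3478548451374538),
        (-0.3399810435848563, 0.6521451548625461),
        (0.3399810435848563, 0.6521451548625461),
        (0.8611363115940526, 0.3478548451374538)],
}

def error_abs_x(n):
    """Quadrature error for the integral of |x| over [-1, 1] (exact value 1)."""
    return abs(sum(w * abs(x) for x, w in RULES[n]) - 1.0)

for n in range(1, 5):
    print(n, error_abs_x(n))
# matches the first four values above: 1, 0.15470, 0.13934, 0.042535
```

The kink at x = 0 sits inside the integration interval, so the polynomial exactness of the rule does not help; only placing element boundaries at the kink (a matching mesh) restores fast convergence.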

aprokop commented 7 years ago

It seems that the hypothesis was correct. Below are the results for a piecewise-smooth function (the inflection is in the middle of a subdomain). We see that in the case of discontinuities, using Jacobians is no more accurate than the simple approximation (at least for lower-order methods). Just look at integration order = 2, perturbation != 0, DTK old vs DTK new.

f(x) = |x - 1.55|

integration order = 1

pert DTK old DTK trs DTK new fDTK old fDTK trs fDTK new
0.00 8.527e-14 8.527e-14 8.527e-14 5.684e-14 5.684e-14 5.684e-14
0.10 9.116e-03 9.116e-03 9.116e-03 3.724e-01 3.724e-01 3.724e-01
0.20 3.902e-02 3.902e-02 3.902e-02 7.432e-01 7.432e-01 7.432e-01
0.30 9.357e-02 9.357e-02 9.357e-02 1.108e+00 1.108e+00 1.108e+00
0.40 1.767e-01 1.767e-01 1.767e-01 1.465e+00 1.465e+00 1.465e+00

integration order = 2

pert DTK old DTK trs DTK new fDTK old fDTK trs fDTK new
0.00 4.263e-13 4.121e-13 4.121e-13 4.263e-13 4.263e-13 4.263e-13
0.10 9.116e-03 9.600e-03 1.150e-02 3.724e-01 3.720e-01 3.747e-01
0.20 3.902e-02 3.824e-02 4.575e-02 7.432e-01 7.443e-01 7.546e-01
0.30 9.280e-02 8.487e-02 1.017e-01 1.108e+00 1.117e+00 1.140e+00
0.40 1.500e-01 1.248e-01 1.554e-01 1.465e+00 1.492e+00 1.531e+00

There is still a question of how exactly the results depend on the discontinuity. The assumption is that as the gradient difference decreases, the error goes down. But that starts to really depend on the gradients in the analytical solution of the PDEs.

aprokop commented 7 years ago

The plan now is to clean up the branch and commit everything except the change to the L2 projection method. So we will still use the old method, as we cannot show any improvement for the new one in most situations, and the new method has drawbacks: it makes some assumptions regarding the support ids (their order and other things).

sslattery commented 7 years ago

I think that is a good plan

aprokop commented 7 years ago

@sslattery Should I commit a "fixed" version of the L2 projection hidden behind #ifdefs? This way we can come back to this issue in the future without trying to resurrect the code. I can put a long comment there describing the issues.

sslattery commented 7 years ago

I think I would rather have a clean version committed. You can let your branch persist to keep the fixes for the full L2 projection.

aprokop commented 7 years ago

I would not want to leave it in my own branch. I think I can instead create a dtk-l2 branch in the main repo so that it does not disappear accidentally.

aprokop commented 7 years ago

Here is the L2Projection fix that we don't want to push to master just yet. The patches, including some modifications to the STK example to drive it, are at the end.

From 9f1a3adbc61e169942414ae2351bec597c8c16f2 Mon Sep 17 00:00:00 2001
From: Andrey Prokopenko <prokopenkoav@ornl.gov>
Date: Fri, 11 Nov 2016 12:09:32 -0500
Subject: [PATCH 1/2] L2Projection: added a commented out version of
 integration fix

In github issue #90 we tried to investigate the importance of not using
the Jacobian when computing mass and assembly matrices. Instead of
Jacobians it uses range_entity_measure.

Our assumption was that it is wrong to use this approach when the
transformation from a reference to physical is exactly nonlinear.

For more discussions see Issue #90, and PR #119 and #127.

The tests demonstrated that but only for the case when integrated
function is smooth on each element. For non-smooth tests the results
show no distinction. Thus, we don't enable this by default.
---
 .../src/SharedDomain/DTK_L2ProjectionOperator.cpp  | 70 ++++++++++++++++++----
 1 file changed, 59 insertions(+), 11 deletions(-)

diff --git a/packages/Operators/src/SharedDomain/DTK_L2ProjectionOperator.cpp b/packages/Operators/src/SharedDomain/DTK_L2ProjectionOperator.cpp
index b6c0398..8ed221d 100644
--- a/packages/Operators/src/SharedDomain/DTK_L2ProjectionOperator.cpp
+++ b/packages/Operators/src/SharedDomain/DTK_L2ProjectionOperator.cpp
@@ -41,13 +41,6 @@
 #include <algorithm>
 #include <map>

-#include "DTK_BasicEntityPredicates.hpp"
-#include "DTK_DBC.hpp"
-#include "DTK_IntegrationPoint.hpp"
-#include "DTK_L2ProjectionOperator.hpp"
-#include "DTK_ParallelSearch.hpp"
-#include "DTK_PredicateComposition.hpp"
-
 #include <Teuchos_OrdinalTraits.hpp>

 #include <Teuchos_XMLParameterListCoreHelpers.hpp>
@@ -62,6 +55,48 @@

 #include <Stratimikos_DefaultLinearSolverBuilder.hpp>

+// In github issue #90 we tried to investigate the importance of not using the
+// Jacobian when computing mass and assembly matrices.
+// The code in question is the following:
+//   range_cardinality = range_support_ids.size();
+//   for ( int ni = 0; ni < range_cardinality; ++ni )
+//   {
+//       mm_values.assign( range_cardinality, 0.0 );
+//       for ( int p = 0; p < num_ip; ++p )
+//       {
+//           temp =
+//               range_entity_measure * int_weights[p] * shape_evals[p][ni];
+//           for ( int nj = 0; nj < range_cardinality; ++nj )
+//           {
+//               mm_values[nj] += temp * shape_evals[p][nj];
+//           }
+//       }
+//       mass_matrix->insertGlobalValues( range_support_ids[ni],
+//                                        range_support_ids(),
+//                                        mm_values() );
+//   }
+// So, instead of Jacobians it uses range_entity_measure.
+//
+// Our assumption was that it is wrong to use this approach when the
+// transformation from a reference to physical is exactly nonlinear.
+//
+// For more discussions see Issue #90, and PR #119 and #127.
+//
+// The tests demonstrated that but only for the case when integrated function
+// is smooth on each element. For non-smooth tests the results show no
+// distinction. Thus, we don't enable this by default.
+// #define FIX_INTEGRATION
+
+#include "DTK_BasicEntityPredicates.hpp"
+#include "DTK_DBC.hpp"
+#include "DTK_IntegrationPoint.hpp"
+#ifdef FIX_INTEGRATION
+#include "DTK_Jacobian.hpp"
+#endif
+#include "DTK_L2ProjectionOperator.hpp"
+#include "DTK_ParallelSearch.hpp"
+#include "DTK_PredicateComposition.hpp"
 namespace DataTransferKit
 {
 //---------------------------------------------------------------------------//
@@ -248,6 +283,10 @@ void L2ProjectionOperator::assembleMassMatrix(
     int num_ip = 0;
     double temp = 0;

+#ifdef FIX_INTEGRATION
+    DataTransferKit::Jacobian J( range_space );
+#endif
+
     // Assemble the mass matrix and integration point set.
     EntityIterator range_it;
     EntityIterator range_begin = range_iterator.begin();
@@ -264,6 +303,16 @@ void L2ProjectionOperator::assembleMassMatrix(
         range_integration_rule->getIntegrationRule( *range_it, d_int_order,
                                                     int_points, int_weights );

+        for ( int p = 0; p < num_ip; ++p )
+        {
+#ifndef FIX_INTEGRATION
+            int_weights[p] *= range_entity_measure;
+#else
+            int_weights[p] *=
+                J.jacobian_determinant( *range_it, int_points[p]() );
+#endif
+        }
+
         // Create new integration points.
         num_ip = int_weights.size();
         shape_evals.resize( num_ip );
@@ -295,8 +344,8 @@ void L2ProjectionOperator::assembleMassMatrix(
             mm_values.assign( range_cardinality, 0.0 );
             for ( int p = 0; p < num_ip; ++p )
             {
-                temp =
-                    range_entity_measure * int_weights[p] * shape_evals[p][ni];
+                temp = int_weights[p] * shape_evals[p][ni];
+
                 for ( int nj = 0; nj < range_cardinality; ++nj )
                 {
                     mm_values[nj] += temp * shape_evals[p][nj];
@@ -376,8 +425,7 @@ void L2ProjectionOperator::assembleCouplingMatrix(

             // Add the ip weights times measures scaled by the number of
             // domains in which ip was found.
-            export_measures_weights.push_back( current_ip.d_owner_measure *
-                                               current_ip.d_integration_weight /
+            export_measures_weights.push_back( current_ip.d_integration_weight /
                                                domain_ids.size() );

             // Add the ip shape function evals and support ids with padding.
-- 
2.7.4

Attachments: 0001-L2Projection-added-a-commented-out-version-of-integr.txt, 0002-STK-L2projection-test.txt

sslattery commented 7 years ago

Closing with merge of PR #127