precice / dealii-adapter

A coupled structural solver written with the C++ finite element library deal.II

Replace global vector for interface values by boundary vector #34

Closed · davidscn closed this issue 3 years ago

davidscn commented 4 years ago

The coupling values obtained from preCICE are currently stored in a global DoF vector, although only the coupling DoFs carry meaningful entries, i.e., most of the entries are wastefully allocated but unused. The original reason was the use of get_function_values() during the assembly operation, which can easily return local DoF values from a global solution vector. I just stumbled across another variant of this function, which allows specifying the indices of the respective vector from which we want to get the local function values: https://www.dealii.org/developer/doxygen/deal.II/classFEValuesBase.html#a378b552c1b8ca9f23f89677910e7a5cd
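For illustration, a minimal sketch of how this index-based overload is typically called. The names `coupling_data`, `coupling_id`, and the surrounding setup objects are assumptions for the sketch, not the adapter's actual members:

```cpp
// Hypothetical sketch of the index-based get_function_values() overload.
// 'coupling_data' is an assumed (smaller) vector holding interface values;
// note that the indices would still need to be mapped into that smaller
// vector (see the follow-up comment below).
std::vector<types::global_dof_index> local_dof_indices(fe.dofs_per_cell);
std::vector<Vector<double>> local_values(face_quadrature.size(),
                                         Vector<double>(dim));

for (const auto &cell : dof_handler.active_cell_iterators())
  for (unsigned int f = 0; f < GeometryInfo<dim>::faces_per_cell; ++f)
    if (cell->face(f)->at_boundary() &&
        cell->face(f)->boundary_id() == coupling_id)
      {
        fe_face_values.reinit(cell, f);
        cell->get_dof_indices(local_dof_indices);
        // Reads coupling_data at the given indices instead of indexing a
        // global solution vector by the cell's own DoF indices:
        fe_face_values.get_function_values(coupling_data,
                                           local_dof_indices,
                                           local_values);
      }
```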

This variant might be a much more efficient solution, but it requires changes in both the system assembly and the Adapter class.

davidscn commented 3 years ago

The given function doesn't help. Similar to the function shown above, a local evaluation in combination with an FEValuesExtractors object would be a possible solution:

```cpp
// Extract the vector-valued coupling contribution starting at component 0
const FEValuesExtractors::Vector coupling_contribution(0);

cell->get_dof_indices(local_dof_indices);
adapter.get_dof_values(dof_values, local_dof_indices);
// Evaluate at the face quadrature points from the local DoF values only
fe_face_values[coupling_contribution]
  .get_function_values_from_local_dof_values(dof_values, quad_values);
```

With this approach, only the usual preCICE std::vector is stored and no global vector would be required. However, it still requires some handling of indices.

A better approach would be to operate directly on quadrature points instead of support points, as is already done in some other coupled FE programs. The initial motivation for nodal support points was to obtain an (according to the polynomial degree) arbitrarily accurate interpolation at the quadrature points by using high polynomial degrees. Since the mapping reduces the convergence order to (at best) two, we cannot really hope for a higher convergence order of the coupling data (using higher-order polynomials might still make sense, though). Hence, we can use preCICE to generate interpolated values directly on the quadrature points.

I'm not sure whether the mapping onto quadrature points preserves the conservation property in the case of conservative mappings, since the interpolation might be inconsistent with respect to the polynomials. With consistent mappings, as used throughout our coupled codes, it would be no problem. Advantages of using quadrature points as opposed to the current implementation are discussed below.

It should be noted that this is only relevant for Neumann BCs. For Dirichlet BCs, data is usually required at the nodal support points. However, this repository currently only contains examples with Neumann BCs; only the code-gallery example uses Dirichlet BCs.

Another (very cool) advantage would be the decoupling of the number of unknowns from the coupling mesh. It would be possible to increase the number of quadrature points at the coupling interface, so that a finer sampling at the interface could compensate for the loss of convergence order when using high polynomial degrees.
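As a rough illustration of reading at quadrature points, a sketch along these lines (preCICE v2-style API calls; `coupling_id`, `mesh_id`, `data_id`, and the setup objects are assumed names):

```cpp
// Sketch: register the coupling mesh on the face quadrature points
// (illustrative only, not the adapter's actual implementation).
FEFaceValues<dim> fe_face_values(fe, face_quadrature,
                                 update_quadrature_points);
std::vector<double> positions;
for (const auto &cell : dof_handler.active_cell_iterators())
  for (unsigned int f = 0; f < GeometryInfo<dim>::faces_per_cell; ++f)
    if (cell->face(f)->at_boundary() &&
        cell->face(f)->boundary_id() == coupling_id)
      {
        fe_face_values.reinit(cell, f);
        for (const auto &q_point : fe_face_values.get_quadrature_points())
          for (unsigned int d = 0; d < dim; ++d)
            positions.push_back(q_point[d]);
      }

std::vector<int> vertex_ids(positions.size() / dim);
precice.setMeshVertices(mesh_id,
                        vertex_ids.size(),
                        positions.data(),
                        vertex_ids.data());

// During coupling, preCICE then interpolates directly onto these points:
std::vector<double> coupling_values(positions.size());
precice.readBlockVectorData(data_id,
                            vertex_ids.size(),
                            vertex_ids.data(),
                            coupling_values.data());
```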

uekerman commented 3 years ago

In principle, the idea to read directly at the Gauss points is good. We use the same approach in Nutils. Good catch with the conservation property; I never thought about this, but I think you are right.

I don't yet understand the difference between Neumann and Dirichlet boundary conditions here. Why does reading at Gauss points not work for Dirichlet BCs? If this is really the case, I am a bit reluctant: we would then need to distinguish two further cases, which makes the adapter code more complicated (thus harder to read and harder to maintain), correct?

The write locations are a completely different story. Do you suggest using the Gauss points here as well? For higher-order basis functions this can lead to ill-conditioned RBF problems (ftp://ftp.informatik.uni-stuttgart.de/pub/library/ncstrl.ustuttgart_fi/INPROC-2017-35/INPROC-2017-35.pdf). Writing at the nodal points (potentially sub-sampled) should be the better option.

davidscn commented 3 years ago

> I don't yet understand the difference between Neumann and Dirichlet boundary conditions here. Why does reading at Gauss points not work for Dirichlet BCs? If this is really the case, I am a bit reluctant: we would then need to distinguish two further cases, which makes the adapter code more complicated (thus harder to read and harder to maintain), correct?

To be investigated. We don't yet have any relevant case with Dirichlet BCs. I think there is, in any case, a way to make Gauss points work with Dirichlet BCs as well.

> The write locations are a completely different story. Do you suggest using the Gauss points here as well? For higher-order basis functions this can lead to ill-conditioned RBF problems (ftp://ftp.informatik.uni-stuttgart.de/pub/library/ncstrl.ustuttgart_fi/INPROC-2017-35/INPROC-2017-35.pdf). Writing at the nodal points (potentially sub-sampled) should be the better option.

Does the ill-conditioned RBF problem depend on the mapping direction? If I understand your comment correctly, writing from Gauss points is a problem, but reading is not? In general, the nodal support points in deal.II are distributed according to the Gauss-Lobatto quadrature points. Thus, the problem would already occur in the current setup(?).

To answer your question: yes, I would propose to use the same mesh for reading and writing. The primary reasons are the more user-friendly configuration due to a single mesh instead of separate meshes for reading and writing, and the consequently reduced amount of data handled by preCICE. If we ignore these points, it would be possible to write from equidistant quadrature points using the same adapter infrastructure: deal.II allows (more or less) any desired quadrature formula to be used in every cell independently. Hence, we could simply perform a sweep over all interface cells, including sub-sampling as desired; a sketch follows below. I agree that we should not introduce/support two different methodologies: reading/writing from DoFs leads to a completely different data access pattern than using quadrature points.
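A minimal sketch of such a sweep, assuming an equidistant rule built via QIterated on QTrapez (renamed QTrapezoid in recent deal.II versions); `coupling_id` and the number of subdivisions are illustrative choices:

```cpp
// Illustrative sweep over the interface faces with an equidistant,
// sub-sampled quadrature rule for the write locations only
// (assumed names; not the adapter's actual implementation).
const unsigned int n_subdivisions = 2 * fe.degree; // hypothetical choice
const QIterated<dim - 1> write_quadrature(QTrapez<1>(), n_subdivisions);

FEFaceValues<dim> fe_face_values_write(fe,
                                       write_quadrature,
                                       update_values |
                                       update_quadrature_points);

for (const auto &cell : dof_handler.active_cell_iterators())
  for (unsigned int f = 0; f < GeometryInfo<dim>::faces_per_cell; ++f)
    if (cell->face(f)->at_boundary() &&
        cell->face(f)->boundary_id() == coupling_id)
      {
        fe_face_values_write.reinit(cell, f);
        // ...evaluate the solution at these (finer, equidistant) points
        //    and hand the values over to preCICE for writing
      }
```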

uekerman commented 3 years ago

> Does the ill-conditioned RBF problem depend on the mapping direction? If I understand your comment correctly, writing from Gauss points is a problem, but reading is not?

Pretty much. It is a problem if you need to create an interpolant based on Gauss points. So for a consistent mapping it is the "from" mesh.

> In general, the nodal support points in deal.II are distributed according to the Gauss-Lobatto quadrature points. Thus, the problem would already occur in the current setup(?).

If we switch to Gauss points, yes. Of course, the higher the order, the worse for the RBF interpolant. For "low-order" Gauss-Lobatto, using the Gauss points is still OK.

davidscn commented 3 years ago

> If we switch to Gauss points, yes. Of course, the higher the order, the worse for the RBF interpolant. For "low-order" Gauss-Lobatto, using the Gauss points is still OK.

We need to distinguish the Gauss(-Legendre) quadrature points from the Gauss-Lobatto quadrature points; maybe I was not clear enough. In the current codes, the DoFs live on the Gauss-Lobatto points in order to preserve a reasonable conditioning of the system matrices (in particular the mass matrix) for high-order polynomials; this has nothing to do with the applied quadrature formula. For integration, the DoFs are interpolated to the Gauss-Legendre quadrature points. So the question remains: do we already have conditioning problems with our current codes? I think in practice we will not exceed a polynomial degree of p ≈ 5.
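To make the distinction concrete, a minimal sketch in standard deal.II usage (not adapter code):

```cpp
// Support points vs. integration points in deal.II (illustrative only).
constexpr unsigned int dim    = 2;
const unsigned int     degree = 4;

// DoF support points: Gauss-Lobatto distribution (also what FE_Q(degree)
// uses by default in recent deal.II versions), chosen for well-conditioned
// system matrices at high polynomial degrees.
FE_Q<dim> fe(QGaussLobatto<1>(degree + 1));

// Integration: a separate Gauss(-Legendre) rule; the DoF values are
// interpolated to these points during assembly.
QGauss<dim> quadrature(degree + 1);
```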

davidscn commented 3 years ago

Has been addressed in #35