precice / fenicsx-adapter

Experimental preCICE-adapter for the open source computing platform FEniCSx
GNU Lesser General Public License v3.0

Update adapter and partitioned-heat-conduction #1

Open BenjaminRodenberg opened 2 years ago

BenjaminRodenberg commented 2 years ago

In this PR we want to update the adapter and the partitioned heat conduction tutorial to FEniCSx.

Helpful: https://jorgensd.github.io/dolfinx-tutorial/chapter2/heat_equation.html

BenjaminRodenberg commented 2 years ago

I removed the parallelization, the support for PointSources and for nearest projection from this PR.

I also fixed the tests that are run via python3 setup.py test.

As a next step, we should get the partitioned heat equation running. Ideally, we implement more tests if the partitioned heat equation is still failing.

arvedes commented 2 years ago

I get the following error in the Neumann participant:

Traceback (most recent call last):
  File "/home/workdir/tutorials/partitioned-heat-conduction/fenicsx/heat.py", line 128, in <module>
    precice_dt = precice.initialize(coupling_boundary, read_function_space=W, write_object=u_D_function)
  File "/home/fenicsx-adapter/fenicsxprecice/fenicsxprecice.py", line 305, in initialize
    self.write_data(write_function)
  File "/home/fenicsx-adapter/fenicsxprecice/fenicsxprecice.py", line 191, in write_data
    self._interface.write_block_scalar_data(write_data_id, self._precice_vertex_ids, write_data)
  File "cyprecice/cyprecice.pyx", line 1012, in cyprecice.Interface.write_block_scalar_data
ValueError: Buffer has wrong number of dimensions (expected 1, got 2)

@BenjaminRodenberg do you have an idea what the reason for that is?

Furthermore, the coupling expression is not supported in DirichletBC for the Dirichlet participant; it raises a NotImplementedError because of a wrong dtype.

BenjaminRodenberg commented 2 years ago

I get the following error in the Neumann participant:

Traceback (most recent call last):
  File "/home/workdir/tutorials/partitioned-heat-conduction/fenicsx/heat.py", line 128, in <module>
    precice_dt = precice.initialize(coupling_boundary, read_function_space=W, write_object=u_D_function)
  File "/home/fenicsx-adapter/fenicsxprecice/fenicsxprecice.py", line 305, in initialize
    self.write_data(write_function)
  File "/home/fenicsx-adapter/fenicsxprecice/fenicsxprecice.py", line 191, in write_data
    self._interface.write_block_scalar_data(write_data_id, self._precice_vertex_ids, write_data)
  File "cyprecice/cyprecice.pyx", line 1012, in cyprecice.Interface.write_block_scalar_data
ValueError: Buffer has wrong number of dimensions (expected 1, got 2)

@BenjaminRodenberg do you have an idea what the reason for that is?

Looks like an n x d array where only an n array should be provided. Can you debug the write_data that the adapter provides to the bindings? If you are using the Neumann participant, it should write temperatures (scalar data). Note that for the Dirichlet participant the situation is less clear: if you write the normal-direction flux, scalar data is fine; if you write the vector-valued flux, you need vector data.
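For reference, this kind of shape mismatch can be guarded against before handing data to the bindings. A minimal numpy sketch (the function name is hypothetical, not adapter code):

```python
import numpy as np

def to_scalar_buffer(write_data):
    """Coerce adapter-side data to the 1-D layout the preCICE bindings
    expect for scalar fields. Raises if the data is genuinely vector valued."""
    write_data = np.asarray(write_data)
    if write_data.ndim == 2 and write_data.shape[1] == 1:
        # An (n, 1) column vector is scalar data in disguise: flatten it.
        return write_data.ravel()
    if write_data.ndim != 1:
        raise ValueError(
            f"expected scalar data of shape (n,) or (n, 1), got {write_data.shape}")
    return write_data

# An (n, 1) array like the one behind the traceback becomes (n,):
flat = to_scalar_buffer(np.zeros((5, 1)))
```

A genuinely two-component array would still raise, which surfaces the scalar-vs-vector question for the Dirichlet participant early.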

Furthermore, the coupling expression is not supported in DirichletBC for the Dirichlet participant; it raises a NotImplementedError because of a wrong dtype.

I understand that this means that there is a significant difference in how a DirichletBC is defined in dolfinx and in FEniCS. This can mean that we have to use an entirely different strategy here. Unfortunately, I do not really know enough about boundary conditions in dolfinx to come up with a good idea.

In the original fenics-adapter the coupling expression is used to create an interpolant from the data provided by preCICE (read_block_scalar_data etc.). See here. We use this strategy, because a FEniCS DirichletBC expects an (analytical) expression and not data.

If dolfinx's DirichletBC does not expect an (analytical) expression, but data, this might help us to entirely remove the complicated interpolation in the adapter. However, I am not sure how this works for higher-order function spaces. A useful resource might be our documentation on FEM meshes. It might also help to look into https://github.com/precice/dealii-adapter.

arvedes commented 2 years ago

I updated the adapter and the example; the output looks good to me. However, the error computation is still missing. The next steps are:

Note that the latest dolfinx version is required (I'm using the nemocrys/dolfinx-openfoam container based on dolfinx commit 358470227b).

arvedes commented 2 years ago

@BenjaminRodenberg what do you think about the current state? Can you confirm that the results are correct?

BenjaminRodenberg commented 2 years ago

@BenjaminRodenberg what do you think about the current state? Can you confirm that the results are correct?

@arvedes: I did a very quick review - just from the user perspective:

Unit tests

I tried to run the tests and I see that most tests are failing:

fenicsx-adapter$ python3 setup.py test
running test
WARNING: Testing via this command is deprecated and will be removed in a future version. Users looking for a generic test entry point independent of test runner are encouraged to use tox.
running egg_info
writing fenicsxprecice.egg-info/PKG-INFO
writing dependency_links to fenicsxprecice.egg-info/dependency_links.txt
writing requirements to fenicsxprecice.egg-info/requires.txt
writing top-level names to fenicsxprecice.egg-info/top_level.txt
reading manifest file 'fenicsxprecice.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'fenicsxprecice.egg-info/SOURCES.txt'
running build_ext
test_scalar_read (tests.integration.test_write_read.TestWriteandReadData)
Test to check if Adapter function read() passes correct parameters to the API function read_block_scalar_data() ... ERROR
test_scalar_write (tests.integration.test_write_read.TestWriteandReadData)
Test to check if Adapter function write() passes correct parameters to the API function write_block_scalar_data() ... ERROR
test_vector_read (tests.integration.test_write_read.TestWriteandReadData)
Test to check if Adapter function read() passes correct parameters to the API function read_block_vector_data() ... ERROR
test_vector_write (tests.integration.test_write_read.TestWriteandReadData)
Test to check if Adapter function write() passes correct parameters to the API function write_block_vector_data() ... ERROR
test_version (tests.integration.test_fenicsxprecice.TestAdapter)
Test that adapter provides a version ... ok
test_checkpoint_mechanism (tests.integration.test_fenicsxprecice.TestCheckpointing)
Test correct checkpoint storing ... ERROR
test_update_expression_scalar (tests.integration.test_fenicsxprecice.TestExpressionHandling)
Check if a sampling of points on a dolfinx Function interpolated via FEniCSx is matching with the sampling of the ... ERROR
test_update_expression_vector (tests.integration.test_fenicsxprecice.TestExpressionHandling)
Check if a sampling of points on a dolfinx Function interpolated via FEniCSx is matching with the sampling of the ... ERROR
test_convert_scalar_fenicsx_to_precice (tests.unit.test_adapter_core.TestAdapterCore)
Test conversion from function to write_data for scalar ... ERROR
test_convert_vector_fenicsx_to_precice (tests.unit.test_adapter_core.TestAdapterCore)
Test conversion from function to write_data for vector ... ERROR

...

======================================================================
ERROR: test_convert_vector_fenicsx_to_precice (tests.unit.test_adapter_core.TestAdapterCore)
Test conversion from function to write_data for vector
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/benjamin/Programming/fenicsx-adapter/tests/unit/test_adapter_core.py", line 76, in test_convert_vector_fenicsx_to_precice
    data = convert_fenicsx_to_precice(fenicsx_function, local_ids)
  File "/home/benjamin/Programming/fenicsx-adapter/fenicsxprecice/adapter_core.py", line 126, in convert_fenicsx_to_precice
    if fenicsx_function.function_space.num_sub_spaces > 0:  # function space is VectorFunctionSpace
TypeError: '>' not supported between instances of 'method' and 'int'

----------------------------------------------------------------------
Ran 10 tests in 1.218s

FAILED (errors=9)
Test failed: <unittest.runner.TextTestResult run=10 errors=9 failures=0>
error: Test failed: <unittest.runner.TextTestResult run=10 errors=9 failures=0>

I guess the reason is that we did not really bother with the tests (yet). I think we should fix this before merging this PR. The testing might be a bit confusing; if you need help here, I'm available.
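The TypeError in the test output above comes from num_sub_spaces being a method in some dolfinx releases and a plain property in others. A version-agnostic accessor could look like the following sketch (the mock classes only stand in for the two dolfinx API variants; this is not the adapter's actual code):

```python
def get_num_sub_spaces(function_space):
    """Return the number of sub spaces regardless of whether the installed
    dolfinx exposes num_sub_spaces as a method (older releases) or as a
    plain property (newer releases)."""
    attr = function_space.num_sub_spaces
    return attr() if callable(attr) else attr

# Mock spaces standing in for the two dolfinx API variants:
class OldStyleSpace:
    def num_sub_spaces(self):  # method-style API
        return 2

class NewStyleSpace:
    num_sub_spaces = 2  # property-style API

assert get_num_sub_spaces(OldStyleSpace()) == get_num_sub_spaces(NewStyleSpace()) == 2
```

With such a shim, the comparison `get_num_sub_spaces(fenicsx_function.function_space) > 0` works on both API variants.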

Tutorial

I also tried to run the partitioned heat equation, but got an error:

fenicsx-adapter/tutorials/partitioned-heat-conduction/fenicsx$ ./run.sh -d
Traceback (most recent call last):
  File "heat.py", line 37, in <module>
    from problem_setup import get_geometry
  File "/home/benjamin/Programming/fenicsx-adapter/tutorials/partitioned-heat-conduction/fenicsx/problem_setup.py", line 5, in <module>
    from dolfinx.mesh import DiagonalType, create_rectangle
ImportError: cannot import name 'DiagonalType' from 'dolfinx.mesh' (/home/benjamin/.local/lib/python3.8/site-packages/dolfinx/mesh.py)

I remember that this worked before. Is this maybe related to an update on the FEniCS-X side? Which version am I supposed to use for testing? We should also document the compatible version for the adapter (and maybe even check that the right version is used when installing the adapter; see here). Note: You can check the version via python3 -c "import dolfinx; print(dolfinx.__version__)". I'm using version 0.3.0.
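A version guard at import time could catch such mismatches early. A minimal sketch, assuming a hard-coded list of tested versions (the function name and version strings are illustrative, not the adapter's API):

```python
def check_dolfinx_version(actual, supported=("0.3.0",)):
    """Fail fast if the installed dolfinx version is not one the adapter
    was tested against. The supported list here is illustrative."""
    if actual not in supported:
        raise RuntimeError(
            f"dolfinx {actual} found, but the adapter was tested with: "
            + ", ".join(supported))
    return actual

# Called at import time (e.g. with dolfinx.__version__), this would turn
# a cryptic ImportError like the one above into a clear version message.
checked = check_dolfinx_version("0.3.0")
```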

arvedes commented 2 years ago

The main issue here seems to be that I took a newer version of dolfinx because of this issue. I'm using commit 358470227b in my setup. Once there is a new release we should fix the tests and document / check the version as you suggested!

precice-bot commented 2 years ago

This pull request has been mentioned on preCICE Forum on Discourse. There might be relevant details there:

https://precice.discourse.group/t/precice-workshop-2022-introduce-yourself/904/30

BenjaminRodenberg commented 2 years ago

As soon as the tests are working, we can merge this PR. Waiting for #6, because this will make it easier to run the tests on our development version. I can take care of the tests then.

arvedes commented 2 years ago

@BenjaminRodenberg can you have a look at the remaining failing tests? One is failing on purpose, but I'm not sure about the other one.

arvedes commented 2 years ago

@BenjaminRodenberg I had to remove the assert statement that we added when we looked at the code together - it would not work for the case with vector valued data.

I could resolve the SIGSEGV - it was due to wrong usage of the coupling expression. As it is now derived from Function instead of Expression (which no longer exists), it is not callable any more, so I had to use eval here.

There is only one failing test remaining, it's the one with the 'not tested' error. Do you want to fix that before merging?

arvedes commented 2 years ago

I had to use a very dirty fix in adapter_core.convert_fenicsx_to_precice(...) because I didn't find a proper way to evaluate the function at the mesh coordinates. Do you have a better idea how to do that?

What do you think about using the dofs throughout the adapter? I think it'd be the natural way of coupling in dolfinx and it would simplify the code a lot.

boris-martin commented 2 years ago

@arvedes @BenjaminRodenberg How close is this PR to completion? I can give a hand here.

What do you think about using the dofs throughout the adapter? I think it'd be the natural way of coupling in dolfinx and it would simplify the code a lot.

Are all the supported elements ones where a DOF always matches a node value? I read that no C1 (Hermite) elements were supported (which is good for us, I imagine), but I've only ever used Lagrangian elements. But I agree that it's probably simpler. However, setting preCICE connectivity will be painful :see_no_evil:

Apart from that, I noticed there was a todo in the error computation upgrade of the tutorial. I got inspiration from this tutorial and rewrote it:

from dolfinx.fem import Expression, form, Function, FunctionSpace, assemble_scalar
from mpi4py import MPI
from ufl import dx
import ufl

def compute_errors(u_approx, u_ref, v, total_error_tol=10 ** -4, degree_raise=3):

    # Interpolate into a higher-order space before dividing
    W = FunctionSpace(u_approx.function_space.mesh,
                      (v.ufl_element().family(), v.ufl_element().degree() + degree_raise))

    u_approx_W = Function(W)
    u_approx_W.interpolate(u_approx)

    u_ref_W = Function(W)
    if isinstance(u_ref, ufl.core.expr.Expr):
        u_expr = Expression(u_ref, W.element.interpolation_points)
        u_ref_W.interpolate(u_expr)
    else:
        u_ref_W.interpolate(u_ref)

    # Compute the pointwise relative error
    error_normalized = abs(u_ref_W - u_approx_W) / u_ref_W
    # Transform the pointwise error into an expression that can be interpolated
    error_expr = Expression(error_normalized, W.element.interpolation_points)

    # Interpolate into W
    error_pointwise = Function(W)
    error_pointwise.interpolate(error_expr)
    error_pointwise.name = "error"

    # Integrate to compute the L2 error norm
    error_form = form(error_pointwise**2 * dx)
    error_total = (u_approx.function_space.mesh.comm.allreduce(
        assemble_scalar(error_form), MPI.SUM))**0.5

    assert (error_total < total_error_tol)

    return error_total, error_pointwise

It's a bit more involved but it should do the trick. (I don't have write access to this repo yet)

boris-martin commented 2 years ago

This new error computation yields approximately 1e-6 as error compared to 1e-12 in FEniCS. I don't know if it's related to the projection into a higher space. This happens regardless of whether the order is raised or not. But that's still essentially zero I guess. (Relative error is focused on the boundary, which is a good sign?)

IshaanDesai commented 2 years ago
 error_normalized = abs(u_ref_W - u_approx_W) / u_ref_W

Are you sure about the abs here? The previous calculation did not have this. Perhaps it makes sense to remove this and check again.

As far as the error magnitude goes, 1e-06 is actually not that small: we know that we are able to calculate the analytical solution numerically and hence expect the error difference to really be as low as 1e-12. This is a unique feature of this problem, which makes it a case where we can really test harshly for numerical and software errors. So let's figure out why we are not able to get the difference magnitude of 1e-12.

boris-martin commented 2 years ago
 error_normalized = abs(u_ref_W - u_approx_W) / u_ref_W

Are you sure about the abs here? The previous calculation did not have this. Perhaps it makes sense to remove this and check again.

Yep. See https://github.com/precice/tutorials/blob/master/partitioned-heat-conduction/fenics/errorcomputation.py. And it makes sense as signed error is less interesting to see in Paraview. As for the L2 error, since it gets squared anyway, it shouldn't matter.
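A quick numerical check confirms that the abs cannot change the L2 norm, since the integrand is squared either way (synthetic 1-D data):

```python
import numpy as np

# Sample a signed pointwise error on a 1-D grid (synthetic data).
x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
signed_error = np.sin(7 * x) * 1e-6

# The discrete L2 norm is identical with and without abs,
# because the integrand is squared either way.
l2_signed = np.sqrt(np.sum(signed_error**2) * dx)
l2_abs = np.sqrt(np.sum(np.abs(signed_error)**2) * dx)
assert np.isclose(l2_signed, l2_abs)
```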

As far as the error magnitude goes, 1e-06 is actually not that small: we know that we are able to calculate the analytical solution numerically and hence expect the error difference to really be as low as 1e-12. This is a unique feature of this problem, which makes it a case where we can really test harshly for numerical and software errors. So let's figure out why we are not able to get the difference magnitude of 1e-12.

I'm aware that the solution should be exact, but the inaccuracy seems to come from the integration & projection into different function spaces. I'm pretty sure that if we didn't compute integrals of the error but pointwise errors on the nodes, we would get a much smaller error.

arvedes commented 2 years ago

Hi Boris, thanks for your support!

Are all the supported elements ones where a DOF always matches a node value? I read that no C1 (Hermite) elements were supported (which is good for us, I imagine), but I've only ever used Lagrangian elements. But I agree that it's probably simpler. However, setting preCICE connectivity will be painful 🙈

I'm really not an expert in that, unfortunately. Just by looking at the dolfin-x tutorials it seemed to be the natural way. Where do you think the preCICE pain would be?

How close is this PR to completion? I can give a hand here.

It'd be very helpful if you could implement the partitioned-heat-conduction-complex tutorial in dolfin-x; with it we can probably find many of the remaining issues.

I'm not sure how far we should go here; in my opinion we can merge this PR soon, even though there are still quite a few # TODOs in the code.

boris-martin commented 2 years ago

Hi Boris, thanks for your support!

Are all the supported elements ones where a DOF always matches a node value? I read that no C1 (Hermite) elements were supported (which is good for us, I imagine), but I've only ever used Lagrangian elements. But I agree that it's probably simpler. However, setting preCICE connectivity will be painful :see_no_evil:

I'm really not an expert in that, unfortunately. Just by looking at the dolfin-x tutorials it seemed to be the natural way. Where do you think the preCICE pain would be?

Let's stick in 2D for now. Imagine you use regular (Lagrangian) triangle elements and linear cell interpolation mapping (for volume coupling):

  • In first order, everything is good, the DOFs are exactly located at the vertices of the triangle.
  • In second order triangles, you have 6 DOFs (vertices and edge centers). If you set up one triangle with the 3 main vertices, consistent mappings would drop half the information and lose accuracy, and conservative mappings will only apply loads to a subset of points.
  • Idem in 3rd order, with 10 DOFs leading to 10 inner triangles.

For these we could subdivide ourselves (annoying to code but quite straightforward), but how to handle, say, Crouzeix-Raviart elements? On these, DOFs live on edge centers only.

For Hermite elements, some DOFs represent values of the derivatives. This would be hard to support, even without connectivity. A possibility would be to split data and data derivatives as two different things in preCICE. But I'm no expert there.

How close is this PR to completion? I can give a hand here.

It'd be very helpful if you could implement the partitioned-heat-conduction-complex tutorial in dolfin-x; with it we can probably find many of the remaining issues.

I'm not sure how far we should go here; in my opinion we can merge this PR soon, even though there are still quite a few # TODOs in the code.

I can have a look at that tutorial :) Ideally I'd open a PR once this is merged to have cleaner discussions. If needed I can look at the remaining todos. Let's just be sure we document all remaining unsolved things.

arvedes commented 2 years ago

Let's stick in 2D for now. Imagine you use regular (Lagrangian) triangle elements and linear cell interpolation mapping (for volume coupling):

  • In first order, everything is good, the DOFs are exactly located at the vertices of the triangle.
  • In second order triangles, you have 6 DOFs (vertices and edge centers). If you set up one triangle with the 3 main vertices, consistent mappings would drop half the information and lose accuracy, and conservative mappings will only apply loads to a subset of points.
  • Idem in 3rd order, with 10 DOFs leading to 10 inner triangles.

For these we could subdivide ourselves (annoying to code but quite straightforward), but how to handle, say, Crouzeix-Raviart elements? On these, DOFs live on edge centers only.

For Hermite elements, some DOFs represent values of the derivatives. This would be hard to support, even without connectivity. A possibility would be to split data and data derivatives as two different things in preCICE. But I'm no expert there.

But wouldn't it be possible (assuming Lagrangian Elements for now) to directly communicate the DOF-coordinates to preCICE, which does the mapping as usual, and then directly set these values at the DOFs? Similar to this code snippet from the tutorial:

u_D = fem.Function(V)
u_D.interpolate(u_exact)  # u_exact is a python function
tdim = domain.topology.dim
fdim = tdim - 1
domain.topology.create_connectivity(fdim, tdim)
boundary_facets = mesh.exterior_facet_indices(domain.topology) 
bc = fem.dirichletbc(u_D, fem.locate_dofs_topological(V, fdim, boundary_facets))

I can have a look at that tutorial :)

Great!

Ideally I'd open a PR once this is merged to have cleaner discussions. If needed I can look at the remaining todos. Let's just be sure we document all remaining things unsolved.

Good idea. We should also set up a list with the remaining TODOs to limit the scope of the PR. I think most of the goals that @BenjaminRodenberg and I defined when we started working on this are already implemented.

IshaanDesai commented 2 years ago

Let's stick in 2D for now. Imagine you use regular (Lagrangian) triangle elements and linear cell interpolation mapping (for volume coupling)

I would suggest sticking with 2D and also ignoring linear cell interpolation mapping for now; let's first make sure we have the already existing nearest-projection mapping functionality working. Let's get the adapter to work exactly as it works for FEniCS, and then proceed from there.

Regarding handling of higher order elements, the fenics-adapter only defines DOFs which coincide with the physical vertices as part of the coupling interface. This is basically the geometry-based interface. As stated there, it is true that one would lose information by ignoring the higher order DOFs, but this is what we did back then and I think we should first get this to work and then deal with the problem of higher order elements or elements having only edge-based DOFs. This essentially means that we are restricted to linear elements to get accurate results, which is fine for the partitioned-heat-conduction tutorial.
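The geometry-based filtering described above (keeping only DOFs that coincide with physical vertices) can be sketched with plain numpy; the function name and tolerance are illustrative:

```python
import numpy as np

def vertex_dof_mask(dof_coords, vertex_coords, tol=1e-12):
    """Boolean mask selecting the DOFs whose coordinates coincide with a
    physical mesh vertex. Higher-order DOFs (edge midpoints etc.) drop out."""
    # Pairwise distances between every DOF coordinate and every vertex.
    dists = np.linalg.norm(
        dof_coords[:, None, :] - vertex_coords[None, :, :], axis=2)
    return (dists < tol).any(axis=1)

# P2 DOFs on one edge of a unit interval: vertices at 0 and 1, plus a
# midpoint DOF at 0.5 that the mask filters out.
dofs = np.array([[0.0], [0.5], [1.0]])
verts = np.array([[0.0], [1.0]])
mask = vertex_dof_mask(dofs, verts)
```

Only the masked DOFs would then be registered as coupling mesh vertices with preCICE.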

I will write another message with a list of TODOs. An idea to handle this properly would be to open issues for every major TODO and then resolve them via independent PRs to have a clean workflow.

arvedes commented 2 years ago

I would suggest sticking with 2D and also ignoring linear cell interpolation mapping for now; let's first make sure we have the already existing nearest-projection mapping functionality working. Let's get the adapter to work exactly as it works for FEniCS, and then proceed from there.

Good idea, let's not make things too complicated now!

Regarding handling of higher order elements, the fenics-adapter only defines DOFs which coincide with the physical vertices as part of the coupling interface. This is basically the geometry-based interface. As stated there, it is true that one would lose information by ignoring the higher order DOFs, but this is what we did back then and I think we should first get this to work and then deal with the problem of higher order elements or elements having only edge-based DOFs. This essentially means that we are restricted to linear elements to get accurate results, which is fine for the partitioned-heat-conduction tutorial.

I think this was a good decision for the legacy fenics-adapter; for the new one I've got the feeling that we can simplify things a lot by using the DOFs throughout the adapter. But let's get back to that later!

I will write another message with a list of TODOs. An idea to handle this properly would be to open issues for every major TODO and then resolve them via independent PRs to have a clean workflow.

Great, thanks!

IshaanDesai commented 2 years ago

I am trying to get a hang of everything that has been changed and I realized that vector data cannot be handled, right? The function convert_fenicsx_to_precice does not work for vector data, and this is seen from the failing unit test for the same function. Any ideas what is going wrong there?

arvedes commented 2 years ago

Is this the not-implemented-error? I think Benjamin and I never touched that.

BenjaminRodenberg commented 2 years ago

I just tried to catch up a bit on this PR. Nice that there is so much going on here and good to see @boris-martin joining the party :tada:

One important feature of the fenics-adapter that was not mentioned above (it's also poorly documented): the interpolation routine uses segregated RBF interpolation. The algorithm is described in Lindner, F., Mehl, M., & Uekermann, B. (2017), Radial basis function interpolation for black-box multi-physics simulations, and it is also available in preCICE.

Short explanation of the algorithm

You create a least-squares fit to the data on the coupling mesh. This gives you a second-degree global polynomial (see here). Usually this will still give you some interpolation error (if you are not very lucky). To eliminate this error and create an interpolation that goes exactly through all the available data, you use an RBF interpolation of the error. The resulting interpolant is then polynomial fit + RBF interpolation of the error.
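A minimal 1-D sketch of the segregated algorithm described above, assuming a Gaussian RBF for the error interpolation (the kernel choice and shape parameter are illustrative, not what the adapter uses):

```python
import numpy as np

def fit_poly_plus_rbf(x, y, eps=5.0):
    """Least-squares quadratic fit plus a Gaussian RBF interpolation of the
    remaining error, so the combined interpolant matches the data exactly."""
    # Second-degree global polynomial via least squares.
    V = np.vander(x, 3)  # columns: x^2, x, 1
    coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)
    residual = y - V @ coeffs
    # Gaussian RBF interpolation of the residual (square, SPD system).
    A = np.exp(-(eps * (x[:, None] - x[None, :]))**2)
    weights = np.linalg.solve(A, residual)

    def interpolant(xq):
        xq = np.atleast_1d(xq)
        p = np.vander(xq, 3) @ coeffs
        r = np.exp(-(eps * (xq[:, None] - x[None, :]))**2) @ weights
        return p + r
    return interpolant

# The interpolant reproduces the data exactly at the given points:
x = np.linspace(0.0, 1.0, 9)
y = 1.0 + 2 * x + 3 * x**2 + 0.1 * np.sin(20 * x)
f = fit_poly_plus_rbf(x, y)
assert np.allclose(f(x), y)
```

For data that is itself a second-degree polynomial, the residual is (numerically) zero and the polynomial part alone already represents the solution exactly, also between the nodes.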

Special case: partitioned heat equation

With the partitioned heat equation we are actually lucky. This means: There is no error in the polynomial fit. Remember: Our analytical solution is second degree. Therefore, our second-degree least squares fit is exact. Nice feature: We are not only exact on the DoFs, but also between the DoFs. Since our expression is a function, we now have an exact representation of the solution at the coupling interface. This should also work with higher-order elements.

This is, of course, a very special case: it only works that nicely for a second-degree polynomial solution. But as @IshaanDesai mentioned: this case is mainly about high accuracy and testing, not so much about generality (however, it also works for the complex geometry case, because it is independent of the mesh).

Reasons for choosing this algorithm

I chose this algorithm mainly for two reasons:

  1. With the polynomial fit we can represent the analytical solution of the heat equation exactly.
  2. RBF allows us to perform interpolation without having to provide connectivity information. An alternative I considered here is a piecewise linear interpolation (basically nearest projection-like), but this would have resulted in a more complicated implementation.

Possible pitfall for FEniCS-X

I'm not sure about the implementation in FEniCS-X. When I implemented this for FEniCS I relied on the fact that the expression is an object that hides the finite elements of participant A from B and vice-versa. As long as the expression represents the exact solution everything is fine. I don't know whether this is still possible for FEniCS-X. If you cannot provide a functional representation of the data, but you have to use a pointwise representation you will need some additional magic to ensure that you do not just perform a piecewise linear interpolation. A piecewise linear interpolation will break as soon as you use higher-order elements, because they need information on mesh-nodes and between mesh-nodes.

Some ideas for debugging

boris-martin commented 2 years ago

I am trying to get a hang of everything that has been changed and I realized that vector data cannot be handled right? The function convert_fenicsx_to_precice does not work for vector data, and this is seen from the failing unit test for the same function. Any ideas what is going wrong there?

I think I found the issue. Essentially it boils down to "we don't handle the vector case": the (so-called) dirty fix from @arvedes didn't take the vector case into account when building the mask, so data was cropped. I haven't fixed it yet because there are still details to work on, but I'll handle it in the coming days.

By the way, speaking of the dirty fix, I had a look at the changelog of Dolfinx and it says:

dolfinx.fem.Function.compute_point_values has been deprecated. Interpolation into a CG-1 is now the way of getting vertex values.

This could replace the dirty fix, I'll think about it. (compute_point_values is apparently the equivalent of FEniCS's compute_vertex_values, used in the FEniCS adapter).

If we project into a CG-1 space, mapping is easy: in dimension d, the i-th component at vertex n would be given by f_cg1.x.array[d*n+i]. But we first need to check that the DOF ordering in that CG1 space matches the vertex ordering. Or we could build the mesh from a "coarsened" function space instead of the mesh itself. Further investigations needed :)
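The d*n+i indexing can be checked on a plain numpy stand-in for f_cg1.x.array, assuming the interleaved component layout (this is only an illustration of the indexing arithmetic, not dolfinx code):

```python
import numpy as np

# Synthetic stand-in for f_cg1.x.array: n vertices, d components,
# stored interleaved as [v0_x, v0_y, v1_x, v1_y, ...].
n, d = 4, 2
flat = np.arange(n * d, dtype=float)

def component(flat, vertex, i, d=2):
    """i-th component of the vector at the given vertex, assuming the
    interleaved layout described above."""
    return flat[d * vertex + i]

# Reshaping to (n, d) gives the same values, which is the layout the
# adapter would hand to the vector write functions of the bindings:
as_matrix = flat.reshape(n, d)
```

If the DOF ordering of the CG1 space does not match the vertex ordering, a permutation would have to be applied before this reshape is valid.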