FEniCS / dolfinx

Next generation FEniCS problem solving environment
https://fenicsproject.org
GNU Lesser General Public License v3.0

Lagrange Multiplier for the Poisson equation with pure Neumann boundary conditions #3121

Open WjjSteve opened 4 months ago

WjjSteve commented 4 months ago

Describe new/missing feature

It is known that the Poisson equation has infinitely many solutions when pure Neumann boundary conditions are imposed. In legacy DOLFIN, a Lagrange multiplier is used to add an extra constraint, which makes the system have a unique solution whose average is zero; see more details in the demo.

However, the implementation of the Lagrange multiplier involves the Real function space, which is not available in DOLFINx, so this has been a problem. I would now like to propose a way to implement the Lagrange multiplier that requires no new features and uses only functionality already in DOLFINx.

Let us consider the following variational form of the Poisson equation with a Lagrange multiplier (the formulation of the legacy demo): find $(u, c) \in V \times \mathbb{R}$ such that

$$
\int_{\Omega} \nabla u \cdot \nabla v \, {\rm d}x + \int_{\Omega} c v \, {\rm d}x + \int_{\Omega} u d \, {\rm d}x = \int_{\Omega} f v \, {\rm d}x + \int_{\Gamma_{N}} g v \, {\rm d}s \quad \forall \, (v, d) \in V \times \mathbb{R}.
$$

We can see that, when we apply the Lagrange multiplier, the matrix arising from the LHS is just the matrix given by the regular Poisson solver with one more column and row appended, because the basis function of the Real space is just the constant 1. We can easily obtain this extra vector by defining the following linear form in DOLFINx:

`L = v * dx`, where `v` is the test function in the function space. If we assemble the vector based on this linear form, we obtain the vector that is to be added as the $(m+1)$-th row and $(m+1)$-th column, if the original matrix is $m \times m$.
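In matrix terms (notation added here for clarity: $A$ the stiffness matrix, $b$ the border vector with entries $b_i = \int_{\Omega} \phi_i \, {\rm d}x$, $F$ the assembled RHS, $c$ the multiplier), the augmented system is

$$
\begin{pmatrix} A & b \\ b^{T} & 0 \end{pmatrix}
\begin{pmatrix} u \\ c \end{pmatrix}
=
\begin{pmatrix} F \\ 0 \end{pmatrix}.
$$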

I have a demo showing the implementation and performance of the idea. It is adapted from the Poisson demo in the DOLFINx library, but with all Dirichlet BCs removed, which makes the system ill-posed. I can obtain a unique solution with the method described above, and the method can also be used together with DOLFINX_MPC without conflict. All the code is written against DOLFINx v0.7.3.

I believe the best way to implement the above idea is to use nested operators, since in that case the solution vector is kept separate from the additional unknown and we can easily split the solution. I also have some code using nested matrices and vectors, which I can share as well. However, I cannot use a direct solver on a nested operator, and the iterative solvers sometimes produce bad solutions (the performance is not stable). I'm not that familiar with PETSc, but I guess this can be resolved by using a proper PC type and KSP type. Also, the current code only works in serial.
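For reference, a minimal serial sketch of the nested-operator variant (untested; it assumes the matrix `A` and the vectors `bt` and `b` assembled as in the demo below, and all other names are illustrative):

# Dense m x 1 column block holding the border vector
m, _ = A.getSize()
B = PETSc.Mat().createDense([m, 1])
B.setUp()
B.setValues(list(range(m)), [0], bt.getArray())
B.assemble()

# Virtual transpose for the bottom row, plus an explicit 1 x 1 zero block
Bt = PETSc.Mat().createTranspose(B)
Z = PETSc.Mat().createDense([1, 1])
Z.setUp()
Z.setValue(0, 0, 0.0)
Z.assemble()

# Augmented operator [[A, B], [B^T, 0]] as a MATNEST and a matching
# nested RHS [b, 0]; the multiplier stays in its own block, so the
# solution splits naturally
A_nest = PETSc.Mat().createNest([[A, B], [Bt, Z]])
b_nest = PETSc.Vec().createNest([b, PETSc.Vec().createSeq(1)])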

Any comments or suggestions are welcome.

Suggested user interface

# This demo is adapted from https://github.com/FEniCS/dolfinx/blob/v0.7.3/python/demo/demo_poisson.py,
# but with the Dirichlet BC removed

# ## Equation and problem definition
#
# For a domain $\Omega \subset \mathbb{R}^n$ with boundary $\partial
# \Omega = \Gamma_{N}$, the Poisson equation with pure Neumann boundary
# conditions reads:
#
# $$
# \begin{align}
# - \nabla^{2} u &= f \quad {\rm in} \ \Omega, \\
# \nabla u \cdot n &= g \quad {\rm on} \ \Gamma_{N},
# \end{align}
# $$
#
# where $f$ and $g$ are input data and $n$ denotes the outward directed
# boundary normal. The variational problem reads: find $u \in V$ such
# that
#
# $$
# a(u, v) = L(v) \quad \forall \ v \in V,
# $$
#
# where $V$ is a suitable function space and
#
# $$
# \begin{align}
# a(u, v) &:= \int_{\Omega} \nabla u \cdot \nabla v \, {\rm d} x, \\
# L(v)    &:= \int_{\Omega} f v \, {\rm d} x + \int_{\Gamma_{N}} g v \, {\rm d} s.
# \end{align}
# $$
#
# The expression $a(u, v)$ is the bilinear form and $L(v)$
# is the linear form. 
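#
# Because only Neumann data are prescribed, the solution is determined
# only up to an additive constant. Following the legacy DOLFIN Neumann
# demo, the constraint $\int_{\Omega} u \, {\rm d} x = 0$ is enforced
# with a Lagrange multiplier $c \in \mathbb{R}$: find $(u, c) \in V
# \times \mathbb{R}$ such that
#
# $$
# a(u, v) + \int_{\Omega} c v \, {\rm d} x + \int_{\Omega} u d \, {\rm d} x
# = L(v) \quad \forall \ (v, d) \in V \times \mathbb{R}.
# $$
#
# Below, the extra row and column of the resulting matrix are assembled
# by hand instead of through a 'Real' function space.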

from mpi4py import MPI
from petsc4py import PETSc

# +
import numpy as np

import ufl
from dolfinx import la
from dolfinx import fem, io, mesh, plot
from dolfinx.fem.petsc import create_matrix, create_vector, assemble_vector, assemble_matrix_mat
from ufl import ds, dx, grad, inner

# -

# Note that it is important to first `from mpi4py import MPI` to
# ensure that MPI is correctly initialised.

# We create a rectangular {py:class}`Mesh <dolfinx.mesh.Mesh>` using
# {py:func}`create_rectangle <dolfinx.mesh.create_rectangle>`, and
# create a finite element {py:class}`function space
# <dolfinx.fem.FunctionSpace>` $V$ on the mesh.

# +
msh = mesh.create_rectangle(comm=MPI.COMM_WORLD,
                            points=((0.0, 0.0), (2.0, 1.0)), n=(32, 16),
                            cell_type=mesh.CellType.triangle)
V = fem.functionspace(msh, ("Lagrange", 1))
# -

# The second argument to {py:func}`functionspace
# <dolfinx.fem.functionspace>` is a tuple `(family, degree)`, where
# `family` is the finite element family, and `degree` specifies the
# polynomial degree. In this case `V` is a space of continuous Lagrange
# finite elements of degree 1.
#
# Unlike the original demo, no Dirichlet boundary condition is applied:
# the whole boundary carries Neumann data, so the assembled system is
# singular and is augmented with the Lagrange multiplier row and column
# below.

# +
u = ufl.TrialFunction(V)
v = ufl.TestFunction(V)
x = ufl.SpatialCoordinate(msh)
f = 10 * ufl.exp(-((x[0] - 0.5) ** 2 + (x[1] - 0.5) ** 2) / 0.02)
g = ufl.sin(5 * x[0])
a = inner(grad(u), grad(v)) * dx
L = inner(f, v) * dx + inner(g, v) * ds

# Create the bilinear and linear forms for the Poisson equation
# and assemble the matrix and vector
a_form = fem.form(a)
A = create_matrix(a_form)
L_form = fem.form(L)
b = create_vector(L_form)
assemble_matrix_mat(A, a_form)
A.assemble()
b.set(0)
assemble_vector(b, L_form)
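# Note: this demo is serial only. In parallel, assembled vectors would
# additionally need ghost contributions accumulated, i.e. the standard
# DOLFINx pattern
#   b.ghostUpdate(addv=PETSc.InsertMode.ADD, mode=PETSc.ScatterMode.REVERSE)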

# Create the linear form v*dx and assemble the corresponding vector;
# this vector supplies the extra row and column appended to the
# stiffness matrix of the Poisson equation
Lt = fem.form(v * dx)
bt = create_vector(Lt)
bt.set(0)
assemble_vector(bt, Lt)

# Build the augmented matrix from the stiffness matrix (despite the
# name, this is a single monolithic matrix, not a PETSc MATNEST; the
# dense copy below is simple but memory-hungry and serial-only):
# nested_A = [A    bt
#             bt'  0]
m, n = A.getSize()
nested_A = PETSc.Mat().create()
nested_A.setSizes([m+1, n+1])
nested_A.setFromOptions()
nested_A.setUp()
idxm = list(range(m))
idxn = list(range(n))
nested_A.setValues(idxm, idxn, A.getValues(idxm, idxn))
nested_A.setValues(idxm, [n], bt.getValues(list(range(bt.getSize()))))
nested_A.setValues([m], idxn, bt.getValues(list(range(bt.getSize()))))
nested_A.setValue(m, n, 0.0)
nested_A.assemblyBegin()
nested_A.assemblyEnd()

# Build the augmented RHS vector
# nested_b = [b
#             0]
nested_b = PETSc.Vec().create()
nested_b.setSizes(m + 1)
nested_b.setUp()
nested_b.setValues(idxm, b.getValues(list(range(b.getSize()))))
nested_b.setValue(m, 0.0)
nested_b.assemblyBegin()
nested_b.assemblyEnd()

# Solve the system and obtain the solution; the solve is done in
# place, so nested_b holds the solution afterwards
uh = fem.Function(V)
solver = PETSc.KSP().create(msh.comm)
solver.setOperators(nested_A)
solver.setFromOptions()
solver.solve(nested_b, nested_b)

# Copy the first m entries (the primal unknowns) into uh, discarding
# the multiplier stored in the last entry. The values must come from
# nested_b, which holds the solution, not from the RHS vector b.
u_vec = la.create_petsc_vector_wrap(uh.x)
u_vec.setValues(idxm, nested_b.getValues(list(range(nested_b.getSize() - 1))))
u_vec.assemblyBegin()
u_vec.assemblyEnd()

# +
with io.XDMFFile(msh.comm, "out_poisson/poisson.xdmf", "w") as file:
    file.write_mesh(msh)
    file.write_function(uh)
# -

# The solution can be displayed using [pyvista](https://docs.pyvista.org/).

# +
try:
    import pyvista
    cells, types, x = plot.vtk_mesh(V)
    grid = pyvista.UnstructuredGrid(cells, types, x)
    grid.point_data["u"] = uh.x.array.real
    grid.set_active_scalars("u")
    plotter = pyvista.Plotter()
    plotter.add_mesh(grid, show_edges=True)
    warped = grid.warp_by_scalar()
    plotter.add_mesh(warped)
    if pyvista.OFF_SCREEN:
        pyvista.start_xvfb(wait=0.1)
        plotter.screenshot("uh_poisson.png")
    else:
        plotter.show()
except ModuleNotFoundError:
    print("'pyvista' is required to visualise the solution")
    print("Install 'pyvista' with pip: 'python3 -m pip install pyvista'")
# -
nate-sime commented 4 months ago

If others agree this is a good idea to include as a demo, I can help you with the matrix assembly.

bleyerj commented 4 months ago

I think this would be a good idea if we can come up with an efficient linear solver (using PETSc Schur complement fieldsplit?)

WjjSteve commented 4 months ago

> I think this would be a good idea if we can come up with an efficient linear solver (using PETSc Schur complement fieldsplit?)

Yes, I think the solver is the main concern that needs consideration. I have tried defining the operator as a nested matrix and then solving with the PC type "fieldsplit" and KSP solvers like "gmres" and "cg", but the performance is not as good as "LU".
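For concreteness, a sketch of the kind of fieldsplit configuration being discussed, assuming the operator is assembled as a MATNEST (named `A_nest` here for illustration) so that PCFIELDSPLIT can take its two splits from the nest blocks; the option names are standard PETSc, but this exact setup is untested:

from petsc4py import PETSc

opts = PETSc.Options()
opts["ksp_type"] = "gmres"
opts["pc_type"] = "fieldsplit"
opts["pc_fieldsplit_type"] = "schur"
opts["pc_fieldsplit_schur_fact_type"] = "full"
# AMG on the Laplacian block; the Schur complement block is 1x1,
# so an unpreconditioned Krylov solve on it converges immediately
opts["fieldsplit_0_ksp_type"] = "preonly"
opts["fieldsplit_0_pc_type"] = "hypre"
opts["fieldsplit_1_ksp_type"] = "gmres"
opts["fieldsplit_1_pc_type"] = "none"

solver = PETSc.KSP().create(A_nest.getComm())
solver.setOperators(A_nest)
solver.setFromOptions()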

jhale commented 4 months ago

@WjjSteve I wouldn't necessarily expect the performance (runtime) to be as good as LU, particularly for small problems. Instead the key questions are:

  1. Does it solve in a reasonable number of inner and outer iterations (`-ksp_view`)? Reasonable would be e.g. 50, not many 100s or 1000s.
  2. Does the number of iterations stay roughly bounded on mesh refinement? (A check for this is sketched below.)
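A minimal way to check point 2 (a sketch only; `solve_on_mesh` is a hypothetical helper that assembles and solves the augmented system on an n x n mesh and returns the KSP object):

for n in (32, 64, 128, 256):
    ksp = solve_on_mesh(n)  # hypothetical helper, not part of the demo
    print(n, ksp.getIterationNumber())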
jhale commented 4 months ago

Also, does your example work when running using MPI?

WjjSteve commented 4 months ago

> @WjjSteve I wouldn't necessarily expect the performance (runtime) to be as good as LU, particularly for small problems. Instead the key questions are:
>
>   1. Does it solve in a reasonable number of inner and outer iterations (`-ksp_view`)? Reasonable would be e.g. 50, not many 100s or 1000s.
>   2. Does the number of iterations stay roughly bounded on mesh refinement?

I'm not sure I fully understand your question. I tried it with "gmres" and "hypre" on a 100 by 100 mesh, using the function KSP.getIterationNumber(), and the result is just 1. Maybe I didn't check it in the correct way; please let me know if you want me to test anything else.
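An iteration count of 1 can also indicate that the Krylov solve stopped immediately rather than converged, so it may be worth confirming convergence explicitly with standard petsc4py calls, using the `solver` object from the demo:

reason = solver.getConvergedReason()  # positive values indicate convergence
print(reason, solver.getIterationNumber(), solver.getResidualNorm())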

> Also, does your example work when running using MPI?

Unfortunately no, this is serial code only.

jhale commented 4 months ago

You'll need to post your example in full for the preconditioners - could you open a pull request with your example as a demo_*.py file?

We can't accept features/demos that do not work using MPI. Perhaps there are ways to get this approach working; it would need some thought.

jhale commented 4 months ago

We're going to take a look at the MPI aspect of this for you @WjjSteve.

WjjSteve commented 4 months ago

> We're going to take a look at the MPI aspect of this for you @WjjSteve.

Sure, I understand. Maybe it is easier to use MPI when nested operators are applied; I will make a demo for that and open a pull request.

WjjSteve commented 4 months ago

> We're going to take a look at the MPI aspect of this for you @WjjSteve.

Hello, I have made a demo using nested matrices and vectors for the above algorithm. Should I make a pull request as a new branch? There are a lot of branches and tags, and I don't want to mess anything up.