firedrakeproject / firedrake

Firedrake is an automated system for the portable solution of partial differential equations using the finite element method (FEM)
https://firedrakeproject.org

BUG: Failure to compose SCPC with geometric multigrid #2886

Open · pbrubeck opened this issue 1 year ago

pbrubeck commented 1 year ago

The following script attempts to use static condensation (SCPC) as the outer solver and geometric multigrid as the inner solver for a continuous Lagrange formulation of the Poisson equation.

from firedrake import *
from firedrake.petsc import PETSc
PETSc.Sys.popErrorHandler()

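# Two-level mesh hierarchy: a 10x10 coarse mesh refined once, with the
# problem posed on the resulting 20x20 fine mesh.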
nx = 10
refine = 1
mesh = UnitSquareMesh(nx, nx)
mh = MeshHierarchy(mesh, refine)
mesh = mh[-1]

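# Degree-3 Lagrange element, split into its interior and facet parts so that
# SCPC can eliminate the interior degrees of freedom.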
degree = 3
e = FiniteElement("Lagrange", cell=mesh.ufl_cell(), degree=degree)
V = FunctionSpace(mesh, MixedElement([e["interior"], e["facet"]]))

test = sum(TestFunctions(V))
trial = sum(TrialFunctions(V))
a = inner(grad(test), grad(trial)) * dx
L = inner(test, Constant(1)) * dx
bcs = [DirichletBC(V.sub(1), 0, "on_boundary")]

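# Outer solver: matrix-free SCPC eliminating the interior field (field 0);
# the condensed facet system is solved with GMRES preconditioned by
# geometric multigrid.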
sparams = {
    "ksp_type": "preonly",
    "pc_type": "python",
    "mat_type": "matfree",
    "pc_python_type": "firedrake.SCPC",
    "pc_sc_eliminate_fields": "0",
    "condensed_field": {
        "mat_type": "aij",
        "ksp_monitor": None,
        "ksp_type": "gmres",
        "pc_type": "mg",
        "mg_coarse": {
            "ksp_type": "preonly",
            "pc_type": "none",},
        "mg_levels": {
            "ksp_type": "preonly",
            "pc_type": "none",},
    },
}

uh = Function(V)
problem = LinearVariationalProblem(a, L, uh)
solver = LinearVariationalSolver(problem, solver_parameters=sparams, options_prefix="")
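# Solve; the expected PETSc error is caught so that the preconditioner can
# still be inspected with pc.view() afterwards.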
try:
    solver.solve()
except PETSc.Error:
    pass
solver.snes.ksp.pc.view()

The code fails with this error message:

[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Nonconforming object sizes
[0]PETSC ERROR: Preconditioner number of local rows 2921 does not equal input vector size 761
[0]PETSC ERROR: #1 PCApply() at /scratch/brubeckmarti/firedrake-dev-20221228-mpich/src/petsc/src/ksp/pc/interface/precon.c:434
[0]PETSC ERROR: #2 KSP_PCApply() at /scratch/brubeckmarti/firedrake-dev-20221228-mpich/src/petsc/include/petsc/private/kspimpl.h:378
[0]PETSC ERROR: #3 KSPSolve_PREONLY() at /scratch/brubeckmarti/firedrake-dev-20221228-mpich/src/petsc/src/ksp/ksp/impls/preonly/preonly.c:21
[0]PETSC ERROR: #4 KSPSolve_Private() at /scratch/brubeckmarti/firedrake-dev-20221228-mpich/src/petsc/src/ksp/ksp/interface/itfunc.c:897
[0]PETSC ERROR: #5 KSPSolve() at /scratch/brubeckmarti/firedrake-dev-20221228-mpich/src/petsc/src/ksp/ksp/interface/itfunc.c:1069
[0]PETSC ERROR: #6 PCMGMCycle_Private() at /scratch/brubeckmarti/firedrake-dev-20221228-mpich/src/petsc/src/ksp/pc/impls/mg/mg.c:28
[0]PETSC ERROR: #7 PCMGMCycle_Private() at /scratch/brubeckmarti/firedrake-dev-20221228-mpich/src/petsc/src/ksp/pc/impls/mg/mg.c:84
[0]PETSC ERROR: #8 PCApply_MG_Internal() at /scratch/brubeckmarti/firedrake-dev-20221228-mpich/src/petsc/src/ksp/pc/impls/mg/mg.c:611

I suspect that the routines to construct the interpolators are not set up, so the fine KSP receives a coarse RHS. Another possibility is that the coarse matrix is discretized directly from the fine Jacobian, as it could be that coarsening is not implemented for SLATE Tensors.
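For reference, the same multigrid settings applied to the plain (non-split) Lagrange discretisation on this hierarchy should coarsen the Jacobian as expected, which suggests the failure is specific to composing multigrid inside SCPC. A minimal sketch for comparison, reusing mesh and degree from the script above; the names V_cg, a_cg, L_cg, bc_cg and w, as well as the Chebyshev/Jacobi smoother choice, are introduced here purely for illustration:

# Plain continuous Lagrange discretisation of the same Poisson problem,
# with geometric multigrid applied directly to the (non-condensed) operator.
V_cg = FunctionSpace(mesh, "Lagrange", degree)
u, v = TrialFunction(V_cg), TestFunction(V_cg)
a_cg = inner(grad(v), grad(u)) * dx
L_cg = inner(v, Constant(1)) * dx
bc_cg = DirichletBC(V_cg, 0, "on_boundary")
w = Function(V_cg)
solve(a_cg == L_cg, w, bcs=bc_cg,
      solver_parameters={"ksp_type": "gmres",
                         "pc_type": "mg",
                         "mg_levels_ksp_type": "chebyshev",
                         "mg_levels_pc_type": "jacobi"})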

pbrubeck commented 1 year ago

This is the output of solver.snes.ksp.pc.view()

PC Object: () 1 MPI process
  type: python
    Python: firedrake.SCPC
  Firedrake custom preconditioner SCPC
  Solving linear system using static condensation.
  KSP Object: (condensed_field_) 1 MPI process
    type: gmres
      restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
      happy breakdown tolerance 1e-30
    maximum iterations=10000, initial guess is zero
    tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
    left preconditioning
    using PRECONDITIONED norm type for convergence test
  PC Object: (condensed_field_) 1 MPI process
    type: mg
      type is MULTIPLICATIVE, levels=2 cycles=v
        Cycles per PCApply=1
        Not using Galerkin computed coarse grid matrices
    Coarse grid solver -- level 0 -------------------------------
      KSP Object: (condensed_field_mg_coarse_) 1 MPI process
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (condensed_field_mg_coarse_) 1 MPI process
        type: none
        linear system matrix = precond matrix:
        Mat Object: 1 MPI process
          type: seqaij
          rows=2921, cols=2921
          total: nonzeros=46601, allocated nonzeros=46601
          total number of mallocs used during MatSetValues calls=0
            using I-node routines: found 1677 nodes, limit used is 5
    Down solver (pre-smoother) on level 1 -------------------------------
      KSP Object: (condensed_field_mg_levels_1_) 1 MPI process
        type: preonly
        maximum iterations=2, nonzero initial guess
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (condensed_field_mg_levels_1_) 1 MPI process
        type: none
        linear system matrix = precond matrix:
        Mat Object: (condensed_field_) 1 MPI process
          type: seqaij
          rows=2921, cols=2921
          total: nonzeros=46601, allocated nonzeros=46601
          total number of mallocs used during MatSetValues calls=0
            using I-node routines: found 1677 nodes, limit used is 5
    Up solver (post-smoother) same as down solver (pre-smoother)
    linear system matrix = precond matrix:
    Mat Object: (condensed_field_) 1 MPI process
      type: seqaij
      rows=2921, cols=2921
      total: nonzeros=46601, allocated nonzeros=46601
      total number of mallocs used during MatSetValues calls=0
        using I-node routines: found 1677 nodes, limit used is 5
  Locally reconstructing unknowns.
  linear system matrix = precond matrix:
  Mat Object: () 1 MPI process
    type: python
    rows=3721, cols=3721
        Python: firedrake.matrix_free.operators.ImplicitMatrixContext
      Firedrake matrix-free operator ImplicitMatrixContext

Seems like the coarse space has the same dimension as the fine space.
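For reference, these sizes match the dimensions of the facet (trace) space on the two levels of the hierarchy, which can be checked directly. A quick sketch, reusing mh and degree from the script above (e_m and V_facet are names introduced here for illustration):

# Dimension of the condensed (facet) space on the coarse and fine meshes.
# With degree = 3 this should print 761 on the coarse mesh and 2921 on the
# fine mesh, i.e. the sizes appearing in the error message and the PC view.
for m in (mh[0], mh[-1]):
    e_m = FiniteElement("Lagrange", cell=m.ufl_cell(), degree=degree)
    V_facet = FunctionSpace(m, e_m["facet"])
    print(m.num_cells(), V_facet.dim())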

pbrubeck commented 1 year ago

Could it be that the Jacobian does not get coarsened when it is a slate.Tensor?