ddemidov / amgcl

C++ library for solving large sparse linear systems with algebraic multigrid method
http://amgcl.readthedocs.org/
MIT License

Can I set the number of levels in AMGCL? #152

Closed doumifeiniao closed 4 years ago

doumifeiniao commented 4 years ago

Hi Denis,

Thanks for your great work on AMGCL; I have found it very useful. But I wonder if there is a way to set the number of levels in AMGCL. Currently, the number of levels seems to be chosen automatically based on the size of the linear system.

Besides, I wonder if there is a list of keys for prm.put("key", val). I could not find such a list in the AMGCL documentation.

Best regards, Wenhao Xu

ddemidov commented 4 years ago

See https://amgcl.readthedocs.io/en/latest/design.html#parameters for a description of how the params struct is defined in amgcl. If you are using amgcl::make_solver<amgcl::amg, amgcl::solver::cg> as a solver, then params has two fields: precond for amg and solver for cg, and those fields have types amg::params and cg::params, respectively.

So, to answer your first question, you can control the depth of the amg hierarchy by setting params.precond.max_levels. Why exactly would you want to do that, though?

doumifeiniao commented 4 years ago

Hi Denis,

Thanks for your rapid reply; your solution works. The reason I want to set precond.max_levels is that AMGCL fails to converge for my current linear system, so I am trying to adjust different AMGCL parameters, including reducing the number of levels.

ddemidov commented 4 years ago

Usually changing the number of levels does not help with convergence, unless the problem is defined only up to an additive constant and is therefore singular (for example, when there are only Neumann-type boundary conditions). In that case reducing the number of levels may help, simply because amgcl will not use a direct solver when the coarsest level is too large, and will apply a couple of smoother iterations instead. But you can achieve the same effect more easily with prm.precond.direct_coarse.

What kind of problem are you solving?

doumifeiniao commented 4 years ago

Hi Denis,

I have used prm.put("precond.direct_coarse", false). I am solving the 3D Helmholtz equation with the finite-difference frequency-domain method. It seems difficult for the multigrid method to converge for this problem.

ddemidov commented 4 years ago

I am solving 3D Helmholtz equation by finite difference frequency domain method

Does that mean the system matrix has a 3x3 block structure? You can try to tell the coarsening about that with prm.precond.coarsening.aggr.block_size=3. Or, you can use a block-valued backend (backend::builtin<amgcl::static_matrix<double, 3, 3>> instead of backend::builtin<double>).

doumifeiniao commented 4 years ago

Yeah, the system matrix does have a 3x3 block structure. But using prm.put("precond.coarsening.aggr.block_size", 3) doesn't make much difference for my problem. However, I found that fgmres + damped_jacobi in AMGCL works perfectly for my problem when using prm.put("precond.max_levels", 1). I really appreciate your help; I have learned a lot about AMGCL from this discussion. And thanks again for your great work on AMGCL.

ddemidov commented 4 years ago

With max_levels=1 you are basically getting fgmres preconditioned with single-level damped Jacobi. You can achieve the same with amgcl::relaxation::as_preconditioner<Backend, amgcl::relaxation::damped_jacobi> instead of amgcl::amg.

Can you share your system matrix and the RHS in the MatrixMarket format? (you can use amgcl::io::mm_write for that). I'd like to see if there is a way to make it work in a multilevel setup.

doumifeiniao commented 4 years ago

Hi Denis,

The corresponding system matrix and RHS in the MatrixMarket format can be downloaded at "https://www.researchgate.net/publication/342344344_FDFD_linear_equations". The number of degrees of freedom (DOF) is 1e6. It would be great if you could find a way to make it work in a multilevel setup.

ddemidov commented 4 years ago

I cannot make it work, unfortunately. The fgmres preconditioned with damped_jacobi that you used seems to be stalling at a relative residual of around 1e-3 (the code for the solver_complex is here):

$ ./solver_complex -B -A FDFD_system_matrix.bin -f FDFD_RHS.bin \
    -p solver.type=fgmres precond.type=damped_jacobi -1
Solver
======
Type:             FGMRES(30)
Unknowns:         1000000
Memory footprint: 946.06 M

Preconditioner
==============
Relaxation as preconditioner
  Unknowns: 1000000
  Nonzeros: 26463592
  Memory:   628.59 M

Iterations: 100
Error:      0.00274439

[Profile:      11.484 s] (100.00%)
[  reading:     0.315 s] (  2.75%)
[  setup:       0.462 s] (  4.03%)
[  solve:      10.700 s] ( 93.18%)

And in fact, the damped_jacobi preconditioner does not do much at all. The dummy preconditioner (that is, no preconditioner at all) gives similar (even slightly better) results:

$ ./solver_complex -B -A FDFD_system_matrix.bin -f FDFD_RHS.bin \
    -p solver.type=fgmres precond.class=dummy
Solver
======
Type:             FGMRES(30)
Unknowns:         1000000
Memory footprint: 946.06 M

Preconditioner
==============
identity matrix as preconditioner
  unknowns: 1000000
  nonzeros: 26463592

Iterations: 100
Error:      0.0026955

[Profile:      10.921 s] (100.00%)
[  reading:     0.313 s] (  2.87%)
[  setup:       0.437 s] (  4.00%)
[  solve:      10.166 s] ( 93.09%)

I don't have any experience with this kind of problem, but it looks like creating a preconditioner for a Helmholtz problem is a non-trivial task: e.g. https://doi.org/10.1016/j.apnum.2009.09.003.

As a side note, the system does not look like a 3x3 block-structured one (also the number of DOFs is not divisible by 3).

doumifeiniao commented 4 years ago

Hi Denis,

Thanks for your further help. The dummy preconditioner does work better. And the system does not have a 3x3 block structure; sorry about the mistake. I will try to read more papers about preconditioning the Helmholtz problem, such as the one you mentioned. Furthermore, you have really made a highly efficient implementation of fgmres in AMGCL. Thanks again for your great work.

Best wishes.