UG4 / ugcore

The core functionality of UG4. Includes sources, build-scripts, and utility scripts.
https://github.com/UG4/ugcore

Where does AssembledMultiGridCycle (the "GeometricMultiGrid" in lua) use its smoother? #16

Closed · yxinli92 closed this issue 5 years ago

yxinli92 commented 5 years ago

This question is about the ugbase C++ code. I think the AssembledMultiGridCycle class in mg_solver.h should use its smoother somewhere and call the smoother's step method (for example, a smoother can be the GaussSeidel class in gauss_seidel.h), especially in AssembledMultiGridCycle's apply method, but I cannot find where this happens.

When I search for references to m_spPreSmootherPrototype in Eclipse, I get the result shown in the attached screenshot. As we can see, the step method is not called anywhere. I also searched manually for uses of the step method but could not find any.

In addition, I added the following code at the beginning of the GaussSeidelBase class's step method (see the attached screenshot),

and then ran apps/calciumDynamics_app/wave/wave_creation_noLimex.lua, where a Newton solver uses a BiCGSTAB solver, which in turn uses a GeometricMultiGrid as the preconditioner with a Gauss-Seidel smoother. There is no screen output from Gauss-Seidel's step method. The attached screen capture shows that the last method called in each iteration is AssembledMultiGridCycle::apply, rather than the Gauss-Seidel smoother's step.


Thanks a lot!

LogashenkoDL commented 5 years ago

Dear Yxinli92,

You are right: the multigrid method is indeed the class AssembledMultiGridCycle. This is the geometric multigrid method as described, e.g., in Hackbusch, "Iterative Solution of Large Sparse Systems of Equations". The recursion is implemented in AssembledMultiGridCycle::lmgc, cf. mg_solver_impl.hpp, line 2030. The pre- and post-smoothers are not called directly from this function but from presmooth_and_restriction and prolongation_and_postsmooth, which are invoked there. For example, presmooth_and_restriction (same file, line 1645) calls the pre-smoothers in line 1662.
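
For orientation, here is a schematic sketch of that call structure. This is only an illustration, not the actual ugcore code: lmgc, presmooth_and_restriction and prolongation_and_postsmooth are the real method names mentioned above, while base_solve and the level indexing are hypothetical stand-ins.

```cpp
// Schematic sketch of the multigrid recursion (illustrative only).
void presmooth_and_restriction(int lev);   // applies the pre-smoother clone of level 'lev'
                                           // (this is where its step method is called), then restricts the defect
void prolongation_and_postsmooth(int lev); // prolongates the correction, then applies the post-smoother clone
void base_solve(int lev);                  // hypothetical stand-in for the coarse (base) level solve

void lmgc(int lev, int baseLev)
{
    if (lev == baseLev)
    {
        base_solve(lev);                   // solve on the coarsest level
        return;
    }

    presmooth_and_restriction(lev);        // pre-smoothing happens inside here
    lmgc(lev - 1, baseLev);                // recurse to the coarser level (once per V-cycle)
    prolongation_and_postsmooth(lev);      // post-smoothing happens inside here
}
```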

The smoothers are not "hard-wired" into the method. They are separate objects, but they are cloned for every grid level, because every grid level has its own matrix. This means that if you use something like ILU, the L and U factors have to be computed and stored per level. This is why the objects you specify for the main GMG class are not used directly: only their clones are.
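
As an illustration of this cloning idea, here is a minimal, self-contained sketch of the prototype/clone pattern. The names (ISmoother, LevelMatrix, make_level_smoothers) are hypothetical stand-ins and not the ugcore API; in ugcore the role of the prototype is played by members such as m_spPreSmootherPrototype.

```cpp
#include <memory>
#include <vector>

// Hypothetical stand-ins; not the ugcore interface.
struct LevelMatrix { /* the system matrix of one grid level */ };

struct ISmoother
{
    virtual ~ISmoother() = default;
    virtual std::unique_ptr<ISmoother> clone() const = 0;
    virtual void init(const LevelMatrix& A) = 0; // e.g. compute and store the ILU factors of A
};

struct ILUSmoother : ISmoother
{
    std::unique_ptr<ISmoother> clone() const override { return std::make_unique<ILUSmoother>(*this); }
    void init(const LevelMatrix& A) override { /* factorize A; L and U are stored per clone */ }
};

// One clone of the user-given prototype per grid level: each clone is
// initialized with its own level matrix and keeps its own internal data.
std::vector<std::unique_ptr<ISmoother>> make_level_smoothers(
    const ISmoother& prototype, const std::vector<LevelMatrix>& levelMatrices)
{
    std::vector<std::unique_ptr<ISmoother>> smoothers;
    for (const LevelMatrix& A : levelMatrices)
    {
        smoothers.push_back(prototype.clone());
        smoothers.back()->init(A);
    }
    return smoothers;
}
```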

What you see in your output are the convergence rates, and in this respect it is hard to say what kind of output the smoothers should produce. Typically, you only see messages from them in case of problems.

Best regards,

Dmitry

yxinli92 commented 5 years ago

Dear Dmitry, Thanks for your reply! I understand that each grid level has its own matrix. I have a follow-up question about the paper "A massively parallel geometric multigrid solver on hierarchically distributed grids".

The paper says: On each grid level we iterate over all elements and the element-wise contributions to the process-local matrices are summed up, i.e., A_p = \sum_{i=1,...,N} A_{p,i}. This leads to a global parallel matrix stored in an additive storage type, and parallelization is taken into account by the horizontal interfaces.

Could you give more details about how each process-local matrix is obtained (is there a theoretical reference for how the element contributions are summed?) and how these process-local matrices are assembled? For example, do the process-local matrices contain only a part of the diagonal of the global parallel matrix, and if so, are the other parts zero? If there is overlap between the process-local matrices, how is the overlap handled? If possible, could you point me to some references about these procedures?

Thanks a lot! Best regards.

Xinli

bsumirak commented 5 years ago

Hello, Xinli -

The process-local matrices are assembled according to some discretization scheme, typically some form of finite element or finite volume discretization. This is done in an iteration over the elements present in each subdomain. Only process-local couplings are computed; no communication is involved. But please note that, although the subdomain boundaries overlap, each element is part of exactly one subdomain, so the sum of the process-local matrices is exactly the matrix of the non-parallel case. This is what we call "additive" storage.
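
To make this concrete, here is a toy, self-contained example (my own illustration, not ugcore code): a 1D Laplacian on three vertices v0 -- v1 -- v2, cut into two subdomains that share the interface vertex v1. Each process assembles only its own element, and the sum of the two process-local matrices reproduces the serial matrix.

```cpp
#include <array>
#include <cassert>

int main()
{
    using Mat3 = std::array<std::array<double, 3>, 3>;

    // serial (non-parallel) stiffness matrix of the 1D Laplacian on v0 -- v1 -- v2
    const Mat3 A_serial = {{{ 1, -1,  0},
                            {-1,  2, -1},
                            { 0, -1,  1}}};

    // process 0 assembles only its element (v0, v1); the entries of v2 stay zero
    const Mat3 A_p0 = {{{ 1, -1,  0},
                        {-1,  1,  0},
                        { 0,  0,  0}}};

    // process 1 assembles only its element (v1, v2); the entries of v0 stay zero
    const Mat3 A_p1 = {{{ 0,  0,  0},
                        { 0,  1, -1},
                        { 0, -1,  1}}};

    // additive storage: the sum of the process-local matrices is the serial matrix;
    // in particular, the diagonal entry of the shared vertex v1 is 1 + 1 = 2.
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            assert(A_p0[i][j] + A_p1[i][j] == A_serial[i][j]);

    return 0;
}
```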

LogashenkoDL commented 5 years ago

Dear Yxinli92,

Yes, what Bsumirak writes is essentially correct. Let me add some more details.

  1. The fine-grid matrix is assembled before the call of the GMG method (for example, in the Newton method for non-linear problems, if GMG is called from the Newton solver). It is done element-wise, as Bsumirak writes. The global loop over all the elements is performed by the so-called "domain discretization", cf. ugcore/ugbase/lib_disc/spatial_disc/domain_disc.h and the associated files in this directory. This class only manages the couplings and sums up the local matrices of the elements. The local matrices are assembled by "element discretizations" implemented for particular problems, like in the ConvectionDiffusion plugin. Whether it is FE, FV or something else is really implemented in the element discretizations, and these do not depend on the parallelization.

  2. For the matrices on the coarser grid levels, there are two possibilities. Typically, you assemble them by the same procedure as the fine-grid matrix. For this, the GMG object gets a reference to the discretization. However, you can also use the Galerkin RAP formula. To switch on this option, use the AssembledMultiGridCycle::set_rap(true) call. (By default, this option is off.) In both cases the computation is process-local. (A toy comparison of the two options follows after this list.)
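
To illustrate point 2, here is a small self-contained example (my own illustration, not ugcore code) of the Galerkin coarse-grid operator A_c = R * A_fine * P with the common choice R = P^T. For this simple 1D Laplacian with linear interpolation, the Galerkin product reproduces exactly the matrix you would get by re-assembling on the coarse grid, i.e. the two options of point 2 coincide here.

```cpp
#include <array>
#include <cassert>
#include <cmath>

int main()
{
    // fine grid: 3 vertices (mesh width h = 1); coarse grid: the two end vertices (h = 2)
    const std::array<std::array<double, 3>, 3> A_fine = {{{ 1, -1,  0},
                                                          {-1,  2, -1},
                                                          { 0, -1,  1}}};

    // linear prolongation P (3 x 2): coarse values are copied to the coarse
    // vertices, the fine midpoint gets the average of its two neighbours
    const std::array<std::array<double, 2>, 3> P = {{{1.0, 0.0},
                                                     {0.5, 0.5},
                                                     {0.0, 1.0}}};

    // Galerkin product A_c = P^T * A_fine * P (restriction R chosen as P^T)
    std::array<std::array<double, 2>, 2> A_rap = {};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 3; ++k)
                for (int l = 0; l < 3; ++l)
                    A_rap[i][j] += P[k][i] * A_fine[k][l] * P[l][j];

    // matrix obtained by re-assembling the 1D Laplacian directly on the coarse
    // grid (one element of length 2): (1/2) * [ 1 -1 ; -1 1 ]
    const std::array<std::array<double, 2>, 2> A_reassembled = {{{ 0.5, -0.5},
                                                                 {-0.5,  0.5}}};

    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            assert(std::abs(A_rap[i][j] - A_reassembled[i][j]) < 1e-12);

    return 0;
}
```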

To your question about the parallelization: ug4 supports overlap of the grid elements (grid vertices, edges and, only in 3d, faces) at the interfaces between the processes. This is done with the master-ghost approach: one of the copies is the master, the other ones are ghosts. The overlap is used for the communication, but not during the assembling: in the assembling phase, each process works locally as if the ghost vertices were normal vertices. In particular, for the vertex-centered discretizations, the diagonal entries of the matrix are then the sums of the assembled diagonal entries corresponding to the master and the ghost nodes. (This also concerns some off-diagonal entries.) This way of representing the matrix (and also some grid functions, like the defect) is called "additive".

But this is what you get directly after the assembling. There are two other representations in ug4: consistent and unique. In the former, the same value is kept in the master and the ghosts; in the latter, only the master keeps the value, and the ghosts are set to zero. You may change the storage type. Cf. ugcore/ugbase/lib_algebra/parallelization/parallel_storage_type.h for more information.
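
As a toy illustration of the three representations (my own example, not ugcore code): take a value at an interface vertex that is 5 in the serial case, with a master copy on process 0 and a ghost copy on process 1.

```cpp
#include <cassert>

int main()
{
    const double serialValue = 5.0; // the value of the non-parallel case

    // additive: each copy stores a contribution; the sum over master and ghost is the serial value
    const double add_master = 2.0, add_ghost = 3.0;
    assert(add_master + add_ghost == serialValue);

    // consistent: master and ghost both store the full value
    const double con_master = 5.0, con_ghost = 5.0;
    assert(con_master == serialValue && con_ghost == serialValue);

    // unique: only the master stores the value, the ghost is zero
    const double uni_master = 5.0, uni_ghost = 0.0;
    assert(uni_master == serialValue && uni_ghost == 0.0);

    return 0;
}
```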

Note that the implementation of the GMG cycle does not depend explicitly on the representation of the matrix and the grid functions on the parallel machine. This dependence is hidden in the smoothers, the transfer operators and the coarse-grid solver. It is typically assumed that the matrix and the right-hand side are in the additive representation, whereas the solution (or the correction) is in the consistent representation. Note furthermore that the GMG is only a "preconditioner", i.e. the representation of the grid functions can be particularly important in the enclosing iterative schemes (like CG) when computing the scalar products.
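
To indicate why this matters, here is a small self-contained sketch (my own illustration of the usual rule in this kind of parallel linear algebra, not a statement about ugcore's implementation): if one vector is stored additively and the other consistently, summing the process-local dot products over all processes gives the correct global scalar product without double-counting the interface copies.

```cpp
#include <cassert>
#include <vector>

int main()
{
    // two processes; p0 holds {vertex a, shared vertex s}, p1 holds {shared vertex s, vertex b}
    // serial vectors: u = (u_a, u_s, u_b) = (1, 5, 4), v = (10, 20, 30), so <u,v> = 230
    const std::vector<double> u_p0 = { 1.0,  2.0}; // additive: p0's contribution to u_s is 2
    const std::vector<double> u_p1 = { 3.0,  4.0}; // additive: p1's contribution to u_s is 3 (2 + 3 = 5)
    const std::vector<double> v_p0 = {10.0, 20.0}; // consistent: both copies of v_s hold the full value 20
    const std::vector<double> v_p1 = {20.0, 30.0};

    // each process computes its local dot product without any communication
    const double local_p0 = u_p0[0] * v_p0[0] + u_p0[1] * v_p0[1];
    const double local_p1 = u_p1[0] * v_p1[0] + u_p1[1] * v_p1[1];

    // the sum over the processes (an all-reduce in practice) equals the serial scalar product
    assert(local_p0 + local_p1 == 1.0 * 10.0 + 5.0 * 20.0 + 4.0 * 30.0);

    return 0;
}
```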

Different smoothers can internally use other representations, and this can depend on their settings. In particular, it can be important to have the complete diagonal coefficient of the matrix available on the particular process.

For more information about this type of parallelization of the multigrid method, I would refer you to C. Wieners, "Parallel linear algebra and the application to multigrid methods". (You can find it on the Internet.) It does not describe the current version of ug, but the ideology is similar.

Hopefully, you will find this description helpful.

Best regards,

Dmitry

LogashenkoDL commented 5 years ago

Dear People,

If you have further questions, please open a new issue.

Best,

Dmitry