I, at least, am also interested in seeing GMG actually implemented in Parthenon. That would be the first step.
@lroberts36 has a 1D implementation working, based on my understanding from last week's call.
Ah that was fast. Wasn't aware that much progress had been made.
No, it is not implemented in Parthenon yet. I just have a test code working that uses a block structure similar to Parthenon's, but also includes meshblocks associated with interior nodes of the refinement tree.
It seems like the block-local solves we were employing in RIOT as a preconditioner can be used as a smoother in GMG to substantially improve the convergence rate, while only coarsening to a grid whose size equals that of the root block. Given that this approach to GMG seems fairly promising based on the 1D tests, I think it is probably worth the effort to plumb it into Parthenon. It may be possible to do this by just having separate block lists for each MG level, but I haven't completely thought through all of the necessary changes.
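For reference, a minimal self-contained 1D V-cycle with the structure described here might look like the sketch below (toy code, not Parthenon; weighted Jacobi stands in for the block-local smoother, and recursion stops once the grid is no larger than the root block):

```cpp
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;

// Weighted Jacobi sweeps for -u'' = f with the (-1, 2, -1)/h^2 stencil,
// standing in for the block-local smoother discussed above.
void Smooth(Vec &u, const Vec &f, double h, int sweeps) {
  const double w = 2.0 / 3.0;  // standard damping factor
  for (int s = 0; s < sweeps; ++s) {
    Vec un = u;
    for (std::size_t i = 1; i + 1 < u.size(); ++i)
      un[i] = u[i] + w * (0.5 * (u[i - 1] + u[i + 1] + h * h * f[i]) - u[i]);
    u = un;
  }
}

Vec Residual(const Vec &u, const Vec &f, double h) {
  Vec r(u.size(), 0.0);
  for (std::size_t i = 1; i + 1 < u.size(); ++i)
    r[i] = f[i] - (2.0 * u[i] - u[i - 1] - u[i + 1]) / (h * h);
  return r;
}

Vec Restrict(const Vec &fine) {  // full weighting; sizes are 2^k + 1
  Vec coarse(fine.size() / 2 + 1, 0.0);
  for (std::size_t i = 1; i + 1 < coarse.size(); ++i)
    coarse[i] = 0.25 * (fine[2 * i - 1] + 2.0 * fine[2 * i] + fine[2 * i + 1]);
  return coarse;
}

Vec Prolongate(const Vec &coarse) {  // linear interpolation
  Vec fine(2 * (coarse.size() - 1) + 1, 0.0);
  for (std::size_t i = 0; i + 1 < coarse.size(); ++i) {
    fine[2 * i] = coarse[i];
    fine[2 * i + 1] = 0.5 * (coarse[i] + coarse[i + 1]);
  }
  fine.back() = coarse.back();
  return fine;
}

// Recurse only until the grid is as small as the "root block" (n_root
// points), then rely on the smoother alone at the coarsest level.
void VCycle(Vec &u, const Vec &f, double h, std::size_t n_root) {
  Smooth(u, f, h, 2);  // pre-smooth
  if (u.size() > n_root) {
    Vec rc = Restrict(Residual(u, f, h));
    Vec ec(rc.size(), 0.0);           // coarse-grid error, zero initial guess
    VCycle(ec, rc, 2.0 * h, n_root);  // solve the defect equation coarsely
    Vec e = Prolongate(ec);
    for (std::size_t i = 0; i < u.size(); ++i) u[i] += e[i];
  }
  Smooth(u, f, h, 2);  // post-smooth
}
```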
Is the main infrastructure issue that Parthenon doesn't have placeholder/unallocated MeshBlocks for the interior nodes?
Yes, I think the lift here will be adding blocks for the interior nodes. It is straightforward to associate blocks with all nodes in the tree, but it will take some work to set up boundary communication between interior blocks and to link blocks for prolongation/restriction. It also requires adding infrastructure so that task lists can ask for the MeshData associated with a particular GMG level.
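As a purely hypothetical sketch of that last piece (GetGMGMeshData, BlockLocalSmooth, and RestrictToLevel are invented names, not existing Parthenon API; only the TaskList/TaskID/AddTask shape mirrors the current tasking machinery):

```cpp
// Hypothetical: one block list per GMG level, so a task list can request
// the MeshData built over the blocks of a given level (including blocks
// associated with interior nodes of the refinement tree).
void AddGMGLevelTasks(parthenon::Mesh *pmesh, parthenon::TaskList &tl,
                      const int gmg_level) {
  // Invented accessor for the MeshData of one multigrid level.
  auto md = pmesh->GetGMGMeshData("base", gmg_level);
  parthenon::TaskID none(0);
  // Smooth on this level's blocks, then restrict to the next-coarser
  // level's block list (both task functions are placeholders).
  auto smooth = tl.AddTask(none, BlockLocalSmooth, md.get(), gmg_level);
  tl.AddTask(smooth, RestrictToLevel, md.get(), gmg_level - 1);
}
```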
Maybe we should change the title of this issue to "Add Geometric Multigrid Solver"?
Were you using a sparse direct solver like cuSPARSE for the block-local solves?
No, I was just working in 1D and looking at tridiagonal matrices, so I could use a simple tridiagonal solver.
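For concreteness, a tridiagonal system like this can be solved directly in O(n) with the standard Thomas algorithm; a minimal sketch:

```cpp
#include <cstddef>
#include <vector>

// Solves a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i] for a tridiagonal
// system; a[0] and c[n-1] are unused. No pivoting, so this assumes the
// matrix is diagonally dominant (true for typical diffusion stencils).
std::vector<double> ThomasSolve(const std::vector<double> &a,
                                const std::vector<double> &b,
                                const std::vector<double> &c,
                                std::vector<double> d) {
  const std::size_t n = b.size();
  std::vector<double> cp(n);
  cp[0] = c[0] / b[0];
  d[0] /= b[0];
  for (std::size_t i = 1; i < n; ++i) {  // forward elimination
    const double m = 1.0 / (b[i] - a[i] * cp[i - 1]);
    cp[i] = c[i] * m;
    d[i] = (d[i] - a[i] * d[i - 1]) * m;
  }
  for (std::size_t i = n - 1; i-- > 0;)  // back substitution
    d[i] -= cp[i] * d[i + 1];
  return d;
}
```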
In 3D, we have been preconditioning radiation diffusion with dimensionally split diffusion operators, which seems to work fairly well (and obviously scales nicely compared to other solvers). My plan was mostly to use MG as a preconditioner for Krylov-type methods, so that I could always define a preconditioning system that is a product of tridiagonal matrices and can be solved directly at the block level.
For other applications, I agree it would be nice to have a more general block-level solver.
That could also be useful for gravity Poisson solves (my primary use case for multigrid). Just to clarify: in 3D with a Laplacian operator, is it then possible to do the block solve with repeated tridiagonal solves? I can't immediately work out the math on this.
The repeated 1D solves are not exact (I think it is morally equivalent to a dimensionally split update for hydro vs. an unsplit one), but in practice it is close enough to the actual operator to be a good preconditioner.
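Concretely (this is just my reading of the scheme described above): for a backward-Euler diffusion step, the unsplit operator is approximated by a product of 1D factors,

$$ I - \Delta t \,(D_x + D_y + D_z) \;\approx\; (I - \Delta t\, D_x)\,(I - \Delta t\, D_y)\,(I - \Delta t\, D_z), $$

where each $D_d$ is the 1D diffusion operator along direction $d$. Each factor is tridiagonal along its own direction, so it can be inverted directly line by line within a block, and expanding the product shows it differs from the unsplit operator only by $O(\Delta t^2)$ cross terms like $\Delta t^2 D_x D_y$, which is why it is inexact but still close enough to precondition well.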
This seems like potentially interesting prior work on the convergence properties of block smoothers for GMG: https://doi.org/10.1016/j.procs.2012.04.002, although they appear to use it as a solver rather than as a preconditioner.
In principle, one can use less-than-double precision for parts of the multigrid solve, or use less-than-double precision for the initial guess and then finish the iteration in double.
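As a toy illustration of the second strategy (all names here are mine, and plain Jacobi on 1D $-u'' = f$ stands in for a multigrid cycle):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Plain Jacobi sweeps for -u'' = f, templated on precision so the same
// iteration can run in float or double.
template <typename T>
void JacobiSweeps(std::vector<T> &u, const std::vector<T> &f, T h, int sweeps) {
  for (int s = 0; s < sweeps; ++s) {
    std::vector<T> un = u;
    for (std::size_t i = 1; i + 1 < u.size(); ++i)
      un[i] = T(0.5) * (u[i - 1] + u[i + 1] + h * h * f[i]);
    u = un;
  }
}

double ResidualNorm(const std::vector<double> &u, const std::vector<double> &f,
                    double h) {
  double r2 = 0.0;
  for (std::size_t i = 1; i + 1 < u.size(); ++i) {
    const double r = f[i] - (2.0 * u[i] - u[i - 1] - u[i + 1]) / (h * h);
    r2 += r * r;
  }
  return std::sqrt(r2);
}

void MixedPrecisionSolve(std::vector<double> &u, const std::vector<double> &f,
                         double h, double tol) {
  // Phase 1: cheap single-precision iterations to produce an initial guess.
  std::vector<float> uf(u.begin(), u.end()), ff(f.begin(), f.end());
  JacobiSweeps<float>(uf, ff, static_cast<float>(h), 200);
  std::copy(uf.begin(), uf.end(), u.begin());
  // Phase 2: finish in double; single precision limits the achievable
  // residual, so the final iterations must run in the working precision.
  while (ResidualNorm(u, f, h) > tol) JacobiSweeps<double>(u, f, h, 50);
}
```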
There is a significant literature on doing this for general sparse matrix solvers, but not as much for multigrid. For geometric multigrid, this is one of the few references I've found: https://arxiv.org/abs/2007.07539.
For AMG, there is an implementation in HYPRE using defect-correction methods ("Defect correction-based mixed precision AMG solvers"): https://www.exascaleproject.org/wp-content/uploads/2022/06/2022_ECP_BoF_1.pdf. It's not clear to me whether this is applicable to GMG.
I'd like to note this here for future discussion, especially if anyone has tried mixed precision before.