awslabs / palace

3D finite element solver for computational electromagnetics
https://awslabs.github.io/palace/dev
Apache License 2.0

Add `"EstimatorMG"` option for estimation mass matrix linear solve #220

Closed · sebastiangrimberg closed this 3 months ago

sebastiangrimberg commented 3 months ago

For electrostatic problems, PR https://github.com/awslabs/palace/pull/209 changed the error estimation from a global vector H1 projection to a global RT-space projection, which increased the time spent in the linear solve by a factor of 5 or more. This PR is the result of some experimentation with how to speed up the linear solve associated with error estimation.

The current default is still to use Jacobi-preconditioned CG, but now a runtime option can switch this back to AMG (with matrix-free p-MG when order > 1) for problems where the increased memory overhead is worth the improved convergence rate.
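
For reference, usage would look something like the sketch below. I'm assuming here that the option sits under `config["Solver"]["Linear"]` alongside the other linear solver settings; the surrounding keys are just typical values for illustration, not part of this PR:

```json
{
  "Solver":
  {
    "Linear":
    {
      "Type": "Default",
      "Tol": 1.0e-8,
      "MaxIts": 400,
      "EstimatorMG": true  // opt in to AMG (+ matrix-free p-MG for order > 1) for the estimator solve
    }
  }
}
```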

sebastiangrimberg commented 3 months ago

Unfortunately, MG uses more memory and, in all the cases I've tested, is actually slower than Jacobi. Hence the desire to keep the default as Jacobi. For problems where Jacobi takes very many iterations to converge there would probably be a speed benefit to MG, but I've got to test that. It could be that we just need to increase the default maximum number of CG iterations, or maybe even automatically fall back to MG preconditioning if the solve hits the maximum. I'll play around with this on a model where the Jacobi solve doesn't converge in 1000 iterations.

sebastiangrimberg commented 3 months ago

Update here: Even for the blog post example, with p = 3 on a tetrahedral mesh, the error estimation linear solve converges in ~200 iterations (3.1 s) with Jacobi-PCG vs. 70 iterations (5.3 s) with MG-PCG. That works out to roughly 16 ms vs. 76 ms per iteration, so a ~5x per-iteration cost for 3-level p-MG with an AMG coarse solve relative to Jacobi seems reasonable. There might be more to investigate as to why the iteration count is still 70 rather than something much smaller, but playing around with the fine-level p-multigrid smoothing settings does reduce the iteration count at the expense of more cost per iteration.

In all cases Jacobi is the fastest. This also matches what MFEM uses in its time-dependent Maxwell solver (explicit time stepping). Based on this, I'm inclined to just increase the maximum number of iterations to 5k or 10k so users are not met with annoying error messages. The iterations are (very) cheap. Thoughts?
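
If we go that route, the config-level tradeoff would look something like this sketch (the `"EstimatorMaxIts"` name is hypothetical shorthand for wherever the estimator iteration cap ends up; it may just become an internal default rather than a user-facing option):

```json
{
  "Solver":
  {
    "Linear":
    {
      "EstimatorMG": false,     // keep the cheap Jacobi-PCG default...
      "EstimatorMaxIts": 10000  // ...but allow many more of its (very) cheap iterations
    }
  }
}
```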

sebastiangrimberg commented 3 months ago

Some final numbers here before merging: For an eigenmode example with `"Order": 6` on a tetrahedral mesh, the tradeoff between Jacobi and multigrid preconditioning for the estimator linear solve appears about even: roughly 2500 iterations for the former vs. 600 for the latter, taking the same amount of time. The iteration count for the latter can be further decreased by playing around with the multigrid smoother and AMG settings, but in general the total time is constant. So, as planned, we will leave the default as Jacobi with an increased maximum iteration count for now.