Ferrite-FEM / Ferrite.jl

Finite element toolbox for Julia
https://ferrite-fem.github.io

domain decomposition methods as preconditioners #877

Open jfdev001 opened 5 months ago

jfdev001 commented 5 months ago

As far as I can tell, preconditioning with domain decomposition (DD) methods does not currently seem to be supported. Is this within the scope of Ferrite.jl? Of course there are C implementations available through PETSc, but I think a pure Julia implementation would be beneficial. As an example, perhaps a (restricted) additive Schwarz method would be worth contributing? See PETSc: PCASM for details.
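For reference, in the usual notation the one-level additive Schwarz (ASM) preconditioner and its restricted variant (RAS) read

$$
M_{\mathrm{ASM}}^{-1} = \sum_{i=1}^{N} R_i^{\top} A_i^{-1} R_i ,
\qquad
M_{\mathrm{RAS}}^{-1} = \sum_{i=1}^{N} \widetilde{R}_i^{\top} A_i^{-1} R_i ,
\qquad
A_i = R_i A R_i^{\top} ,
$$

where $R_i$ restricts a global vector to the $i$-th overlapping subdomain and $\widetilde{R}_i$ restricts to a non-overlapping partition of the same dofs, so that each dof is updated by exactly one subdomain.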

termi-official commented 5 months ago

Indeed, we don't have examples of how to use DD methods, but I think supporting them should not be too difficult; this is definitely within the scope of Ferrite.jl. If you want to use an algebraic linear RAS preconditioner, you can already use https://jso.dev/KrylovPreconditioners.jl/dev/reference/#KrylovPreconditioners.BlockJacobiPreconditioner out of the box. For nonlinear systems my plan was to find some time in the not-so-distant future to tackle https://github.com/SciML/NonlinearSolve.jl/issues/351, but I am unfortunately busy with other tasks right now.
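Just to sketch what "out of the box" could look like (this is only a sketch: the constructor keyword `nblocks` and the way the preconditioner is passed to Krylov.jl are assumptions here, so the KrylovPreconditioners.jl docs linked above should be checked for the exact API):

```julia
# Sketch only: assumes BlockJacobiPreconditioner(A; nblocks = ...) as in the linked
# docs and that the resulting operator can be passed as the preconditioner M to
# Krylov.jl's cg. Verify the exact signatures before relying on this.
using SparseArrays, LinearAlgebra
using Krylov, KrylovPreconditioners

n = 1000
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(2.0, n), 1 => fill(-1.0, n - 1))  # 1D Poisson toy matrix
b = ones(n)

P = BlockJacobiPreconditioner(A; nblocks = 8)    # algebraic block-Jacobi (non-overlapping Schwarz)

x0, stats0 = cg(A, b; history = true)            # unpreconditioned
x1, stats1 = cg(A, b; M = P, history = true)     # preconditioned

println("iterations without preconditioner: ", stats0.niter)
println("iterations with preconditioner:    ", stats1.niter)
```

With `history = true` the residual history stored in `stats` could also be plotted, which is basically the heat-equation how-to discussed further down in this thread.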

If you want to take a look into a geometric approach to DD (either as preconditioner or as solver), I am happy to provide some guidance so we can think about adding an example (what do you think @fredrikekre?). I think it might also be worth updating FerriteDistributed.jl to master and taking a look at how to implement such methods there, because parallel computing is usually where these solvers shine. However, be aware that this is definitely a bigger project, as FerriteDistributed.jl is quite a bit behind master in terms of features. Furthermore, I think these solvers can be helpful once GPU support lands, because GPUs face issues similar to distributed computing. But for this I also need a bit more time to explore different designs for Ferrite (xref https://github.com/Ferrite-FEM/Ferrite.jl/pull/766 for the corresponding PR).

For completeness I want to mention that, as an alternative to Schwarz methods, there is also FETI as a paradigm.
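To sketch the idea in standard notation: FETI starts from non-overlapping subdomains and enforces continuity across the interface with Lagrange multipliers, e.g. for two subdomains

$$
\begin{pmatrix} K_1 & 0 & B_1^{\top} \\ 0 & K_2 & B_2^{\top} \\ B_1 & B_2 & 0 \end{pmatrix}
\begin{pmatrix} u_1 \\ u_2 \\ \lambda \end{pmatrix}
=
\begin{pmatrix} f_1 \\ f_2 \\ 0 \end{pmatrix} ,
$$

where the signed Boolean matrices $B_i$ pick out the interface dofs. The $u_i$ are then eliminated and the dual interface problem is solved iteratively for $\lambda$, with special care for floating subdomains whose $K_i$ are singular.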

jfdev001 commented 5 months ago

I would like to add an example for a geometric approach to DD, though admittedly I'm still quite new to domain decomposition. In trying to get an idea of DD methods, I have found several libraries like HPDDM, PETSc, and FEMPAR that implement many of these methods (Schwarz methods, BDDC, FETI, etc.), but I've generally found most of the textbooks on this subject quite challenging to translate to code on my own. So any suggestions on where to start with implementing a geometric DD example would be appreciated! If you think it would be worth adding, I could also add a heat equation example using the out-of-the-box RAS preconditioner, though maybe this is too trivial, since I think it would just be: assemble linear system > call Krylov solver with RAS preconditioner > visualize results and illustrate the difference in iterations until convergence.

For others who might be interested in contributing, I have listed some books that I've gone through to varying degrees of thoroughness and that might be worth reading to get a good general idea of DD:

[1] Smith, B. F., Bjørstad, P. E., and Gropp, W. D. (1996). Domain Decomposition: Parallel Multilevel Methods for Elliptic Partial Differential Equations. Cambridge University Press.

[2] Dolean, V., Jolivet, P., and Nataf, F. (2015). An Introduction to Domain Decomposition Methods: Algorithms, Theory, and Parallel Implementation. SIAM.

[3] Bruaset, A. M. and Tveito, A. (eds.) (2006). Chapter 4: "Domain Decomposition Techniques", in Numerical Solution of Partial Differential Equations on Parallel Computers. Springer.

Lastly, at least as of 2017, Wolfgang Bangerth answered this Stack Overflow question, actually advising against the use of DD methods. Maybe this suggests we should focus more on multigrid methods? I'm not certain, but figured I'd mention it.

Update: Unfortunately I am quite busy with my thesis right now, so I cannot add these examples (geometric DD or RAS preconditioner) myself. I am adding this update so that other potential contributors are aware of it.

KnutAM commented 5 months ago

I could also add a heat equation example using the out-of-the-box RAS preconditioner, though maybe this is too trivial, since I think it would just be: assemble linear system > call Krylov solver with RAS preconditioner > visualize results and illustrate the difference in iterations until convergence.

On the master branch, the examples are split into tutorials and how-tos. IMO this could be a valuable how-to!

termi-official commented 5 months ago

I would like to add an example for a geometric approach to DD, though admittedly I'm still quite new to domain decomposition. In trying to get an idea of DD methods, I have found several libraries like HPDDM, PETSc, and FEMPAR that implement many of these methods (Schwarz methods, BDDC, FETI, etc.), but I've generally found most of the textbooks on this subject quite challenging to translate to code on my own.

Probably one more notable example, from a German research consortium, is FROSch (https://shylu-frosch.github.io/).

For algebraic DD preconditioners things are indeed a bit more tricky to understand, and I am not an expert in this topic myself. It really comes down to reading the papers and trying out the ideas on simple toy problems to get a better understanding of what exactly is happening, as for any numerical scheme. For the geometric versions you simply provide the subdomains as direct input to your preconditioner and apply the scheme. Understanding the schemes themselves can be a challenge of its own.

So any suggestions on where to start with implementing a geometric DD example would be appreciated!

I think the simplest starting point would be to think about some toy problem where you know how to decompose it efficiently (e.g. a square decomposed into overlapping Cartesian subdomains). Then you can explore different routes. The simplest might be to define a DofHandler per subdomain and to assemble the matrix for each subdomain to get your subproblems. For this approach you just need some helper function to project the solutions between the subdomain boundaries.
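Roughly, such a toy problem could look like the sketch below. This is plain linear algebra with hard-coded index ranges standing in for the per-subdomain DofHandlers (all names here are made up for illustration); the scatter back to the "owned" dofs plays the role of the projection between subdomain boundaries mentioned above:

```julia
# Minimal sketch: one-level restricted additive Schwarz (RAS) as a stationary
# iteration for A x = b on a 1D Poisson toy problem. The overlapping
# "subdomains" are plain index ranges; in a Ferrite-based example these would
# come from the dofs of a DofHandler defined per subdomain.
using SparseArrays, LinearAlgebra

function ras_solve(A, b, subdomains, owned; maxiter = 200, tol = 1e-8)
    x = zeros(length(b))
    for iter in 1:maxiter
        r = b - A * x                          # global residual
        dx = zeros(length(b))
        for (s, o) in zip(subdomains, owned)
            ds = A[s, s] \ r[s]                # solve the local overlapping subproblem
            for (k, g) in enumerate(s)
                g in o && (dx[g] += ds[k])     # RAS: keep only the owned part of the correction
            end
        end
        x .+= dx
        if norm(b - A * x) ≤ tol * norm(b)
            println("converged in $iter iterations")
            return x
        end
    end
    return x
end

n = 100
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(2.0, n), 1 => fill(-1.0, n - 1))
b = ones(n)

overlap    = 10
subdomains = [1:(n ÷ 2 + overlap), (n ÷ 2 - overlap + 1):n]  # two overlapping subdomains
owned      = [1:(n ÷ 2), (n ÷ 2 + 1):n]                      # non-overlapping ownership
x = ras_solve(A, b, subdomains, owned)
```

From there, the same structure carries over to using the subdomain solves as a preconditioner inside a Krylov method instead of as a fixed-point iteration.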

If you think it would be worth adding, I could also add a heat equation example using the out-of-the-box RAS preconditioner, though maybe this is too trivial, since I think it would just be: assemble linear system > call Krylov solver with RAS preconditioner > visualize results and illustrate the difference in iterations until convergence.

I like the idea, but I think this might be more something for a blog post or presentation, to see what is going on. I remember different presentations at conferences where people showed the evolution of the error over the iterations, which was quite interesting. I am not against the idea of including this in e.g. the gallery or as a how-to, though.

Lastly, at least as of 2017, Wolfgang Bangerth answered this Stack Overflow question, actually advising against the use of DD methods. Maybe this suggests we should focus more on multigrid methods? I'm not certain, but figured I'd mention it.

I would lean towards agreeing with Wolfgang Bangerth on this for linear problems and I highly favor AMG over DD methods myself. However, DD methods are still an active topic in numerical research and if someone has interest in them for the sake of research, then I would not try to hold them back.

For nonlinear problems the picture is not so clear.

jfdev001 commented 5 months ago

For completeness I want to mention that, as an alternative to Schwarz methods, there is also FETI as a paradigm.

Another relevant package I found is AMfeti, which implements FETI in Python. The docs provide a short description of the theory and the relevant parallel communication paradigms. This might be useful for a Julia translation.

jfdev001 commented 2 months ago

I would lean towards agreeing with Wolfgang Bangerth on this for linear problems and I highly favor AMG over DD methods myself. However, DD methods are still an active topic in numerical research and if someone has interest in them for the sake of research, then I would not try to hold them back.

PartitionedArrays.jl has a sub-package called PartitionedSolvers.jl that implements parallel algebraic multigrid. Possibly worth mentioning since the incorporation of PartitionedArrays.jl seems relevant (e.g., https://github.com/Ferrite-FEM/FerriteDistributed.jl/issues/35#issue-2094560483).