Open learning-chip opened 2 years ago
Yes, that would make sense here.
WIP: see https://github.com/JuliaPackaging/Yggdrasil/pull/7896
AMGCL is heavily templated. The best path forward would be a complete re-implementation in Julia, which is beyond my time budget at the moment. That would be a great project for anyone wanting to learn preconditioning, though. In that case, the iterative solver part probably could be taken from Krylov.jl.
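To illustrate the suggested split (AMG preconditioner in Julia, Krylov method from Krylov.jl), here is a minimal sketch. A simple Jacobi (diagonal) preconditioner stands in for the yet-to-be-written AMG hierarchy; the `M` keyword for left preconditioning follows Krylov.jl's documented interface:

```julia
using Krylov, SparseArrays, LinearAlgebra

# Diagonally dominant test matrix
A = sprand(500, 500, 0.02) + 10I
b = rand(500)

# Stand-in for an AMG preconditioner: plain Jacobi
P = Diagonal(Vector(diag(A)))

# Krylov.jl solver with left preconditioning
x, stats = bicgstab(A, b; M = inv(P))
```

A real AMG preconditioner would replace `inv(P)` with any object supporting `mul!`, which is all Krylov.jl requires.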
So for now, any calls from Julia need to go through a C library with pre-instantiated code; this is the PR mentioned above. It still offers quite a bit of flexibility due to the "runtime interface". At the moment it makes the scalar and point-block variants available for OpenMP, but misses e.g. the saddle point functionality and CUDA.
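The "runtime interface" means that the choice of preconditioner and solver components is passed as a parameter tree at run time rather than baked in via C++ template parameters. A hedged sketch of what that could look like from Julia — the `AMGSolver` constructor name and the JSON `param` keyword are assumptions about the AMGCLWrap.jl API, not verified against the package:

```julia
using AMGCLWrap, SparseArrays, LinearAlgebra

A = sprand(1000, 1000, 0.01) + 10I
b = rand(1000)

# Component choices as a runtime parameter tree (JSON string)
param = """
{
  "precond": { "relax": { "type": "ilu0" } },
  "solver":  { "type": "bicgstab", "tol": 1.0e-10 }
}
"""

u = AMGSolver(A; param) \ b
```

Switching e.g. the smoother or the Krylov method then only changes the JSON string, not the compiled code.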
Will first register AMGCLWrap.jl, then make a PR over here.
Ok, AMGCLWrap.jl is there.
It appears to be quite straightforward to produce an AMGCLWrapLinearSolveExt via the LinearSolveFunction API. Is there anything speaking for a LinearSolveAMGCLWrapExt instead?
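For concreteness, a hedged sketch of what such a hookup could look like. The callback signature follows the LinearSolveFunction interface documented in LinearSolve.jl; the `AMGSolver` constructor is an assumed AMGCLWrap.jl entry point, not verified here:

```julia
using LinearSolve, AMGCLWrap, SparseArrays, LinearAlgebra

# Custom solve function plugged into LinearSolve.jl
function amgcl_solve!(A, b, u, p, newA, Pl, Pr, solverdata; kwargs...)
    amg = AMGSolver(A)   # build the AMG hierarchy (assumed constructor)
    u .= amg \ b         # solve with the wrapped AMG-preconditioned method
    return u
end

A = sprand(100, 100, 0.1) + 100I
b = rand(100)
prob = LinearProblem(A, b)
sol = solve(prob, LinearSolveFunction(amgcl_solve!))
```

An actual extension would additionally cache the hierarchy across solves when `newA` indicates the matrix is unchanged.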
I don't care either way. However it's done, I think it needs a note in the LinearSolve.jl docs, and with that, the details of how the wrapper is done don't matter to the end user.
Ok, so I will use the first version because it is easy for me to maintain. After getting a feeling for how stably it works in some of my projects, I will make a PR to the docs.
https://github.com/ddemidov/amgcl
It has OpenMP/MPI/CUDA implementations, similar to Hypre but much more lightweight, and already has a Python interface (see the tests).