SciML / LinearSolve.jl

LinearSolve.jl: High-Performance Unified Interface for Linear Solvers in Julia. Easily switch between factorization and Krylov methods and add preconditioners, all in one interface.
https://docs.sciml.ai/LinearSolve/stable/

Tutorial BlockDiagonals #203

Open · ArnoStrouwen opened this issue 1 year ago

ArnoStrouwen commented 1 year ago

Does LinearSolve.jl specialize automatically with BlockDiagonals.jl? If so, it might be worth a tutorial to showcase this functionality.

ChrisRackauckas commented 1 year ago

I don't think it does automatically specialize on that case. It's worth writing the tutorial and making sure it works.
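A minimal check might look like this (a sketch, not code from this thread, assuming BlockDiagonals.jl's exported BlockDiagonal constructor):

using LinearSolve
using BlockDiagonals
A = BlockDiagonal([rand(2, 2), rand(3, 3)]) # a 5x5 block-diagonal operator
b = rand(5)
prob = LinearProblem(A, b)
sol = solve(prob)
sol.alg # inspect which algorithm the default polyalgorithm picked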

ArnoStrouwen commented 1 year ago

I am trying to write a tutorial for BandedMatrix, since the docs state it should be compatible (https://docs.sciml.ai/LinearSolve/stable/solvers/solvers/#Base.LinearAlgebra), but I can't get it to work.

ChrisRackauckas commented 1 year ago

What's the MWE? It should just work by falling back to GenericFactorization?

ArnoStrouwen commented 1 year ago

using LinearSolve
using BandedMatrices
A = BandedMatrix(Ones(5,5),(0,1))
b = ones(5)
prob = LinearProblem(A,b)
sol = solve(prob,LUFactorization()) #error
sol = solve(prob)
sol.alg # KrylovJL

ChrisRackauckas commented 1 year ago

Yeah that's not surprising.

using LinearSolve
using BandedMatrices
A = BandedMatrix(Ones(5,5),(0,1))
b = ones(5)
prob = LinearProblem(A,b)
sol = solve(prob,GenericFactorization()) # works
sol = solve(prob)
sol.alg # KrylovJL

Both the generic factorization and Krylov work, and it automatically picks one that does. This should improve further once ArrayInterfaceBandedMatrices no longer needs to be added manually, via the Julia 1.9 weak dependencies. The two explicit paths are sketched below.
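To spell out the two explicit paths (a sketch reusing the MWE above; KrylovJL_GMRES is LinearSolve's wrapper over Krylov.jl):

using LinearSolve
using BandedMatrices
A = BandedMatrix(Ones(5,5),(0,1))
b = ones(5)
prob = LinearProblem(A,b)
sol = solve(prob, GenericFactorization()) # dispatches to factorize(A), which BandedMatrices defines
sol = solve(prob, KrylovJL_GMRES()) # an explicit Krylov method, no factorization needed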

ArnoStrouwen commented 1 year ago

But the documentation says:

LUFactorization(pivot=LinearAlgebra.RowMaximum()): Julia's built in lu.

  • On dense matrices this uses the current BLAS implementation of the user's computer which by default is OpenBLAS but will use MKL if the user does using MKL in their system.
  • On sparse matrices this will use UMFPACK from SuiteSparse. Note that this will not cache the symbolic factorization.
  • On CuMatrix it will use a CUDA-accelerated LU from CuSolver.
  • On BandedMatrix and BlockBandedMatrix it will use a banded LU.

not that you have to go through the GenericFactorization interface. I think most people would be confused by the difference between solve(prob,GenericFactorization(fact_alg=lu)) and solve(prob,LUFactorization()).
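Side by side, the two calls in question look like this (a sketch of the distinction only; at the time of this thread both failed on BandedMatrix, as the next comment explains):

using LinearSolve
using LinearAlgebra
using BandedMatrices
A = BandedMatrix(Ones(5,5),(0,1))
b = ones(5)
prob = LinearProblem(A,b)
solve(prob, GenericFactorization(fact_alg = lu)) # explicitly routes the factorization through lu
solve(prob, LUFactorization()) # LinearSolve's dedicated LU algorithm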

ChrisRackauckas commented 1 year ago

It seems it's an issue with BandedMatrices.jl: lu isn't defined but factorize is. We should mention that in the docs.
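So the workaround at the time was to route through factorize explicitly (a sketch; factorize is defined for BandedMatrix even though lu was not):

using LinearSolve
using LinearAlgebra
using BandedMatrices
A = BandedMatrix(Ones(5,5),(0,1))
b = ones(5)
prob = LinearProblem(A,b)
sol = solve(prob, GenericFactorization(fact_alg = factorize)) # uses BandedMatrices' factorize method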

avik-pal commented 6 months ago

For BlockDiagonals and BandedMatrices, I had added specializations. BandedLU is defined (https://github.com/JuliaLinearAlgebra/BandedMatrices.jl/blob/master/src/banded/BandedLU.jl), but I don't think ldiv! is defined.
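For context, what the factorization path ultimately calls is roughly an in-place ldiv! on the stored factorization (a sketch of the call in question; lu here refers to the BandedLU linked above):

using LinearAlgebra
using BandedMatrices
A = BandedMatrix(Ones(5,5),(0,1))
b = ones(5)
F = lu(A) # the banded LU from BandedLU.jl
x = similar(b)
ldiv!(x, F, b) # the in-place solve method that may be missing for the banded factorization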