JuliaSparse / SparseArrays.jl

SparseArrays.jl is a Julia stdlib
https://sparsearrays.juliasparse.org/

Make sparse linear algebra operations work on all special matrices #110

Open ViralBShah opened 2 years ago

ViralBShah commented 2 years ago

Just like LU in JuliaSparse/SparseArrays.jl#131, we also need these:

In addition, we should put promotion rules in place at the very least:

Also, need to make these all work for Symmetric, Hermitian, [etc.] (#13, JuliaSparse/SparseArrays.jl#125) (https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/#Special-matrices)

Much of this essentially needs to have promotion rules in place to convert the matrix to a regular sparse matrix and then call the usual method. Over time, we can optimize certain paths.
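As a minimal sketch of what such a fallback could look like (the function name `fallback_lu` is hypothetical, chosen to avoid piracy on `lu` itself): materialize the special wrapper as a plain `SparseMatrixCSC` and dispatch to the existing sparse method.

```julia
using SparseArrays, LinearAlgebra

# Illustrative fallback, not the actual SparseArrays.jl implementation:
# convert a Symmetric-wrapped sparse matrix to plain CSC, then call the
# usual sparse lu (UMFPACK). A specialized path could replace this later.
fallback_lu(A::Symmetric{<:Any, <:SparseMatrixCSC}) = lu(sparse(A))

S = sparse([2.0 1.0; 1.0 3.0])
F = fallback_lu(Symmetric(S))
x = F \ [1.0, 2.0]   # solves [2 1; 1 3] * x = [1, 2]
```

The copy in `sparse(A)` is the cost of the promotion; the point of the discussion below is whether that cost is acceptable as a default.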

Similar issue in the dense case that has a complete list of factorizations (not all of which may have sparse counterparts): https://github.com/JuliaLang/julia/issues/38293

rayegun commented 2 years ago

We don't necessarily have to be clever for a lot of these, right? If there's no simple, extant way to specialize, just perform a copy, and then maybe write faster versions where requested?
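The "just perform a copy" fallback might look like this (again a hypothetical helper, `naive_lu`, rather than a method added to `lu` itself): `copy` on a lazy `Transpose` wrapper of a `SparseMatrixCSC` materializes the transposed matrix, which can then be factorized normally.

```julia
using SparseArrays, LinearAlgebra

# Illustrative-only fallback: no cleverness, just materialize the lazy
# Transpose wrapper into a plain SparseMatrixCSC and factorize that.
naive_lu(A::Transpose{<:Any, <:SparseMatrixCSC}) = lu(copy(A))

A = sparse([4.0 1.0; 2.0 3.0])
F = naive_lu(transpose(A))
x = F \ [1.0, 1.0]   # solves transpose(A) * x = [1, 1]
```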

fredrikekre commented 2 years ago

Wouldn't errors be better? Then you could construct the correct (i.e. the transposed) matrix directly, instead of thinking there is an efficient transpose solver.

ViralBShah commented 2 years ago

My thought was that the transpose is a small fraction of the time compared to the factorization, which usually also involves fill-in and memory growth. So if we don't promote, user code will just have copy statements all over. In some of these special cases the matrices are also symmetric, so little extra work is needed. It is true that some users may prefer to avoid these copies; perhaps we can add notes about that to the documentation?

I believe in many of these cases, the underlying solvers do support transpose solves, and we can optimize them over time.

Right now, all this is a bit all over the place. Some operations work, some have fallbacks, and the rest error, and it is inconsistent with the dense versions.

rayegun commented 2 years ago

This is also what LinearAlgebra does for dense anyway, right? I see copy_oftype being done on lu(A) for Matrix (although in that case it's a byproduct of in-place lu!). I think matching the dense behavior here is better. I will make a note in the docs of where copies occur.

On the other hand, at least for UMFPACK, Dr. Davis recommended that we don't optimize by defining lu(A::Transpose{T, SparseMatrixCSC{T}}) = lu(parent(A))'. While the solve results will be the same, the quality of the factorization itself can differ. So in some cases it may be better to keep the lu(A::Transpose) = lu(copy(A)) code path intact. In other words, support transpose solves by doing fact = lu(A); fact' \ b. This should be documented of course, with the recommendation that users test both and determine which works better for them.
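The two routes above can be sketched concretely; both solve the same transpose system, but they build different factorizations under the hood (this is a usage sketch, not a statement about which is faster):

```julia
using SparseArrays, LinearAlgebra

A = sparse([4.0 1.0; 2.0 3.0])
b = [1.0, 1.0]

# Route 1: factor A once, then use the adjoint of the factorization.
# UMFPACK performs the transpose solve on the existing factorization.
F = lu(A)
x1 = F' \ b

# Route 2: materialize the transpose and factor it separately.
# The factorization quality (fill-in, pivoting) can differ from route 1.
x2 = lu(copy(A')) \ b
```

Both `x1` and `x2` solve `A' * x = b`; the recommendation in the comment above is to try both and measure.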