Hi,
The sparsity of the Schur complement matrix depends strongly on the structure of the blocks used to compute it, even if those blocks are themselves sparse. Some time ago I looked into sparse Schur complement computation, and all the solvers I examined allocated a dense matrix for the result, because in general you cannot guarantee that the result will be sparse. From what I remember of the MUMPS documentation, a dense Schur complement matrix is always returned, but it is worth verifying there.
Guillaume
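To illustrate the fill-in, here is a tiny Julia sketch (all names and values made up) where every block is sparse yet the Schur complement is completely dense:

```julia
using LinearAlgebra, SparseArrays

# Made-up example: B, C, and D are all sparse, yet the Schur complement
# S = D - C * B⁻¹ * Cᵀ is completely dense, because every Schur variable
# couples (through C) to the same single interior variable.
n = 6
B = spdiagm(0 => fill(2.0, n))             # sparse interior block
C = sparse(1:n, fill(1, n), ones(n), n, n) # sparse coupling: one shared column
D = spdiagm(0 => fill(4.0, n))
S = D - C * (B \ Matrix(C'))               # dense despite all-sparse inputs
count(!iszero, S) == n^2                   # true: complete fill-in
```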
I checked the documentation tonight (https://mumps-solver.org/doc/userguide_5.6.2.pdf) and it seems that the returned Schur complement is always dense (section 5.15).
Ahh, fair enough. For some reason I thought the whole point of the Schur complement trick was to get a sparse result, but I guess the unfortunate thing with sparse solves is that one cannot guarantee sparsity 😢 (what is the application of the Schur trick then? Small sparse matrices?)
In my application $B$ is block-diagonal. As such, it is possible to form $B^{-1}$ explicitly by inverting each block and then simply perform the product of the two sparse matrices, i.e. $B^{-1}A$. However, I am not sure about the numerical stability of the explicit inversions. Do you have any other ideas?
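For what it's worth, a minimal sketch of that explicit-inversion approach (block sizes and values are made up; `blockdiag` reassembles the sparse inverse):

```julia
using LinearAlgebra, SparseArrays

# Hypothetical block-diagonal B (4 blocks of size 3) and a sparse A.
blocks = [rand(3, 3) + 3I for _ in 1:4]    # made-up, well-conditioned blocks
A = sprand(12, 5, 0.2)

# Invert each block explicitly, assemble B⁻¹ as a sparse block-diagonal
# matrix, and multiply; sparse * sparse keeps the product sparse.
Binv = blockdiag([sparse(inv(Bk)) for Bk in blocks]...)
X = Binv * A                               # SparseMatrixCSC
```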
Instead of inverting each block of $B$, simply compute a factorization of each. You're not saying whether the blocks are dense or sparse; use the corresponding adequate factorization. That is what would happen if you computed the inverse anyway, except with the added cost of a solve for each column of the inverse (which you don't need). Once you have the factorizations, it is a simple matter of combining them to obtain $B^{-1}A$.
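A rough sketch of what that could look like, assuming dense blocks of equal (made-up) size and an LU factorization per block:

```julia
using LinearAlgebra, SparseArrays

# Hypothetical setup: 4 dense 3×3 blocks of B and a conforming sparse A.
blocks = [rand(3, 3) + 3I for _ in 1:4]
A = sprand(12, 5, 0.2)

# Factorize each block once (LU here; cholesky would suit SPD blocks)
# and reuse the factors, rather than forming explicit inverses.
factors = [lu(Bk) for Bk in blocks]
X = zeros(size(A))                        # the result is dense in general
for (k, Fk) in enumerate(factors)
    rows = (3k - 2):(3k)
    X[rows, :] = Fk \ Matrix(A[rows, :])  # a pair of triangular solves per column
end
```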
For the time being, the blocks are either diagonal or dense (but with structure such that the inverse is diagonal plus low-rank).
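In that diagonal-plus-low-rank case, the inverse can be applied without ever assembling it. A sketch with made-up $D$, $U$, $V$ such that $B_i^{-1} = D + UV^\top$:

```julia
using LinearAlgebra, SparseArrays

# Hypothetical data: B_i⁻¹ = D + U*V' with D diagonal and U, V low-rank.
n, r = 100, 3
D = Diagonal(rand(n) .+ 1.0)
U, V = rand(n, r), rand(n, r)
Ai = sprand(n, 20, 0.05)

# Apply B_i⁻¹ to A_i without assembling it: the diagonal part preserves
# the sparsity of A_i, while the low-rank correction is dense but cheap.
Xi = D * Ai + U * (V' * Ai)
```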
That being said, I have tried the factorization approach for general blocks, which simply reduces to performing lots of `B_i \ A_i` solves ($B_i$ is the square block, and $A_i$ is rectangular and sparse). However, here I again have the issue that, even when $A_i$ is sparse, Julia returns the solution `B_i \ A_i` as a dense matrix. In hindsight this makes sense, as one cannot, in general, guarantee that `B_i \ A_i` is sparse.
BasicLU.jl handles sparse right-hand sides. You could use it to factorize each $B_i$ and call `solve!` on each column of $A_i$.
I only briefly looked into BasicLU.jl. It seemed to me that the sparse right-hand sides could only be vectors, not matrices. I will take another look. Thanks for the inputs!
Yes, it handles only sparse vectors, but you can call `solve!` on each column of $A_i$, which is easy when $A_i$ is stored in CSC format.
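The column-by-column pattern is straightforward with CSC storage. A generic sketch, where `solve_column` is a hypothetical stand-in for whatever sparse-rhs solve your factorization provides (e.g. BasicLU.jl's `solve!`):

```julia
using SparseArrays

# Generic column-wise solve over a CSC right-hand side. `F` is an already
# computed factorization of B_i; `solve_column(F, aj)` is assumed to take
# a sparse vector and return a sparse solution.
function solve_columns(F, A::SparseMatrixCSC{Tv,Ti}, solve_column) where {Tv,Ti}
    n, m = size(A)
    cols = Vector{SparseVector{Tv,Ti}}(undef, m)
    for j in 1:m
        rng = nzrange(A, j)                        # nonzeros of column j
        aj = sparsevec(rowvals(A)[rng], nonzeros(A)[rng], n)
        cols[j] = solve_column(F, aj)              # sparse in, sparse out
    end
    return hcat(cols...)                           # reassemble a sparse matrix
end
```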
On Slack I was told about the SparseSparse.jl package. It seems to do the job if I ignore the block structure, but I was already doing that when trying to use MUMPS.
Anyhow, thanks for the many great inputs!
Hi there,
I am trying to compute the product $B^{-1}A$ using the Schur complement (both matrices are sparse). I have not been able to find any example, but I got something to work after copying some code directly from `mumps_schur_complement(A::AbstractArray, x)` (I could not use it directly, as the line `mumps = Mumps(A)` seems to throw errors). My issue is now that the computed Schur complement (`S`) seems to be returned as a dense matrix, even though it is extremely sparse. Is there some input variable that I am missing, or does it simply always return a dense matrix?

Cheers, Mikkel
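For reference, a minimal sketch of the call discussed above; treating `x` as the set of indices that defines the Schur block is an assumption on my part, so check the MUMPS.jl docstring:

```julia
using MUMPS, MPI, SparseArrays, LinearAlgebra

MPI.Init()
A = sprand(10, 10, 0.3) + 10.0I       # made-up sparse matrix
schur_inds = 7:10                     # assumed: indices of the Schur block
S = mumps_schur_complement(A, schur_inds)
# Per Section 5.15 of the MUMPS user guide, S comes back dense;
# sparse(S) would convert it after the fact if desired.
MPI.Finalize()
```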