briochemc opened this issue 1 year ago
Hi Benoit,
The issue with sparse matrices is that most of the work in factorization is the graph theory you have to do. Low precision computations don't buy you much in the sparse case. Storing the factorization in low precision is a good deal (sometimes) but you'll have to hack the sparse factorization to do that.
There's some literature on the sparse case, but those papers don't really stress test it.
Right now what I have works in the dense case. If the high-precision matrix is double and you factor in single, then

```julia
using MultiPrecisionArrays
A = rand(5000, 5000)
b = rand(5000)
MPA = MPArray(A)
MPF = mplu!(A)
x = MPF \ b
```
is about twice as fast as

```julia
LUF = lu!(A)
xx = LUF \ b
```
See for yourself.
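For intuition, here is textbook iterative refinement in plain Julia — not the MultiPrecisionArrays API; `ir_solve` and its tolerances are made up for illustration. The point is the same as above: one cheap single-precision factorization, reused to drive the residual down to double-precision accuracy.

```julia
using LinearAlgebra

# Textbook iterative refinement: factor once in Float32, correct in Float64.
function ir_solve(A::Matrix{Float64}, b::Vector{Float64}; tol = 1e-12, maxit = 20)
    F = lu!(Float32.(A))                 # factor a single-precision copy (the cheap part)
    x = Float64.(F \ Float32.(b))        # initial solve with the low-precision factors
    for _ in 1:maxit
        r = b - A * x                    # residual computed in double
        norm(r) <= tol * norm(b) && break
        x .+= Float64.(F \ Float32.(r))  # correction step reuses the same factors
    end
    return x
end

A = rand(500, 500) + 50I                 # diagonal shift keeps the example well conditioned
b = rand(500)
x = ir_solve(A, b)
```

The factorization is O(n^3) and the refinement loop is O(n^2) per sweep, which is why doing the expensive part in single pays off for dense problems.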
If low precision is Float16, things get very weird. It'll take me some time to document that.
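A taste of why Float16 is delicate (these are just base-Julia facts, nothing package-specific): half precision carries roughly three decimal digits and overflows just past 6.5e4, so intermediate quantities produced during elimination can hit the ceiling unless the problem is scaled carefully.

```julia
println(eps(Float16))       # unit roundoff scale, 2^-10: about 3 decimal digits
println(floatmax(Float16))  # overflow threshold: 65504
println(Float16(70000.0))   # already past the representable range, rounds to Inf
```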
— Tim
On Tue, Aug 15, 2023, Benoit Pasquier wrote:
> This looks fantastic! Will it work for large systems with sparse-matrix linear solvers too?
Error: try `MPF = mplu!(MPA)`
I'm not closing this issue in the hope that you can come up with a cool idea for the sparse case.
The API for this mess is in Heisenberg mode.
I asked because I have found that in some cases my (sparse) factorizations temporarily require huge amounts of memory (e.g., 10× more than the memory required to hold the Jacobian and its factors) and so I was thinking that maybe that could be helpful there :)
> I'm not closing this issue in the hope that you can come up with a cool idea for the sparse case.
It's very kind of you to think I could 😅 but there are other people better equipped! Maybe @Wimmerer is interested in this?
Just registered the package.
This looks fantastic! Will it work for large systems with sparse-matrix linear solvers too?