bvdmitri opened this issue 3 years ago
If you wanted it to be efficient, this would need a bespoke implementation (the one in the package leverages BLAS/LAPACK internals for performance on dynamic arrays). However, one could easily take the output and convert back to an `SMatrix`; such functionality could be added under `@require`. Which do you prefer?
If you're tempted by the efficient implementation (and it could easily be a factor of 100 or more for a 3x3 array), note that all the complexity in this package is basically about efficient computation on large dynamic arrays and leveraging BLAS internals; the core idea is really pretty simple, and that's all you'd need for `SMatrix`. IIRC (it's been a while...) the entire algorithm is basically "do a Cholesky decomposition, taking the absolute value of the diagonals before computing the square root." So you could probably copy the implementation of `cholesky` in StaticArrays, modify it, and place it here. At that point it would be worth making StaticArrays a required dependency of this package, and we could even precompile some special cases (e.g., 2x2 and 3x3 `Float64` matrices).
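(For concreteness, here is a rough sketch of that idea for small static matrices. `positive_chol` is a made-up name and this is not the package's actual algorithm, just the "abs the pivot before the square root" recipe described above, applied in an unblocked upper-triangular Cholesky.)

```julia
using StaticArrays, LinearAlgebra

# Sketch only, not PositiveFactorizations' implementation: unblocked Cholesky
# for a small symmetric SMatrix (reads only the upper triangle), taking the
# absolute value of each pivot before the square root.
function positive_chol(A::SMatrix{N,N,T}) where {N,T}
    U = zeros(MMatrix{N,N,float(T)})       # mutable scratch for the factor
    @inbounds for j in 1:N
        s = A[j,j]
        for k in 1:j-1
            s -= U[k,j]^2
        end
        U[j,j] = sqrt(abs(s))              # the "positive" tweak: abs before sqrt
        for i in j+1:N
            t = A[j,i]
            for k in 1:j-1
                t -= U[k,j] * U[k,i]
            end
            U[j,i] = t / U[j,j]
        end
    end
    return UpperTriangular(SMatrix{N,N}(U))  # back to an immutable static type
end
```

Whether the `MMatrix` scratch actually stays off the heap depends on the compiler eliding it; a hand-unrolled version like StaticArrays' own `cholesky` would be the safer bet for peak speed.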
Which do you prefer?
I don't know 😅, it's probably up to you to decide whether you want to support StaticArrays.jl in your package. For us it is not that important, since we can always fall back to the built-in `cholesky` (without the extra correction), or, as you mentioned, we can convert the result back to an `SMatrix` on our side (which doesn't make a lot of sense from a performance point of view, but maybe StaticArrays.jl can still give us some performance in other parts of our application).
As I understand it, the main problem here is that StaticArrays are not mutable, so it is not possible to use them directly as storage with `similar`, for example?
I'm happy to make it a dependency as long as there is a "good" implementation. If it's just "convert to `Matrix`, compute the factorization, and convert back to `SMatrix`" then I'd go with the `@require` solution.
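Roughly, that `@require` fallback could look like the sketch below (illustrative only: the exact method signature, glue layout, and UUID would need to match the package and StaticArrays' Project.toml). It just converts to a dense `Matrix`, reuses the existing dense factorization, and rewraps the factor as an `SMatrix`:

```julia
# Hypothetical glue inside PositiveFactorizations, using Requires.jl.
using Requires, LinearAlgebra

function __init__()
    @require StaticArrays="90137ffa-7385-5640-81b9-e52037218182" begin
        using .StaticArrays
        # Dense fallback: factorize a Matrix copy, then restore the static type.
        function LinearAlgebra.cholesky(::Type{Positive}, A::SMatrix{N,N}) where {N}
            F = cholesky(Positive, Matrix(A))            # existing dense path
            return Cholesky(SMatrix{N,N}(F.U), 'U', 0)   # rewrap the factor as static
        end
    end
end
```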
> As I understand it, the main problem here is that StaticArrays are not mutable, so it is not possible to use them directly as storage with `similar`, for example?
Yes. But since StaticArrays already has a `cholesky` implementation, I think you could basically copy/paste it here, tweak it slightly, and add a `::Type{Positive{T}}` argument. Add tests and submit a PR, done.
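(In spirit, the PR could amount to something like the following sketch, made under assumptions: `positive_chol` is the hypothetical static kernel sketched earlier, and whether the dispatch argument is `Positive` or `Positive{T}` has to follow the package's existing methods.)

```julia
using StaticArrays, LinearAlgebra

# Illustrative only: route cholesky(Positive, ::SMatrix) to a static kernel and
# wrap the result in a Cholesky so it behaves like the dense version.
function LinearAlgebra.cholesky(::Type{Positive}, A::SMatrix{N,N}) where {N}
    U = positive_chol(A)                       # static factorization (sketch above)
    return Cholesky(SMatrix{N,N}(U), 'U', 0)   # keep the static type in the factor
end
```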
For you, here's the advantage:
```julia
julia> using StaticArrays, LinearAlgebra, BenchmarkTools

julia> Ms = @SMatrix [2 1; 1 2]; Md = [2 1; 1 2];

julia> @btime cholesky(Hermitian($Md))
  255.874 ns (3 allocations: 160 bytes)
Cholesky{Float64, Matrix{Float64}}
U factor:
2×2 UpperTriangular{Float64, Matrix{Float64}}:
 1.41421  0.707107
  ⋅       1.22474

julia> @btime cholesky(Hermitian($Ms))
  7.564 ns (0 allocations: 0 bytes)
Cholesky{Float64, SMatrix{2, 2, Float64, 4}}
U factor:
2×2 UpperTriangular{Float64, SMatrix{2, 2, Float64, 4}}:
 1.41421  0.707107
  ⋅       1.22474
```
A fast implementation of `cholesky(Positive, Ms)` should have a similar advantage.
Ah, nice! I can work on a PR next week.
Hey! I've spent some time checking the StaticArrays implementation; it looks quite complex and I didn't understand how to integrate it with this package. I'll leave this issue open, it is not urgent, as it also turned out that PositiveFactorizations.jl is not the only place in our application that does not play well with StaticArrays, and we need to fix a lot of other places in our code as well.
Hey, thanks for this great library! We recently encountered that PositiveFactorizations.jl loses type information if the input is a StaticArray. Not sure if it is easy to fix, but it would be great to make it work with StaticArrays as well.