Closed: amontoison closed this pull request 1 year ago.
Base: 84.98% // Head: 79.26% // Decreases project coverage by 5.72% :warning:
Coverage data is based on head (14c0f41) compared to base (c81dc76). Patch coverage: 60.49% of modified lines in the pull request are covered.
:umbrella: View full report at Codecov.
Hi, thanks for the PR. Could you indicate in the documentation how to use these routines, and add tests if possible? I think that the old mul! implementation is still tested.
Hi @geoffroyleconte, I haven't finished the PR, but you don't need to change anything to use the new routines. mul! will dispatch to the multithreaded SparseBLAS MKL routines when it can. I will add new tests and benchmarks, but the current tests should already exercise the new routines. It will be more efficient on Intel CPUs, but we could still see a significant speed-up on AMD / M1 processors because the routines are multithreaded.
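To illustrate the dispatch mechanism being described, here is a minimal sketch of how overloading mul! lets existing user code pick up a specialized COO kernel with no changes. The SimpleCOO type and its kernel below are hypothetical, written only for this example; they are not the actual SparseMatricesCOO.jl type or the MKL-backed methods from this PR.

```julia
using LinearAlgebra

# Hypothetical minimal COO container (illustration only).
struct SimpleCOO{T}
    m::Int
    n::Int
    rows::Vector{Int}
    cols::Vector{Int}
    vals::Vector{T}
end

# Overloading mul! means code that already calls mul!(y, A, x)
# automatically uses this kernel for SimpleCOO arguments --
# the same way the PR's MKL-backed methods are picked up.
function LinearAlgebra.mul!(y::Vector{T}, A::SimpleCOO{T}, x::Vector{T}) where T
    fill!(y, zero(T))
    for k in eachindex(A.vals)
        y[A.rows[k]] += A.vals[k] * x[A.cols[k]]
    end
    return y
end

# The 2x2 matrix [1 2; 0 3] stored in COO form.
A = SimpleCOO(2, 2, [1, 1, 2], [1, 2, 2], [1.0, 2.0, 3.0])
y = zeros(2)
mul!(y, A, [1.0, 1.0])   # y == [3.0, 3.0]
```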
Perfect, thanks! Yes, I think most of the tests use mul! instead of *, so that might be why there are some coverage issues.
> I will add new tests and benchmarks but the current tests should use the new routines.

They should, or they do?
They do, I checked with @code_warntype.
Time in seconds to perform 1000 matrix-vector products with matrices from the SuiteSparseMatrixCollection. It seems that the MKL routine for COO matrices is not multithreaded :(
It's not much faster. Maybe sparse COO matrix-vector products cannot be multithreaded? Even in Julia, I don't know how we could use the @threads macro, because we do not know the number of nonzeros in each column. Are the benchmarks performed on matrices with the Symmetric wrapper? I'm not sure that this function is the fastest possible with the t call: https://github.com/JuliaSmoothOptimizers/SparseMatricesCOO.jl/blob/c81dc762eb39f3a60b58163edd72eafc20558aaf/src/coo_linalg.jl#L66-L75
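The threading difficulty mentioned above can be made concrete with a sketch. In COO storage, nonzeros are not grouped by row or column, so a naive @threads loop over the entries would have two threads updating the same slot of y concurrently. One standard workaround (an assumption of this sketch, not what the PR or MKL does) is to give each thread a private accumulator and reduce at the end, trading memory for safety:

```julia
using Base.Threads

# Sequential COO kernel: safe, but the accumulation into y[rows[k]]
# is exactly why a naive `@threads for k` version would race --
# two entries in the same row write to the same slot of y.
function coo_mv!(y, rows, cols, vals, x)
    fill!(y, zero(eltype(y)))
    for k in eachindex(vals)
        y[rows[k]] += vals[k] * x[cols[k]]
    end
    return y
end

# Hypothetical workaround: one accumulator column per thread,
# reduced at the end. `:static` scheduling keeps threadid() stable
# for the duration of each iteration.
function coo_mv_threaded!(y, rows, cols, vals, x)
    acc = zeros(eltype(y), length(y), nthreads())
    @threads :static for k in eachindex(vals)
        acc[rows[k], threadid()] += vals[k] * x[cols[k]]
    end
    y .= vec(sum(acc; dims = 2))
    return y
end

# The 2x2 matrix [1 2; 0 3] in COO form; both kernels agree.
rows, cols, vals = [1, 1, 2], [1, 2, 2], [1.0, 2.0, 3.0]
x = [1.0, 1.0]
y1 = coo_mv!(zeros(2), rows, cols, vals, x)
y2 = coo_mv_threaded!(zeros(2), rows, cols, vals, x)
```

The per-thread buffer costs O(n * nthreads()) extra memory, which is one reason a threaded COO kernel is less attractive than CSR.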
No, I just tested with generic COO matrices. I will compare with Symmetric matrices. I think it can be multithreaded, because Intel was able to implement multithreaded CSC-vector products; it's only "easy" for CSR-vector products.
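The "easy for CSR" remark above is because CSR groups nonzeros by row, so parallelizing over rows gives each thread exclusive ownership of its slots of y. A minimal sketch (an illustrative kernel, not the MKL routine):

```julia
using Base.Threads

# CSR matvec: each iteration i writes only y[i], so threads never
# touch the same output slot -- no races, no atomics, no reduction.
function csr_mv!(y, rowptr, colind, vals, x)
    @threads for i in eachindex(y)
        s = zero(eltype(y))
        for k in rowptr[i]:(rowptr[i+1] - 1)
            s += vals[k] * x[colind[k]]
        end
        y[i] = s
    end
    return y
end

# The 2x2 matrix [1 2; 0 3] in CSR form.
rowptr, colind, vals = [1, 3, 4], [1, 2, 2], [1.0, 2.0, 3.0]
y = csr_mv!(zeros(2), rowptr, colind, vals, [1.0, 1.0])
```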
We get the same performance for Symmetric matrices.
Not exactly in favor of MKL. What's going on here?
> I'm not sure that this function is the fastest possible with the t call:

A little meta-programming might settle that.
> Not exactly in favor of MKL. What's going on here?

I interfaced deprecated routines (coo_mv). I should try the new and generic mkl_sparse_mv.
I will need Clang.jl to generate the interface.
I interfaced the new routines; the MKL version is slower than our Julia version.
who needs the MKL? 😃
I'm closing the PR. MKL is not relevant for the COO format.
I just added a dummy copy of our COO sparse matrix in MKLSparse.jl to easily test whether the results are better in the future: https://github.com/JuliaSparse/MKLSparse.jl/pull/33/files#diff-6241561dad231b63a9c2e46e7ff91ecfb885c834a98b621364ab11d323086af7R1-R19
@AntoninKns