Up to 200x Faster Dot Products & Similarity Metrics — for Python, Rust, C, JS, and Swift, supporting f64, f32, f16 real & complex, i8, and bit vectors using SIMD for both AVX2, AVX-512, NEON, SVE, & SVE2 📐
Describe what you are looking for
In NumPy/SciPy, A @ B is much faster if A is converted to a SciPy sparse matrix. This holds whether both A and B are sparse, or only A is sparse and B is dense. Can SimSIMD improve such matrix multiplications?
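For reference, a minimal NumPy/SciPy sketch of the comparison described above. The matrix shapes and sparsity level are made up for illustration, and this is not SimSIMD code, just the existing SciPy behavior the request refers to:

```python
import time

import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)

# Dense matrix where roughly 1% of the entries are non-zero (hypothetical sizes).
A = rng.random((2000, 2000))
A[A > 0.01] = 0.0
B = rng.random((2000, 200))

# Dense product.
start = time.perf_counter()
dense_result = A @ B
dense_time = time.perf_counter() - start

# Same product after converting A to CSR, which stores only the non-zero entries.
A_sparse = csr_matrix(A)
start = time.perf_counter()
sparse_result = A_sparse @ B
sparse_time = time.perf_counter() - start

assert np.allclose(dense_result, sparse_result)
print(f"dense:  {dense_time:.4f} s")
print(f"sparse: {sparse_time:.4f} s")
```

On matrices this sparse, the CSR product typically wins by a wide margin, since it skips the zero entries entirely.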
Can you contribute to the implementation?
[ ] I can contribute
Is your feature request specific to a certain interface?
It applies to everything
Contact Details
No response
Is there an existing issue for this?
[X] I have searched the existing issues
Code of Conduct
[X] I agree to follow this project's Code of Conduct