pydata / sparse

Sparse multi-dimensional arrays for the PyData ecosystem
https://sparse.pydata.org
BSD 3-Clause "New" or "Revised" License

Upgrade `finch-tensor` #702

Closed: mtsokol closed this 1 month ago

mtsokol commented 1 month ago

Hi @hameerabbasi,

Once the new Finch.jl and finch-tensor releases are out, we can merge this, as we have solved:

mtsokol commented 1 month ago

So I think `matmul_example.py` is written exactly as a user would write it, and it shows a speedup compared to Numba.
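
For context, here is a minimal sketch of the kind of eager timing comparison a script like `matmul_example.py` performs. The SciPy parts are standard; the finch-tensor calls (constructing `finch.Tensor` from a `scipy.sparse` matrix and dispatching `@` to a sparse kernel) are assumptions about the library's API rather than code taken from the PR:

```python
# Sketch of an eager sparse matmul timing comparison (not the actual
# matmul_example.py). The finch-tensor constructor used below is an
# assumption; check the finch-tensor docs for the exact API.
import time

import scipy.sparse as sps
import finch  # finch-tensor package

SIZE = 100_000
DENSITY = 1e-5
ITERS = 3

a_sps = sps.random(SIZE, SIZE, density=DENSITY, format="csr")
b_sps = sps.random(SIZE, SIZE, density=DENSITY, format="csr")

# Assumed: finch.Tensor accepts a scipy.sparse matrix.
a = finch.Tensor(a_sps)
b = finch.Tensor(b_sps)

def timeit(fn):
    # Average wall-clock time over ITERS runs.
    start = time.perf_counter()
    for _ in range(ITERS):
        fn()
    return (time.perf_counter() - start) / ITERS

print("Finch a @ b:", timeit(lambda: a @ b), "s")
print("SciPy      :", timeit(lambda: a_sps @ b_sps), "s")
```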

hameerabbasi commented 1 month ago

Thanks, @mtsokol. Waiting on the release and CI before I review.

hameerabbasi commented 1 month ago

Ping me when the release is up, I'll approve.

mtsokol commented 1 month ago

@hameerabbasi The PR is ready (except for the Finch Array API job hanging at 95%).

willow-ahrens commented 1 month ago

Is the benchmark any faster with lazy indexing?

mtsokol commented 1 month ago

> Is the benchmark any faster with lazy indexing?

@willow-ahrens I would say matmul with the lazy notation is slightly faster, but the `a @ b` notation is much closer to what a user would write than the lazy indexing form (both spellings are sketched below, after the timings):

SIZE = 100000 x 100000
DENSITY = 0.00001
FORMAT = csr
ITERS = 3

######
# Finch a @ b
Finch
Took 0.040337721506754555 s.

Numba
Took 2.880397001902262 s.

SciPy
Took 0.0067259470621744795 s.
######
# Finch lazy indexing
Finch
Took 0.05138166745503744 s.

Numba
Took 2.861244281133016 s.

SciPy
Took 0.006536006927490234 s.
######
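
For readers comparing the two spellings above, here is a minimal sketch of what they look like. The lazy API names (`finch.lazy` / `finch.compute`, mirroring Finch.jl's lazy/compute) and the random constructor are assumptions about finch-tensor's interface, not code taken verbatim from `matmul_example.py`:

```python
# Sketch contrasting the eager and lazy spellings of a sparse matmul.
# finch.random, finch.lazy, and finch.compute are assumed API names.
import finch

a = finch.random((1000, 1000), density=1e-3)  # assumed constructor
b = finch.random((1000, 1000), density=1e-3)

# Eager form: what a user would naturally write.
c_eager = a @ b

# Lazy form: build the expression first, then let Finch schedule/fuse it.
c_lazy = finch.compute(finch.lazy(a) @ finch.lazy(b))
```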