[Closed] mtsokol closed 1 month ago
So I think matmul_example.py
is written exactly as a user would write it, and it shows a speedup compared to Numba.
Thanks, @mtsokol. Waiting on the release and CI before I review.
Ping me when the release is up, I'll approve.
@hameerabbasi The PR is ready (except for the Finch Array API job hanging at 95%).
Is the benchmark any faster with lazy indexing?
@willow-ahrens I would say matmul with the lazy notation is slightly faster, but the a @ b
notation is much closer to what a user would write compared to the lazy indexing form:
```
SIZE = 100000 x 100000
DENSITY = 0.00001
FORMAT = csr
ITERS = 3
######
# Finch a @ b
Finch
Took 0.040337721506754555 s.
Numba
Took 2.880397001902262 s.
SciPy
Took 0.0067259470621744795 s.
######
# Finch lazy indexing
Finch
Took 0.05138166745503744 s.
Numba
Took 2.861244281133016 s.
SciPy
Took 0.006536006927490234 s.
######
```
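For context, the SciPy baseline above can be reproduced with a short harness like the following. This is a sketch, not the PR's matmul_example.py: the sizes are scaled down from the benchmarked 100000 x 100000 / 0.00001 configuration so it runs in seconds, and all variable names are assumptions.

```python
# Hypothetical re-run of the SciPy baseline (scaled-down sizes; not the PR's script).
import time

import numpy as np
import scipy.sparse as sp

SIZE = 1000      # scaled down from 100000 for a quick run
DENSITY = 0.001  # scaled up so the matrices are not effectively empty
ITERS = 3

# Random CSR operands, matching the FORMAT = csr setting above.
a = sp.random(SIZE, SIZE, density=DENSITY, format="csr", random_state=0)
b = sp.random(SIZE, SIZE, density=DENSITY, format="csr", random_state=1)

start = time.perf_counter()
for _ in range(ITERS):
    c = a @ b
elapsed = (time.perf_counter() - start) / ITERS

print(f"SciPy CSR matmul: {elapsed:.6f} s per iteration, result shape {c.shape}")
```

The absolute timings will of course differ from the numbers above, which were measured at the full 100000 x 100000 size.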
Hi @hameerabbasi,
Once we have new Finch.jl and finch-tensor releases we can merge it, as we solved the floor_divide return dtype mismatch. In matmul_example.py we can replace lazy indexing with the actual @
operator, as for the benchmarked case (on my machine) Finch is ~40x faster than Numba (and ~7x slower than SciPy).