Hi @willow-ahrens @hameerabbasi,

This issue is meant to track progress on implementing the Array API standard for finch-tensor.

I thought we could try adding short notes to the bullet points, saying which Finch.jl functions should be called to implement a given entry. I think we already had some ideas during one of our first calls.

Array API: https://data-apis.org/array-api/latest/index.html
## Backlog

### main namespace

- `astype` - https://github.com/willow-ahrens/finch-tensor/pull/15 - eager
- `add`, `multiply`, `cos`, ... - https://github.com/willow-ahrens/finch-tensor/pull/17 (partially...)
- `xp.prod`, `xp.sum` - `jl.sum` and `jl.prod`, also just `jl.reduce` - https://github.com/willow-ahrens/finch-tensor/pull/17
- `matmul` - implemented with `finch.tensordot` for non-stacked input. Should be rewritten with `jl.mul` / Finch einsum.
- `tensordot` - `finch.tensordot` - https://github.com/willow-ahrens/finch-tensor/pull/22
- `where` - `jl.broadcast(jl.ifelse, cond, a, b)` - https://github.com/willow-ahrens/finch-tensor/pull/30
- `argmin`/`argmax` - `jl.argmin` (bug willow if this isn't implemented already) - eager for now
- `take` - `jl.getindex` - eager for now
- `nonzero` - this is an eager function, but it is implemented as `ffindnz(arr)` - https://github.com/willow-ahrens/finch-tensor/pull/30
- `asarray`, `ones`, `full`, `full_like`, ... - `finch.Tensor` constructor, as well as `jl.copyto!(arr, jl.broadcasted(Scalar(1)))`, as well as changing the default of the tensor with `Tensor(Dense(Element(1.0)))`. We may need to distinguish some of these. https://github.com/willow-ahrens/finch-tensor/pull/28, https://github.com/willow-ahrens/finch-tensor/pull/32
- `max`, `mean`, `min`, `std`, `var`
- `unique_all`, `unique_counts`, `unique_inverse`, `unique_values` - eager
- `all`, `any`
- `concat` - eager for now
- `expand_dims` - lazy
- `flip` - eager for now
- `reshape` - eager for now
- `roll` - eager for now
- `squeeze` - lazy
- `stack` - eager for now
- `argsort`/`sort` - eager
- `broadcast_arrays` - eager for now
- `broadcast_to` - eager for now
- `can_cast`/`finfo`/`iinfo`/`result_type`
- `bitwise_and`/`bitwise_left_shift`/`bitwise_invert`/`bitwise_or`/`bitwise_right_shift`/`bitwise_xor`
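As a side note on the `where` entry above: elementwise `where` is just `ifelse` applied pointwise across the operands, which is why lowering it to `jl.broadcast(jl.ifelse, cond, a, b)` works. A minimal pure-Python sketch of the semantics (1-D, equal lengths, no real broadcasting; `ifelse` here is a stand-in for Julia's `ifelse`, not the finch-tensor API):

```python
def ifelse(c, a, b):
    # Mirrors Julia's ifelse(cond, x, y): x when cond is true, else y.
    return a if c else b

def where(cond, a, b):
    # Pointwise application of ifelse, mirroring
    # jl.broadcast(jl.ifelse, cond, a, b) for equal-length 1-D inputs.
    return [ifelse(c, x, y) for c, x, y in zip(cond, a, b)]

print(where([True, False, True], [1, 2, 3], [10, 20, 30]))  # → [1, 20, 3]
```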
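Similarly, for the `nonzero` entry: `ffindnz(arr)` gives the coordinates and values of stored entries, while the Array API wants a tuple of index arrays (one per dimension) in row-major order, covering only genuinely nonzero elements. A hedged pure-Python sketch of that post-processing for the 2-D case, assuming COO-style coordinate lists (names are illustrative, not the finch-tensor API):

```python
def nonzero_from_coo(rows, cols, vals):
    # Convert findnz-style output (coordinate lists plus values) into the
    # Array API `nonzero` result: one index list per dimension, with
    # explicit stored zeros filtered out and indices in row-major order.
    kept = sorted((r, c) for r, c, v in zip(rows, cols, vals) if v != 0)
    return ([r for r, _ in kept], [c for _, c in kept])

# Stored entries of a 3x3 sparse matrix, including one explicit zero.
rows, cols, vals = [2, 0, 1], [0, 1, 2], [5, 7, 0]
print(nonzero_from_coo(rows, cols, vals))  # → ([0, 2], [1, 0])
```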
### `linalg` namespace

(I copied those from the benchmark suite. If something turns out to be unfeasible we can drop it.)

- `linalg.vecdot` - `finch.tensordot`
- `linalg.vector_norm` - `finch.norm`
- `linalg.trace` - eager
- `linalg.tensordot` - implemented in the main namespace. Just needs an alias
- `linalg.outer`
- `linalg.cross` - eager for now
- `linalg.matrix_transpose` - lazy
- `linalg.matrix_power` - eager (call matmul on sparse matrix until it gets too dense)
- `linalg.matrix_norm` - for `nuc` or `2`, call external library. For `fro`, `inf`, `1`, `0`, `-1`, `-inf`, call `jl.norm`.
- `xp.linalg.diagonal` - `finch.tensordot(finch.diagmask(), mtx)`
- `xp.linalg.cholesky` - call CHOLMOD or something
- `xp.linalg.det` - call EIGEN or something
- `xp.linalg.eigh` - call external library
- `xp.linalg.eigvalsh` - call external library
- `xp.linalg.inv` - call external library - `scipy.sparse.linalg.inv`
- `xp.linalg.matrix_rank` - call external library
- `xp.linalg.pinv` - call external library
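The `matrix_norm` split above (external library for `nuc`/`2`, a `jl.norm`-style computation otherwise) amounts to a dispatch on `ord`. A rough pure-Python sketch of that routing for the cheap cases, on a dense list-of-lists matrix (illustrative only; the real implementation would lower to Finch):

```python
import math

def matrix_norm(rows, ord="fro"):
    # Dispatch on `ord` as the linalg.matrix_norm entry suggests:
    # cheap entrywise/row/column norms computed directly, spectral
    # norms deferred to an external library (they need an SVD).
    if ord in ("nuc", 2, -2):
        raise NotImplementedError("needs an SVD from an external library")
    if ord == "fro":      # entrywise 2-norm
        return math.sqrt(sum(x * x for row in rows for x in row))
    if ord == math.inf:   # max absolute row sum
        return max(sum(abs(x) for x in row) for row in rows)
    if ord == 1:          # max absolute column sum
        return max(sum(abs(row[j]) for row in rows) for j in range(len(rows[0])))
    if ord == -math.inf:  # min absolute row sum
        return min(sum(abs(x) for x in row) for row in rows)
    if ord == -1:         # min absolute column sum
        return min(sum(abs(row[j]) for row in rows) for j in range(len(rows[0])))
    raise ValueError(f"unsupported ord: {ord!r}")

A = [[1, -2], [3, 4]]
print(matrix_norm(A, "fro"))     # → sqrt(30) ≈ 5.477
print(matrix_norm(A, math.inf))  # → 7
print(matrix_norm(A, 1))         # → 6
```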
### `Tensor` methods and attributes

- `Tensor.to_device()` - `finch.moveto`

### miscellaneous