Hi @willow-ahrens @hameerabbasi,

This issue is meant to track progress of implementing the Array API standard for finch-tensor. I thought that we could try adding short notes to the bullet points, saying which Finch.jl functions should be called to implement a given entry. I think we already had some ideas during one of our first calls.

Array API: https://data-apis.org/array-api/latest/index.html
## Backlog

### main namespace
- `astype` - https://github.com/willow-ahrens/finch-tensor/pull/15
- element-wise functions (`add`, `multiply`, `cos`, ...) - https://github.com/willow-ahrens/finch-tensor/pull/17 (partially...)
- reductions (`xp.prod`, `xp.sum`) - `jl.sum` and `jl.prod`, also just `jl.reduce` - https://github.com/willow-ahrens/finch-tensor/pull/17
- `matmul` - implemented with `finch.tensordot` for non-stacked input. Should be rewritten with `jl.mul` / Finch einsum.
- `tensordot` - `finch.tensordot` - https://github.com/willow-ahrens/finch-tensor/pull/22
- `where` - `jl.broadcast(jl.ifelse, cond, a, b)` - https://github.com/willow-ahrens/finch-tensor/pull/30
- `argmin`/`argmax` - `jl.argmin` (bug willow if this isn't implemented already)
- `take` - `jl.getindex`
- `nonzero` - this is an eager function, but it is implemented as `ffindnz(arr)` - https://github.com/willow-ahrens/finch-tensor/pull/30
- `asarray`, `ones`, `full`, `full_like`, ... - the `finch.Tensor` constructor, as well as `jl.copyto!(arr, jl.broadcasted(Scalar(1)))`, as well as changing the default of the tensor with `Tensor(Dense(Element(1.0)))`. We may need to distinguish some of these. https://github.com/willow-ahrens/finch-tensor/pull/28, https://github.com/willow-ahrens/finch-tensor/pull/32
- `max`, `mean`, `min`, `std`, `var`
- `unique_all`, `unique_counts`, `unique_inverse`, `unique_values`
- `all`, `any`
- `concat`
- `expand_dims`
- `flip`
- `reshape`
- `roll`
- `squeeze`
- `stack`
- `argsort`/`sort`
- `broadcast_arrays`
- `broadcast_to`
- `can_cast`/`finfo`/`iinfo`/`result_type`
- `bitwise_and`/`bitwise_left_shift`/`bitwise_invert`/`bitwise_or`/`bitwise_right_shift`/`bitwise_xor`
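As a quick sanity reference for two of the mappings above (not finch-tensor code): a pure-Python sketch of the Array API semantics that the broadcasted-ifelse formulation of `where` and the eager findnz-style `nonzero` need to reproduce. The `where_ref`/`nonzero_ref` names are hypothetical, used only for illustration here.

```python
# Pure-Python reference for the Array API semantics targeted above.
# `where_ref` and `nonzero_ref` are illustration-only names; the real
# implementations would forward to jl.broadcast(jl.ifelse, ...) and to
# an eager ffindnz-style call on the Julia side.

def where_ref(cond, a, b):
    """Elementwise select: a[i] where cond[i] is true, else b[i].

    Mirrors the broadcasted-ifelse formulation for 1-D inputs.
    """
    return [x if c else y for c, x, y in zip(cond, a, b)]

def nonzero_ref(matrix):
    """Return a tuple of index lists, one per dimension, of nonzero entries.

    Array API `nonzero` is eager: it must materialize the indices,
    which is why it maps to an eager call rather than a lazy expression.
    """
    rows, cols = [], []
    for i, row in enumerate(matrix):
        for j, value in enumerate(row):
            if value != 0:
                rows.append(i)
                cols.append(j)
    return rows, cols
```

For example, `where_ref([True, False], [1, 2], [9, 8])` gives `[1, 8]`, and `nonzero_ref([[0, 5], [3, 0]])` gives `([0, 1], [1, 0])`.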
### linalg namespace

(I copied those from the benchmark suite. If something turns out to be unfeasible we can drop it.)
- `linalg.vecdot` - `finch.tensordot`
- `linalg.vector_norm`
- `linalg.trace`
- `linalg.tensordot` - implemented in the main namespace. Just needs an alias
- `linalg.outer`
- `linalg.cross`
- `linalg.matrix_transpose`
- `linalg.matrix_power`
- `linalg.matrix_norm` - for `nuc` or `2`, call an external library. For `fro`, `inf`, `1`, `0`, `-1`, `-inf`, call `jl.norm`.
- `xp.linalg.diagonal` - `finch.tensordot(finch.diagmask(), mtx)`
- `xp.linalg.cholesky` - call CHOLMOD or something
- `xp.linalg.det` - call EIGEN or something
- `xp.linalg.eigh` - call an external library
- `xp.linalg.eigvalsh` - call an external library
- `xp.linalg.inv` - call an external library - `scipy.sparse.linalg.inv`
- `xp.linalg.matrix_rank` - call an external library
- `xp.linalg.pinv` - call an external library
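The `matrix_norm` entry above is really a dispatch on `ord`. A rough pure-Python sketch of that split (the `matrix_norm_ref` name is hypothetical; the real code would forward to `jl.norm` or an external SVD routine rather than compute in Python, and the `0`/`-1`/`-inf` cases are omitted for brevity):

```python
import math

def matrix_norm_ref(matrix, ord="fro"):
    """Dispatch sketch for the ord values discussed above.

    'nuc' and 2 need an SVD, so they would go to an external library;
    the remaining cases reduce to sums/maxima and would map to jl.norm.
    Computed directly here only to make the sketch runnable.
    """
    if ord in ("nuc", 2, -2):
        raise NotImplementedError("requires an SVD from an external library")
    if ord == "fro":  # Frobenius norm: sqrt of sum of squares
        return math.sqrt(sum(v * v for row in matrix for v in row))
    if ord == 1:  # max absolute column sum
        return max(sum(abs(row[j]) for row in matrix)
                   for j in range(len(matrix[0])))
    if ord == math.inf:  # max absolute row sum
        return max(sum(abs(v) for v in row) for row in matrix)
    raise ValueError(f"unsupported ord: {ord!r}")
```

E.g. `matrix_norm_ref([[3, 4], [0, 0]], ord="fro")` gives `5.0`.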
### Tensor methods and attributes

- `Tensor.to_device()` - `finch.moveto`

### miscellaneous