Open mtsokol opened 2 months ago
Hi @nullplay,

I wanted to start a discussion on the Finch-MLIR <-> MLIR tensors API. In https://github.com/pydata/sparse/tree/main/sparse/mlir_backend we have an initial Tensor class which provides constructors for MLIR sparse/dense tensors. Here are examples of MLIR tensors that can be created and will be passed to the API exposed by Finch-mlir:

One of the first questions is: what would be the API for calling some basic operations, say unary and binary elementwise operations, matmul (tensordot?), and reductions? Like:

I would say all of this needs to be in one function, because optimising across function boundaries is difficult.

@nullplay, so maybe let's first establish the MLIR code that would perform addition, subtraction, reductions, etc. using looplets, given that the input is:

tensor<dim0 x dim1 x ... x data_type[, sparse_format]>

Where access to the underlying memrefs is:
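To make the question about the operations API concrete, here is a rough sketch of what the Python-side surface could look like. Everything here is hypothetical: the `Tensor` wrapper and the function names (`add`, `negate`, `reduce_sum`, `tensordot`) are placeholders, and plain NumPy stands in for the eventual looplet/MLIR-backed implementation, just to pin down the intended semantics:

```python
import numpy as np


class Tensor:
    """Hypothetical stand-in for sparse.mlir_backend's Tensor.

    Wraps a NumPy array here; the real class would instead hold the
    MLIR-level memrefs describing a dense or sparse (CSR/COO/...) layout.
    """

    def __init__(self, data):
        self.data = np.asarray(data, dtype=np.float64)


# Candidate API surface (names are placeholders, not a final API):

def add(a: Tensor, b: Tensor) -> Tensor:
    # Binary elementwise op; for sparse inputs this would lower to a
    # looplet-based co-iteration over the stored values.
    return Tensor(a.data + b.data)


def negate(a: Tensor) -> Tensor:
    # Unary elementwise op.
    return Tensor(-a.data)


def reduce_sum(a: Tensor, axis=None) -> Tensor:
    # Reduction over one axis, or over all axes when axis is None.
    return Tensor(a.data.sum(axis=axis))


def tensordot(a: Tensor, b: Tensor, axes=2) -> Tensor:
    # Contraction; matmul of 2-D inputs is tensordot with axes=1.
    return Tensor(np.tensordot(a.data, b.data, axes=axes))


x = Tensor([[1.0, 0.0], [0.0, 2.0]])
y = Tensor([[0.0, 3.0], [4.0, 0.0]])

print(add(x, y).data)                # elementwise sum
print(reduce_sum(x).data)            # full reduction -> 3.0
print(tensordot(x, y, axes=1).data)  # 2-D contraction, same as x @ y
```

Whether these are free functions taking tensors, or methods on the Tensor class, matters less than agreeing on the set of primitives the Finch-mlir side has to expose.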