metagraph-dev / mlir-graphblas

MLIR tools and dialect for GraphBLAS
https://mlir-graphblas.readthedocs.io/en/latest/
Apache License 2.0

call* functions in GraphBLASUtils.cpp cause errors when called on fixed-size tensors of different size #129

Open · paul-tqh-nguyen opened this issue 3 years ago

paul-tqh-nguyen commented 3 years ago

This (let's call it Example A) lowers through graphblas-opt --graphblas-lower with no problem:

#CSR64 = #sparse_tensor.encoding<{
  dimLevelType = [ "dense", "compressed" ],
  dimOrdering = affine_map<(i,j) -> (i,j)>,
  pointerBitWidth = 64,
  indexBitWidth = 64
}>

module {

    func @matrix_select_triu_100(%sparse_tensor: tensor<100x100xf64, #CSR64>) -> tensor<100x100xf64, #CSR64> {
        %answer = graphblas.matrix_select %sparse_tensor { selectors = ["triu"] } : tensor<100x100xf64, #CSR64> to tensor<100x100xf64, #CSR64>
        return %answer : tensor<100x100xf64, #CSR64>
    }

}

This (let's call it Example B) also lowers through graphblas-opt --graphblas-lower with no problem:

#CSR64 = #sparse_tensor.encoding<{
  dimLevelType = [ "dense", "compressed" ],
  dimOrdering = affine_map<(i,j) -> (i,j)>,
  pointerBitWidth = 64,
  indexBitWidth = 64
}>

module {

    func @matrix_select_triu_200(%sparse_tensor: tensor<200x200xf64, #CSR64>) -> tensor<200x200xf64, #CSR64> {
        %answer = graphblas.matrix_select %sparse_tensor { selectors = ["triu"] } : tensor<200x200xf64, #CSR64> to tensor<200x200xf64, #CSR64>
        return %answer : tensor<200x200xf64, #CSR64>
    }

}

This (let's call it Example C), which simply combines Examples A and B in one module, does not lower successfully:

#CSR64 = #sparse_tensor.encoding<{
  dimLevelType = [ "dense", "compressed" ],
  dimOrdering = affine_map<(i,j) -> (i,j)>,
  pointerBitWidth = 64,
  indexBitWidth = 64
}>

module {

    func @matrix_select_triu_100(%sparse_tensor: tensor<100x100xf64, #CSR64>) -> tensor<100x100xf64, #CSR64> {
        %answer = graphblas.matrix_select %sparse_tensor { selectors = ["triu"] } : tensor<100x100xf64, #CSR64> to tensor<100x100xf64, #CSR64>
        return %answer : tensor<100x100xf64, #CSR64>
    }

    func @matrix_select_triu_200(%sparse_tensor: tensor<200x200xf64, #CSR64>) -> tensor<200x200xf64, #CSR64> {
        %answer = graphblas.matrix_select %sparse_tensor { selectors = ["triu"] } : tensor<200x200xf64, #CSR64> to tensor<200x200xf64, #CSR64>
        return %answer : tensor<200x200xf64, #CSR64>
    }

}

This problem stems from the following: the call* utility functions in GraphBLASUtils.cpp declare an external helper function whose signature is derived from the concrete tensor type of the operand. When the same utility is invoked for fixed-size tensors of different sizes in the same module (here tensor<100x100xf64, #CSR64> and tensor<200x200xf64, #CSR64>), two declarations with the same symbol name but different function types are generated, and the module fails to verify.
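For illustration, here is a sketch of the kind of symbol clash Example C triggers. The helper name dup_tensor and the exact signatures are assumptions for illustration, not the actual declarations emitted by GraphBLASUtils.cpp (the #CSR64 alias is the one from the examples above):

// Declaration generated while lowering @matrix_select_triu_100:
func private @dup_tensor(tensor<100x100xf64, #CSR64>) -> tensor<100x100xf64, #CSR64>

// Declaration generated while lowering @matrix_select_triu_200:
// same symbol, different function type, so the module fails to verify.
func private @dup_tensor(tensor<200x200xf64, #CSR64>) -> tensor<200x200xf64, #CSR64>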

A valid (partial) solution would be to cast fixed-size tensor operands to tensors with dynamic dimensions (e.g. tensor<?x?xf64, #CSR64>) before calling the helper, so that all fixed sizes of a given element type and layout share a single declaration; a sketch follows below.

This is only a partial solution: the same problem would still occur if we tried to use callDupTensor to duplicate an f64 tensor and an i64 tensor in the same MLIR module, or to duplicate a CSR tensor and a CSC tensor in the same MLIR module.
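As a concrete (hypothetical) sketch of the cast-to-dynamic-dimensions idea, assuming a helper named dup_tensor and reusing the #CSR64 alias from the examples above:

func private @dup_tensor(tensor<?x?xf64, #CSR64>) -> tensor<?x?xf64, #CSR64>

func @dup_100(%m: tensor<100x100xf64, #CSR64>) -> tensor<100x100xf64, #CSR64> {
    // Erase the fixed sizes so that 100x100 and 200x200 callers can share
    // the single dynamically-sized declaration above.
    %dyn = tensor.cast %m : tensor<100x100xf64, #CSR64> to tensor<?x?xf64, #CSR64>
    %dup = call @dup_tensor(%dyn) : (tensor<?x?xf64, #CSR64>) -> tensor<?x?xf64, #CSR64>
    // Restore the static type expected by the caller.
    %out = tensor.cast %dup : tensor<?x?xf64, #CSR64> to tensor<100x100xf64, #CSR64>
    return %out : tensor<100x100xf64, #CSR64>
}

This erases fixed sizes but not element types or dimension orderings, which is exactly why it is only a partial fix.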

jim22k commented 2 years ago

This should no longer be a problem because we always declare top-level C calls using !llvm.ptr<i8>. That change (PR#166) was made specifically to handle the case of multiple dtypes (f64 vs i64) and different ranks (?xf64 vs ?x?xf64).

I haven't checked if it works with fixed dimensions, but there's no reason it shouldn't.
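For concreteness, a sketch of that uniform-pointer scheme; the helper name dup_tensor and the use of builtin.unrealized_conversion_cast are placeholders for whatever the lowering actually emits:

// One declaration covers every dtype, rank, and fixed size.
func private @dup_tensor(!llvm.ptr<i8>) -> !llvm.ptr<i8>

func @dup_100(%m: tensor<100x100xf64, #CSR64>) -> tensor<100x100xf64, #CSR64> {
    // The concrete sparse tensor is passed as an opaque pointer and cast
    // back to its static type after the call, so no type-specific external
    // declaration is ever needed.
    %ptr = builtin.unrealized_conversion_cast %m : tensor<100x100xf64, #CSR64> to !llvm.ptr<i8>
    %dup = call @dup_tensor(%ptr) : (!llvm.ptr<i8>) -> !llvm.ptr<i8>
    %out = builtin.unrealized_conversion_cast %dup : !llvm.ptr<i8> to tensor<100x100xf64, #CSR64>
    return %out : tensor<100x100xf64, #CSR64>
}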