**[Open]** paul-tqh-nguyen opened 3 years ago
This should no longer be a problem because we always declare top-level C calls using `!llvm.ptr<i8>`. That change (PR#166) was made specifically to handle the case of multiple dtypes (f64 vs i64) and different ranks (`?xf64` vs `?x?xf64`). I haven't checked whether it works with fixed dimensions, but there's no reason it shouldn't.
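For context, a minimal sketch of what such an opaque declaration might look like (the function name is taken from the discussion below; the exact signatures emitted post-PR#166 may differ):

```mlir
// One opaque declaration serves every element type, rank, and shape, so
// the lowering never has to emit conflicting shape-specific declarations
// of the same symbol.
builtin.func private @dup_matrix(!llvm.ptr<i8>) -> !llvm.ptr<i8>
```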
This (let's call it Example A) lowers through `graphblas-opt --graphblas-lower` with no problem. This (let's call it Example B) also lowers through `graphblas-opt --graphblas-lower` with no problem. This (let's call it Example C) does not.
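The original listings for the three examples are not reproduced here, but judging from the declarations below, Example C presumably duplicated two matrices with different fixed shapes in the same module, along these lines (a hypothetical reconstruction; the `graphblas.dup` op and the `#CSX64` encoding details are assumed):

```mlir
// Assumed sparse encoding; the actual #CSX64 definition may differ.
#CSX64 = #sparse_tensor.encoding<{
  dimLevelType = [ "dense", "compressed" ],
  dimOrdering = affine_map<(i, j) -> (i, j)>,
  pointerBitWidth = 64,
  indexBitWidth = 64
}>

builtin.func @main(%a: tensor<100x100xf64, #CSX64>,
                   %b: tensor<200x200xf64, #CSX64>) {
  // Each dup below causes the lowering to declare @dup_matrix with a
  // different fixed-shape signature, and the two declarations collide.
  %0 = graphblas.dup %a : tensor<100x100xf64, #CSX64>
  %1 = graphblas.dup %b : tensor<200x200xf64, #CSX64>
  return
}
```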
This problem stems from the lowering emitting one private declaration of `@dup_matrix` per concrete shape, so a module that duplicates both a 100x100 and a 200x200 matrix ends up with two declarations of the same symbol, which is invalid MLIR:

```mlir
builtin.func private @dup_matrix(tensor<100x100xf64, #CSX64>) -> tensor<100x100xf64, #CSX64>
builtin.func private @dup_matrix(tensor<200x200xf64, #CSX64>) -> tensor<200x200xf64, #CSX64>
```
A valid (partial) solution would be to:

- Declare `builtin.func private @dup_matrix(tensor<?x?xf64, #CSX64>) -> tensor<?x?xf64, #CSX64>` once, with dynamic dimensions.
- Wherever a `call*` function from `GraphBLASUtils.cpp` is used in the lowering code, have that function insert `tensor.cast` ops before and after the MLIR assembly that calls `dup_matrix` (or whatever function name is relevant), where `dup_matrix` takes a `tensor<?x?xf64, #CSX64>` (see the sketch below).
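Concretely, the proposed wrapping might look like this (a sketch; the SSA value names are illustrative):

```mlir
// One dynamic-shape declaration shared by every call site:
builtin.func private @dup_matrix(tensor<?x?xf64, #CSX64>) -> tensor<?x?xf64, #CSX64>

// At a call site with a fixed 100x100 operand, the call* helper would
// cast to the dynamic-shape type, call, and cast back:
%arg_dyn = tensor.cast %m : tensor<100x100xf64, #CSX64> to tensor<?x?xf64, #CSX64>
%dup_dyn = call @dup_matrix(%arg_dyn) : (tensor<?x?xf64, #CSX64>) -> tensor<?x?xf64, #CSX64>
%dup = tensor.cast %dup_dyn : tensor<?x?xf64, #CSX64> to tensor<100x100xf64, #CSX64>
```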
This is only a partial solution, since the same problem could occur if we tried to use `callDupTensor` to duplicate an f64 tensor and an i64 tensor in the same MLIR module, or if we tried to use `callDupTensor` to duplicate a CSR tensor and a CSC tensor in the same MLIR module.
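For instance, even with dynamic dimensions, two different element types would reintroduce the clash (the `@dup_tensor` name here is hypothetical):

```mlir
// Still two conflicting declarations of one symbol, now differing only
// in element type rather than shape:
builtin.func private @dup_tensor(tensor<?x?xf64, #CSX64>) -> tensor<?x?xf64, #CSX64>
builtin.func private @dup_tensor(tensor<?x?xi64, #CSX64>) -> tensor<?x?xi64, #CSX64>
```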