I've seen this i686 failure as well. We currently don't ship the MLIR tools in Fedora because of these failures (they don't occur when the tools aren't built).
I disabled the test suite on i386 to avoid this.
I took a brief look at this, and in the generated MLIR (presumably produced by the convert-memref-to-llvm pass) I already see things like this:
llvm.func @malloc(i64) -> !llvm.ptr<i8>
llvm.func @free(!llvm.ptr<i8>)
llvm.func @aligned_alloc(i64, i64) -> !llvm.ptr<i8>
So it seems like at least this part of MLIR has a hardcoded assumption that it runs on a 64-bit architecture.
Okay, apparently MLIR has a concept of an "index type" that should handle this. The memref dialect does respect the index type, e.g. here: https://github.com/llvm/llvm-project/blob/de8e0a439777014d7d85007c379579e58bba2efe/mlir/lib/Conversion/MemRefToLLVM/AllocLikeConversion.cpp#L126
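For context, this is roughly the pattern the linked memref lowering follows: the malloc declaration is derived from the type converter's index type rather than a hardcoded i64. A minimal sketch, assuming the `getIndexType()` and `lookupOrCreateMallocFn` helpers from that revision of the tree (an approximation, not the verbatim code):

```cpp
// Sketch (not the exact upstream code), inside a ConvertToLLVMPattern:
// the callee type for malloc follows the converter's index type, so with
// a 32-bit index type this would declare `llvm.func @malloc(i32)`.
Type indexType = getTypeConverter()->getIndexType();
LLVM::LLVMFuncOp mallocFunc =
    LLVM::lookupOrCreateMallocFn(moduleOp, indexType);
```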
The async dialect hardcodes i64 for all sizes: https://github.com/llvm/llvm-project/blob/de8e0a439777014d7d85007c379579e58bba2efe/mlir/lib/Conversion/AsyncToLLVM/AsyncToLLVM.cpp#L379-L381
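Schematically, the contrast between the two lowerings is something like this (a simplified illustration of the two linked code paths, not the literal code from either file):

```cpp
// AsyncToLLVM (simplified): size values are built with a fixed 64-bit
// type, independent of the target's pointer/index width.
Type asyncSizeType = rewriter.getI64Type();

// MemRefToLLVM (simplified): size values follow the type converter's
// index type, which would be i32 when lowering for a 32-bit target.
Type memrefSizeType = getTypeConverter()->getIndexType();
```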
Something I don't get yet is how the index type is determined. It looks like even the malloc created by the memref lowering uses i64 on i686, where I'd have expected i32.
Looks like the index type is part of LowerToLLVMOptions and is determined either from the data layout or from an index bitwidth override.
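Concretely, a sketch based on my reading of LoweringOptions.h at that revision:

```cpp
#include "mlir/Conversion/LLVMCommon/LoweringOptions.h"
#include "mlir/Conversion/LLVMCommon/TypeConverter.h"
#include "mlir/Interfaces/DataLayoutInterfaces.h"

// Default: the index bitwidth is derived from the module's data layout.
mlir::LowerToLLVMOptions options(ctx, mlir::DataLayout(moduleOp));

// Explicit override, e.g. for a 32-bit target:
options.overrideIndexBitwidth(32);

// The type converter then lowers `index` to an integer of that width.
mlir::LLVMTypeConverter typeConverter(ctx, options);
```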
But how is a generic mlir-opt invocation that is intended for use with mlir-cpu-runner supposed to know the right option for the target? I don't see any obvious way it could pick up the host index width -- and even passing it in manually seems like a big hassle, as one would have to pass an indexBitwidth option to a bunch of passes.
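For illustration, assuming the per-pass `index-bitwidth` options that exist at that revision (the pass and option names below are my reading of the pass definitions, so treat them as an assumption), the override has to be repeated for every lowering pass in the pipeline:

```cpp
#include "mlir/Pass/PassManager.h"
#include "mlir/Pass/PassRegistry.h"
#include "llvm/Support/ErrorHandling.h"

// Each *-to-llvm pass carries its own index-bitwidth option; there is no
// single global switch, so a 32-bit override must be spelled per pass.
mlir::PassManager pm(ctx);
if (mlir::failed(mlir::parsePassPipeline(
        "convert-memref-to-llvm{index-bitwidth=32},"
        "convert-func-to-llvm{index-bitwidth=32}",
        pm)))
  llvm::report_fatal_error("failed to parse pass pipeline");
```

The mlir-opt equivalent would be passing the same `{index-bitwidth=32}` option string to each of those passes on the command line, which is exactly the hassle described above.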
log: https://llvm-jenkins.debian.net/job/llvm-toolchain-binaries/architecture=i386,distribution=unstable,label=i386/680/console
It freezes the execution of the test suite.
(I am not 100% sure that it is this test that causes the test-suite failure.)