tensorflow / mlir

"Multi-Level Intermediate Representation" Compiler Infrastructure

[spirv] Running canonicalize on spirv-module leads to an assertion failure #269

Closed. MaheshRavishankar closed this issue 4 years ago

MaheshRavishankar commented 4 years ago

Running canonicalization on this module leads to an assertion failure:

spv.module "Logical" "GLSL450" {
  spv.globalVariable @__builtin_var_NumWorkgroups__ built_in("NumWorkgroups") : !spv.ptr<vector<3xi32>, Input>
  spv.globalVariable @__builtin_var_WorkgroupId__ built_in("WorkgroupId") : !spv.ptr<vector<3xi32>, Input>
  func @fmul_kernel() {
    %3 = spv.constant 12 : i32
    %4 = spv.constant 32 : i32
    %5 = spv.constant 4 : i32
    %6 = spv._address_of @__builtin_var_WorkgroupId__ : !spv.ptr<vector<3xi32>, Input>
    %7 = spv.Load "Input" %6 : vector<3xi32>
    %8 = spv.CompositeExtract %7[0 : i32] : vector<3xi32>
    %9 = spv._address_of @__builtin_var_WorkgroupId__ : !spv.ptr<vector<3xi32>, Input>
    %10 = spv.Load "Input" %9 : vector<3xi32>
    %11 = spv.CompositeExtract %10[1 : i32] : vector<3xi32>
    %18 = spv._address_of @__builtin_var_NumWorkgroups__ : !spv.ptr<vector<3xi32>, Input>
    %19 = spv.Load "Input" %18 : vector<3xi32>
    %20 = spv.CompositeExtract %19[0 : i32] : vector<3xi32>
    %21 = spv._address_of @__builtin_var_NumWorkgroups__ : !spv.ptr<vector<3xi32>, Input>
    %22 = spv.Load "Input" %21 : vector<3xi32>
    %23 = spv.CompositeExtract %22[1 : i32] : vector<3xi32>
    %30 = spv.IMul %11, %4 : i32
    %31 = spv.IMul %23, %4 : i32
    spv.loop {
      spv.Branch ^bb1(%30 : i32)
    ^bb1(%32: i32):     // 2 preds: ^bb0, ^bb2
      %33 = spv.SLessThan %32, %3 : i32
      spv.BranchConditional %33, ^bb2, ^bb3
    ^bb2:       // pred: ^bb1
      %34 = spv.IMul %8, %5 : i32
      %35 = spv.IMul %20, %5 : i32
      spv.loop {
        spv.Branch ^bb1(%34 : i32)
      ^bb1(%37: i32):   // 2 preds: ^bb0, ^bb2
        %38 = spv.SLessThan %37, %5 : i32
        spv.BranchConditional %38, ^bb2, ^bb3
      ^bb2:     // pred: ^bb1
        %48 = spv.IAdd %37, %35 : i32
        spv.Branch ^bb1(%48 : i32)
      ^bb3:     // pred: ^bb1
        spv._merge
      }
      %36 = spv.IAdd %32, %31 : i32
      spv.Branch ^bb1(%36 : i32)
    ^bb3:       // pred: ^bb1
      spv._merge
    }
    spv.Return
  }
  spv.EntryPoint "GLCompute" @fmul_kernel, @__builtin_var_WorkgroupId__, @__builtin_var_NumWorkgroups__
  spv.ExecutionMode @fmul_kernel "LocalSize", 32, 1, 1
} attributes {capabilities = ["Shader"], extensions = ["SPV_KHR_storage_buffer_storage_class"]}

I get the following assertion message:

include/mlir/IR/Attributes.h:1331 in bool mlir::Attribute::isa() const [U = mlir::ElementsAttr]: impl && "isa<> used on a null attribute."
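
[Added context, not part of the original report: the assertion fires inside Attribute::isa, which refuses to classify a null attribute. Paraphrased sketch of the check in include/mlir/IR/Attributes.h, not a verbatim copy:]

// Paraphrased from include/mlir/IR/Attributes.h: calling isa<>/dyn_cast<>
// on a default-constructed (null) Attribute trips this assert.
template <typename U> bool Attribute::isa() const {
  assert(impl && "isa<> used on a null attribute.");
  return U::classof(*this);
}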
denis0x0D commented 4 years ago

@MaheshRavishankar thanks for the report, this looks like my fault, I'll take it in a couple of hours.

denis0x0D commented 4 years ago

@MaheshRavishankar can you please provide the flags with which the error appears? I just cannot reproduce it on my machine with MLIR at master using these flags:

mlir-opt module.mlir -pass-pipeline='module(canonicalize)'
mlir-opt module.mlir -pass-pipeline='func(canonicalize)'

By the way, it seems like only a few ops from the SPIR-V dialect have a canonicalizer: spv.AccessChain, spv.Bitcast, spv.selection, spv.LogicalNot, and it does not look like this example contains any of those ops. Thanks.

MaheshRavishankar commented 4 years ago

Thanks @denis0x0D. I just ran using the -canonicalize flag. I haven't triaged it at all, so it might not be related to spv canonicalizations specifically.

denis0x0D commented 4 years ago

@MaheshRavishankar oh, now I see the same; it seems like a check for a null attribute is needed in the fold for CompositeExtract.

#10 0x0000561dc79a5748 bool mlir::Attribute::isa<mlir::ElementsAttr>() const /home/llvm-project/llvm/projects/mlir/include/mlir/IR/Attributes.h:1341:20
#11 0x0000561dc7ab2792 mlir::ElementsAttr mlir::Attribute::dyn_cast<mlir::ElementsAttr>() const /home/llvm-project/llvm/projects/mlir/include/mlir/IR/Attributes.h:1344:40
#12 0x0000561dc7a1bd2c extractCompositeElement(mlir::Attribute, llvm::ArrayRef<unsigned int>) /home/llvm-project/llvm/projects/mlir/lib/Dialect/SPIRV/SPIRVOps.cpp:342:55
#13 0x0000561dc7a21621 mlir::spirv::CompositeExtractOp::fold(llvm::ArrayRef<mlir::Attribute>) /home/llvm-project/llvm/projects/mlir/lib/Dialect/SPIRV/SPIRVOps.cpp:1052:58

https://github.com/tensorflow/mlir/blob/a1e4422bf34c74b682b5f63465693974d7aa8028/lib/Dialect/SPIRV/SPIRVOps.cpp#L1045 Thanks!
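
[Added context for readers following along: the canonicalize pass calls every op's fold hook, and each entry of the operands array is the constant value of the corresponding SSA operand, or a null Attribute when that operand is not a constant. The stack trace shows extractCompositeElement calling dyn_cast<ElementsAttr> on such an attribute without a null check. A minimal sketch of the kind of guard that avoids the assertion; this is not the actual patch, and the body past the null check is elided:]

#include "mlir/Dialect/SPIRV/SPIRVOps.h"  // header path as of this thread
#include "mlir/IR/Attributes.h"

using namespace mlir;

// Sketch only: guard the fold hook before classifying the operand attribute.
OpFoldResult spirv::CompositeExtractOp::fold(ArrayRef<Attribute> operands) {
  // A null entry means the composite operand is not a compile-time constant.
  if (operands.empty() || !operands[0])
    return {};
  auto elements = operands[0].dyn_cast<ElementsAttr>();
  if (!elements)
    return {};
  // ... existing logic: walk the op's index list into `elements` and
  // return the extracted element ...
  return {};
}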

denis0x0D commented 4 years ago

By the way, I'm not super familiar with how folding works in MLIR, so it would be good if I could take this bug and fix it tomorrow, if it can wait. Thanks!
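
[Added context on the folding mechanism, not part of the original thread: the canonicalize pass invokes every operation's fold hook in addition to any registered canonicalization patterns, which is why spv.CompositeExtract is exercised even though it is not in the list of ops with canonicalizers above. An illustrative fold hook for a hypothetical MyConstAddOp (assumed to be defined elsewhere), showing the contract that fold implementations must tolerate null operand attributes:]

// Illustrative only: one Attribute per operand is passed in; an entry is
// non-null only when that operand is known to be a constant.
OpFoldResult MyConstAddOp::fold(ArrayRef<Attribute> operands) {
  auto lhs = operands[0].dyn_cast_or_null<IntegerAttr>();
  auto rhs = operands[1].dyn_cast_or_null<IntegerAttr>();
  if (!lhs || !rhs)
    return {};  // fold only when both operands are constant integers
  return IntegerAttr::get(lhs.getType(), lhs.getInt() + rhs.getInt());
}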

antiagainst commented 4 years ago

Thanks for the investigation, @denis0x0D! Yes this can wait so if you'd like to take a look and fix it, that'd be awesome! :)

antiagainst commented 4 years ago

Fixed via #281. Thanks Denis!