
'stream.async.slice' op has invalid Read access range [0 to 8 for 8] of resource %0 with size 0; length > resource size #18926

pdhirajkumarprasad commented 6 days ago

What happened?

For the given IR:

module {
  func.func @torch_jit( %arg1: !torch.vtensor<[4],si64>, %arg3: !torch.vtensor<[?,?,?,?],f32>) -> !torch.vtensor<[?,?,?,?],f32>  attributes {torch.onnx_meta.ir_version = 7 : si64, torch.onnx_meta.opset_version = 21 : si64, torch.onnx_meta.producer_name = "pytorch", torch.onnx_meta.producer_version = "1.12.1"} {
    %none = torch.constant.none
    %1 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<_onnx__Sub_3847> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
    %2 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<0> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %3 = torch.operator "onnx.Shape"(%arg1) : (!torch.vtensor<[4],si64>) -> !torch.vtensor<[1],si64> 
    %4 = torch.operator "onnx.Gather"(%3, %2) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %5 = torch.operator "onnx.Sub"(%1, %4) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %6 = torch.operator "onnx.Cast"(%arg1) {torch.onnx.to = 7 : si64} : (!torch.vtensor<[4],si64>) -> !torch.vtensor<[4],si64> 
    %7 = torch.operator "onnx.ConstantOfShape"(%5) {torch.onnx.value = dense<0> : tensor<1xsi64>} : (!torch.vtensor<[1],si64>) -> !torch.vtensor<[?],si64> 
    %8 = torch.operator "onnx.Concat"(%6, %7) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[4],si64>, !torch.vtensor<[?],si64>) -> !torch.vtensor<[?],si64> 
    %9 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__22> : tensor<2xsi64>} : () -> !torch.vtensor<[2],si64> 
    %10 = torch.operator "onnx.Reshape"(%8, %9) : (!torch.vtensor<[?],si64>, !torch.vtensor<[2],si64>) -> !torch.vtensor<[?,2],si64> 
    %11 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<0> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %12 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<-1> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %13 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__25> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %14 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<-1> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %15 = torch.operator "onnx.Slice"(%10, %12, %13, %11, %14) : (!torch.vtensor<[?,2],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[?,2],si64> 
    %16 = torch.operator "onnx.Transpose"(%15) {torch.onnx.perm = [1 : si64, 0 : si64]} : (!torch.vtensor<[?,2],si64>) -> !torch.vtensor<[2,?],si64> 
    %17 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<-1> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %18 = torch.operator "onnx.Reshape"(%16, %17) : (!torch.vtensor<[2,?],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[?],si64> 
    %19 = torch.operator "onnx.Cast"(%18) {torch.onnx.to = 7 : si64} : (!torch.vtensor<[?],si64>) -> !torch.vtensor<[?],si64> 
    %20 = torch.operator "onnx.Pad"(%arg3, %19, %none) {torch.onnx.mode = "constant"} : (!torch.vtensor<[?,?,?,?],f32>, !torch.vtensor<[?],si64>, !torch.none) -> !torch.vtensor<[?,?,?,?],f32> 
    return %20 : !torch.vtensor<[?,?,?,?],f32>
  }
}

{-#
  dialect_resources: {
    builtin: {
      _onnx__Sub_3847: "0x080000000800000000000000",
      __22: "0x08000000FFFFFFFFFFFFFFFF0200000000000000",
      __25: "0x080000000100000000000080"
    }
  }
#-}
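
For reference, the dense_resource blobs above use MLIR's hex blob encoding: as I understand it, a little-endian 32-bit alignment word (here 0x08000000, i.e. alignment 8) followed by the raw little-endian data. A minimal Python sketch to decode them (the decode_i64_resource helper is mine, not part of any tool):

import struct

def decode_i64_resource(hex_blob: str):
    # MLIR prints a dialect resource as hex: a 4-byte little-endian
    # alignment word, then the raw little-endian payload bytes.
    raw = bytes.fromhex(hex_blob.removeprefix("0x"))
    payload = raw[4:]  # skip the alignment word
    return list(struct.unpack(f"<{len(payload) // 8}q", payload))

print(decode_i64_resource("0x080000000800000000000000"))                  # _onnx__Sub_3847 -> [8]
print(decode_i64_resource("0x08000000FFFFFFFFFFFFFFFF0200000000000000"))  # __22 -> [-1, 2]
print(decode_i64_resource("0x080000000100000000000080"))                  # __25 -> [-9223372036854775807]

So %1 is the scalar 8, %9 is the reshape target [-1, 2], and %13 is an INT64_MIN+1 end sentinel; the onnx.Slice at %15 apparently walks the [?,2] pad table backwards along axis 0 with step -1.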

I get the following error:

model.torch_onnx.mlir:23:11: error: 'stream.async.slice' op has invalid Read access range [0 to 8 for 8] of resource %0 with size 0; length > resource size
    %20 = torch.operator "onnx.Pad"(%arg3, %19, %none) {torch.onnx.mode = "constant"} : (!torch.vtensor<[?,?,?,?],f32>, !torch.vtensor<[?],si64>, !torch.none) -> !torch.vtensor<[?,?,?,?],f32> 
          ^
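
Reading the diagnostic: the stream.async.slice wants to read bytes [0, 8) (a single i64) of resource %0, but the compiler has concluded that the resource has size 0; in other words, the pad-amounts tensor feeding onnx.Pad appears to have been folded to an empty value somewhere along the way.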

Steps to reproduce your issue

command:

 iree-compile --iree-hal-target-backends=llvm-cpu -o abc.vmfb model.torch_onnx.mlir

A log captured with '--mlir-print-ir-after-all --mlir-print-ir-before-all --mlir-disable-threading --mlir-elide-elementsattrs-if-larger=4' is attached:

dump.log

What component(s) does this issue relate to?

Compiler

Version information

No response

Additional context

No response

vivekkhandelwal1 commented 6 days ago

While debugging this issue, I found that removing the patch https://github.com/llvm/torch-mlir/commit/a83e106f92453238bc4a949db718cc29152ddf50 from IREE makes the error go away. The issue appears to be caused by the changes to the ScalarizeShapes pass.

@zjgarvey Can you please take a look at your patch and see what is causing this?

CC: @pdhirajkumarprasad

zjgarvey commented 6 days ago

This should be resolved by some folders added in a torch-mlir patch that still needs to be bumped into IREE. I'll test it out and update.

zjgarvey commented 6 days ago

Okay, this might not fold, so I'll see if I can scalarize the shapes here. The IR below is a better reproducer, since we can't scalarize a func.func operand (the pass needs a producing op to trace back through, and a function argument has none):

module {
  func.func @torch_jit(%arg0: !torch.vtensor<[?,144,?,?],f32>) -> !torch.vtensor<[?,?,?,?],f32> attributes {torch.onnx_meta.ir_version = 7 : si64, torch.onnx_meta.opset_version = 21 : si64, torch.onnx_meta.producer_name = "pytorch", torch.onnx_meta.producer_version = "1.12.1"} {
    %0 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<_onnx__Sub_1132> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
    %1 = torch.operator "onnx.Identity"(%0) : (!torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %2 = torch.operator "onnx.Shape"(%arg0) : (!torch.vtensor<[?,144,?,?],f32>) -> !torch.vtensor<[4],si64> 
    %3 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__33> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
    %4 = torch.operator "onnx.Gather"(%2, %3) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[4],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %5 = torch.operator "onnx.Shape"(%arg0) : (!torch.vtensor<[?,144,?,?],f32>) -> !torch.vtensor<[4],si64> 
    %6 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__34> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
    %7 = torch.operator "onnx.Gather"(%5, %6) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[4],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %8 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__35> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
    %9 = torch.operator "onnx.Sub"(%8, %4) : (!torch.vtensor<[],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %10 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__36> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
    %11 = torch.operator "onnx.Sub"(%10, %7) : (!torch.vtensor<[],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %12 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__37> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
    %13 = torch.operator "onnx.Div"(%11, %12) : (!torch.vtensor<[],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %14 = torch.operator "onnx.Cast"(%13) {torch.onnx.to = 7 : si64} : (!torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %15 = torch.operator "onnx.Cast"(%14) {torch.onnx.to = 7 : si64} : (!torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %16 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__38> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
    %17 = torch.operator "onnx.Div"(%11, %16) : (!torch.vtensor<[],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %18 = torch.operator "onnx.Cast"(%17) {torch.onnx.to = 7 : si64} : (!torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %19 = torch.operator "onnx.Cast"(%18) {torch.onnx.to = 7 : si64} : (!torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %20 = torch.operator "onnx.Sub"(%11, %19) : (!torch.vtensor<[],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %21 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__39> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
    %22 = torch.operator "onnx.Div"(%9, %21) : (!torch.vtensor<[],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %23 = torch.operator "onnx.Cast"(%22) {torch.onnx.to = 7 : si64} : (!torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %24 = torch.operator "onnx.Cast"(%23) {torch.onnx.to = 7 : si64} : (!torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %25 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__40> : tensor<si64>} : () -> !torch.vtensor<[],si64> 
    %26 = torch.operator "onnx.Div"(%9, %25) : (!torch.vtensor<[],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %27 = torch.operator "onnx.Cast"(%26) {torch.onnx.to = 7 : si64} : (!torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %28 = torch.operator "onnx.Cast"(%27) {torch.onnx.to = 7 : si64} : (!torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %29 = torch.operator "onnx.Sub"(%9, %28) : (!torch.vtensor<[],si64>, !torch.vtensor<[],si64>) -> !torch.vtensor<[],si64> 
    %30 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__41> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %31 = torch.operator "onnx.Unsqueeze"(%15, %30) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %32 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__42> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %33 = torch.operator "onnx.Unsqueeze"(%20, %32) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %34 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__43> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %35 = torch.operator "onnx.Unsqueeze"(%24, %34) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %36 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__44> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %37 = torch.operator "onnx.Unsqueeze"(%29, %36) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %38 = torch.operator "onnx.Concat"(%31, %33, %35, %37) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[4],si64> 
    %39 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__45> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %40 = torch.operator "onnx.Unsqueeze"(%15, %39) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %41 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__46> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %42 = torch.operator "onnx.Unsqueeze"(%20, %41) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %43 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__47> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %44 = torch.operator "onnx.Unsqueeze"(%24, %43) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %45 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__48> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %46 = torch.operator "onnx.Unsqueeze"(%29, %45) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %47 = torch.operator "onnx.Concat"(%40, %42, %44, %46) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[4],si64> 
    %48 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__49> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %49 = torch.operator "onnx.Shape"(%38) : (!torch.vtensor<[4],si64>) -> !torch.vtensor<[1],si64> 
    %50 = torch.operator "onnx.Gather"(%49, %48) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %51 = torch.operator "onnx.Sub"(%1, %50) : (!torch.vtensor<[],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[1],si64> 
    %52 = torch.operator "onnx.Cast"(%47) {torch.onnx.to = 7 : si64} : (!torch.vtensor<[4],si64>) -> !torch.vtensor<[4],si64> 
    %53 = torch.operator "onnx.ConstantOfShape"(%51) {torch.onnx.value = dense_resource<__50> : tensor<1xsi64>} : (!torch.vtensor<[1],si64>) -> !torch.vtensor<[?],si64> 
    %54 = torch.operator "onnx.Concat"(%52, %53) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[4],si64>, !torch.vtensor<[?],si64>) -> !torch.vtensor<[?],si64> 
    %55 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__51> : tensor<2xsi64>} : () -> !torch.vtensor<[2],si64> 
    %56 = torch.operator "onnx.Reshape"(%54, %55) : (!torch.vtensor<[?],si64>, !torch.vtensor<[2],si64>) -> !torch.vtensor<[?,2],si64> 
    %57 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__52> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %58 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__53> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %59 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__54> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %60 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__55> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %61 = torch.operator "onnx.Slice"(%56, %58, %59, %57, %60) : (!torch.vtensor<[?,2],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[?,2],si64> 
    %62 = torch.operator "onnx.Transpose"(%61) {torch.onnx.perm = [1 : si64, 0 : si64]} : (!torch.vtensor<[?,2],si64>) -> !torch.vtensor<[2,?],si64> 
    %63 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__56> : tensor<1xsi64>} : () -> !torch.vtensor<[1],si64> 
    %64 = torch.operator "onnx.Reshape"(%62, %63) : (!torch.vtensor<[2,?],si64>, !torch.vtensor<[1],si64>) -> !torch.vtensor<[?],si64> 
    %65 = torch.operator "onnx.Cast"(%64) {torch.onnx.to = 7 : si64} : (!torch.vtensor<[?],si64>) -> !torch.vtensor<[?],si64> 
    %66 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__57> : tensor<f32>} : () -> !torch.vtensor<[],f32> 
    %67 = torch.operator "onnx.Pad"(%arg0, %65, %66) {torch.onnx.mode = "constant"} : (!torch.vtensor<[?,144,?,?],f32>, !torch.vtensor<[?],si64>, !torch.vtensor<[],f32>) -> !torch.vtensor<[?,?,?,?],f32> 
    return %67 : !torch.vtensor<[?,?,?,?],f32>
  }
}

{-#
  dialect_resources: {
    builtin: {
      _onnx__Sub_1132: "0x080000000800000000000000",
      __33: "0x080000000200000000000000",
      __34: "0x080000000300000000000000",
      __35: "0x080000003B00000000000000",
      __36: "0x080000003B00000000000000",
      __37: "0x080000000200000000000000",
      __38: "0x080000000200000000000000",
      __39: "0x080000000200000000000000",
      __40: "0x080000000200000000000000",
      __41: "0x080000000000000000000000",
      __42: "0x080000000000000000000000",
      __43: "0x080000000000000000000000",
      __44: "0x080000000000000000000000",
      __45: "0x080000000000000000000000",
      __46: "0x080000000000000000000000",
      __47: "0x080000000000000000000000",
      __48: "0x080000000000000000000000",
      __49: "0x080000000000000000000000",
      __50: "0x080000000000000000000000",
      __51: "0x08000000FFFFFFFFFFFFFFFF0200000000000000",
      __52: "0x080000000000000000000000",
      __53: "0x08000000FFFFFFFFFFFFFFFF",
      __54: "0x080000000100000000000080",
      __55: "0x08000000FFFFFFFFFFFFFFFF",
      __56: "0x08000000FFFFFFFFFFFFFFFF",
      __57: "0x0800000000000000"
    }
  }
#-}
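
The resource blobs in this reproducer decode the same way as sketched above: for example, __33 and __34 are the gather indices 2 and 3, __35 and __36 are 59, __53/__55/__56 are -1, and __57 is the f32 pad value 0.0.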

zjgarvey commented 5 days ago

Working on this now in branch https://github.com/zjgarvey/torch-mlir/tree/scalarize_dynamic_pad. When this is finished, I'll put up a PR.

zjgarvey commented 4 days ago

I believe this will be resolved by https://github.com/llvm/torch-mlir/pull/3838.