iree-org/iree

A retargetable MLIR-based machine learning compiler and runtime toolkit.
http://iree.dev/

iree-util-hoist-into-globals produces globals without a value when the pass is run more than once #16336

Open · ramiro050 opened this issue 9 months ago

ramiro050 commented 9 months ago

What happened?

If the pass iree-util-hoist-into-globals is run more than once, it produces hoisted globals without a value. When consteval is then run, it fails with: internal error: jit global source initialization order. global hoisted_0 has no value.

Steps to reproduce your issue

Running iree-opt --iree-util-hoist-into-globals on

// -----// IR Dump Before HoistIntoGlobals (iree-util-hoist-into-globals) //----- //
#map = affine_map<(d0, d1) -> (d1, d0)>
#map1 = affine_map<(d0, d1) -> (d0, d1)>
module @jit_export_func attributes {iree.fixedpoint.iteration = 1 : index, iree.fixedpoint.modified} {
  util.global private @hoisted : tensor<4xi8>
  func.func @main() -> () attributes {iree.abi.stub} {
    return
  }
  util.initializer attributes {iree.compiler.consteval} {
    %cst = arith.constant dense<"0x0000000000000000"> : tensor<2x4xi4>
    %0 = flow.tensor.bitcast %cst : tensor<2x4xi4> -> tensor<4xi8>
    util.global.store %0, @hoisted : tensor<4xi8>
    util.return
  }
}

results in

module @jit_export_func attributes {iree.fixedpoint.iteration = 1 : index, iree.fixedpoint.modified} {
  util.global private @hoisted_0 : tensor<4xi8>
  util.global private @hoisted : tensor<4xi8>
  func.func @main() attributes {iree.abi.stub} {
    return
  }
  util.initializer attributes {iree.compiler.consteval} {
    %hoisted_0 = util.global.load @hoisted_0 : tensor<4xi8>
    util.global.store %hoisted_0, @hoisted : tensor<4xi8>
    util.return
  }
  util.initializer attributes {iree.compiler.consteval} {
    %cst = arith.constant dense<0> : tensor<2x4xi4>
    %0 = flow.tensor.bitcast %cst : tensor<2x4xi4> -> tensor<4xi8>
    util.global.store %0, @hoisted_0 : tensor<4xi8>
    util.return
  }
}

Then, running iree-opt --iree-consteval-jit-globals on the above results in

file.mlir:11:10: warning: skipping consteval initializer: unsupported type for current jit configuration: 'tensor<2x4xi4>'
    %0 = flow.tensor.bitcast %cst : tensor<2x4xi4> -> tensor<4xi8>
         ^
file.mlir:9:3: error: internal error: jit global source initialization order. global hoisted_0 has no value
  util.initializer attributes {iree.compiler.consteval} {
  ^

What component(s) does this issue relate to?

Compiler

Version information

10fd98b89616e7f1ae70b8c4036c2c715f1f5bc0

Additional context

Note: example MLIR obtained from iree-reduce + manual reduction/simplification of an internal model.
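(The two steps can likely also be combined into a single invocation, as is done later in this thread: iree-opt --iree-util-hoist-into-globals --iree-consteval-jit-globals file.mlir.)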

qedawkins commented 9 months ago

Ah, this is not good. #15714 might help in this particular case, because it's doubly hoisting the bitcast, but I'm not sure that's the general problem here. We probably shouldn't be hoisting operations that are already in util.initializers marked as iree.compiler.consteval.

ramiro050 commented 9 months ago

Let me try your patch and see if it fixes it.

ramiro050 commented 9 months ago

You are right, it does fix it for this particular case. However, I'm still getting the error when compiling my model, so I'll be back with a different MLIR variant.

ramiro050 commented 9 months ago

Running iree-opt --iree-util-hoist-into-globals --iree-consteval-jit-globals on

// -----// IR Dump Before HoistIntoGlobals (iree-util-hoist-into-globals) //----- //
#map = affine_map<(d0, d1) -> (d1, d0)>
#map1 = affine_map<(d0, d1) -> (d0, d1)>
module @jit_export_func attributes {iree.fixedpoint.iteration = 1 : index, iree.fixedpoint.modified} {
  util.global private @hoisted : tensor<4xi8>
  util.initializer attributes {iree.compiler.consteval} {
    %cst = arith.constant dense<"0x0000000000000000"> : tensor<2x4xi4>
    %0 = tensor.empty() : tensor<4x2xi4>
    %1 = linalg.generic {indexing_maps = [#map, #map1], iterator_types = ["parallel", "parallel"]} ins(%cst : tensor<2x4xi4>) outs(%0 : tensor<4x2xi4>) {
    ^bb0(%in: i4, %out: i4):
      linalg.yield %in : i4
    } -> tensor<4x2xi4>
    %2 = flow.tensor.bitcast %1 : tensor<4x2xi4> -> tensor<4xi8>
    util.global.store %2, @hoisted : tensor<4xi8>
    util.return
  }
}

results in

file.mlir:9:10: warning: skipping consteval initializer: unsupported type for current jit configuration: 'tensor<2x4xi4>'
    %1 = linalg.generic {indexing_maps = [#map, #map1], iterator_types = ["parallel", "parallel"]} ins(%cst : tensor<2x4xi4>) outs(%0 : tensor<4x2xi4>) {
         ^
file.mlir:6:3: error: internal error: jit global source initialization order. global "hoisted_0" has no value
  util.initializer attributes {iree.compiler.consteval} {
  ^

ramiro050 commented 9 months ago

@qedawkins, do you have any ideas for how to fix this?

qedawkins commented 9 months ago

OK, so after a little more investigation: the i4 types aren't relevant here; the same error occurs if those are i8 instead. Failing a better option, probably the easiest fix for now is to disable hoisting out of util.initializers. Taking the problematic output of hoist-into-globals, it looks like what's happening is that const-expr hoisting doesn't account for initializer order. For example, if I swap the order of the initializers, const-eval passes fine:

#map = affine_map<(d0, d1) -> (d1, d0)>
#map1 = affine_map<(d0, d1) -> (d0, d1)>
module @jit_export_func attributes {iree.fixedpoint.iteration = 1 : index, iree.fixedpoint.modified} {
  util.global private @hoisted_0 : tensor<4x2xi8>
  util.global private @hoisted : tensor<8xi8>
  util.initializer attributes {iree.compiler.consteval} {
    %cst = arith.constant dense<0> : tensor<2x4xi8>
    %0 = tensor.empty() : tensor<4x2xi8>
    %1 = linalg.generic {indexing_maps = [#map, #map1], iterator_types = ["parallel", "parallel"]} ins(%cst : tensor<2x4xi8>) outs(%0 : tensor<4x2xi8>) {
    ^bb0(%in: i8, %out: i8):
      linalg.yield %in : i8
    } -> tensor<4x2xi8>
    util.global.store %1, @hoisted_0 : tensor<4x2xi8>
    util.return
  }
  util.initializer attributes {iree.compiler.consteval} {
    %hoisted_0 = util.global.load @hoisted_0 : tensor<4x2xi8>
    %0 = flow.tensor.bitcast %hoisted_0 : tensor<4x2xi8> -> tensor<8xi8>
    util.global.store %0, @hoisted : tensor<8xi8>
    util.return
  }
}

with const-eval then folding everything down to

module @jit_export_func attributes {iree.fixedpoint.iteration = 1 : index, iree.fixedpoint.modified} {
  util.global private @hoisted_0 = dense<0> : tensor<4x2xi8>
  util.global private @hoisted = dense<0> : tensor<8xi8>
}

But looking at this, the fact that we now have two copies of the same global is problematic, so we likely shouldn't have hoisted out of the initializer to begin with (or we could have a pattern for collapsing initializer chains).
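For illustration, a collapsed form of the two initializers above might look roughly like this (a sketch, assuming the bodies are concatenated in module order and the util.global.load of @hoisted_0 is then forwarded from the store above it):

#map = affine_map<(d0, d1) -> (d1, d0)>
#map1 = affine_map<(d0, d1) -> (d0, d1)>
module @jit_export_func {
  util.global private @hoisted_0 : tensor<4x2xi8>
  util.global private @hoisted : tensor<8xi8>
  util.initializer attributes {iree.compiler.consteval} {
    %cst = arith.constant dense<0> : tensor<2x4xi8>
    %0 = tensor.empty() : tensor<4x2xi8>
    // Transpose of the zero constant, as in the first initializer above.
    %1 = linalg.generic {indexing_maps = [#map, #map1], iterator_types = ["parallel", "parallel"]} ins(%cst : tensor<2x4xi8>) outs(%0 : tensor<4x2xi8>) {
    ^bb0(%in: i8, %out: i8):
      linalg.yield %in : i8
    } -> tensor<4x2xi8>
    util.global.store %1, @hoisted_0 : tensor<4x2xi8>
    // The second initializer's load of @hoisted_0 becomes a direct use of %1,
    // removing the cross-initializer ordering dependency.
    %2 = flow.tensor.bitcast %1 : tensor<4x2xi8> -> tensor<8xi8>
    util.global.store %2, @hoisted : tensor<8xi8>
    util.return
  }
}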

benvanik commented 9 months ago

yeah, initializer order is tricky to handle in isolation - we could run the combine-initializers pass first and require that there's only one initializer prior to doing any hoisting, to at least make things simpler

(we'd still want to consteval stuff in initializers, as that shifts work from startup time to compile time)
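For reference, that ordering might look like a single invocation along the lines of iree-opt --iree-util-combine-initializers --iree-util-hoist-into-globals file.mlir, assuming the two pass flags compose on the command line in that order, so hoisting only ever sees a single initializer.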

qedawkins commented 9 months ago

oh, if combine-initializers already exists, that helps. I think we should disable hoisting out of regions already marked {iree.compiler.consteval}, though, because any hoisting we do on those initializers is potential double work (and the initializers we hoist from are then no longer evaluatable in isolation).

If I had to guess at how this is happening: this warning tells us that const-eval got skipped,

file.mlir:11:10: warning: skipping consteval initializer: unsupported type for current jit configuration: 'tensor<2x4xi4>'
    %0 = flow.tensor.bitcast %cst : tensor<2x4xi4> -> tensor<4xi8>

and then on the next iteration of the fixed point, we end up with a util.initializer marked {iree.compiler.consteval} being passed back into hoisting.

benvanik commented 9 months ago

yeah, give iree-util-combine-initializers a try (it just walks the module in order and merges them all) - it likely ignores the consteval stuff, though. consteval using attributes on initializers is likely the issue and something we've needed to fix for a while (not sure of the right solution, but what's there is kinda sketchy for these reasons :)