iree-org / iree

A retargetable MLIR-based machine learning compiler and runtime toolkit.
http://iree.dev/

Finish LinalgExt operation support on all backends #16886

Open MaheshRavishankar opened 5 months ago

MaheshRavishankar commented 5 months ago

One of the issues faced during SDXL support (https://github.com/openxla/iree/pull/16854) was the missing support for operations added in LinalgExt on all codegen backends, i.e. CPU, SPIR-V, and LLVMGPU.

Main Issues

1) iree_linalg_ext.attention (https://github.com/openxla/iree/blob/2cdf1452bb2f877baf8723ab567363094bea10bd/compiler/src/iree/compiler/Dialect/LinalgExt/IR/LinalgExtOps.td#L514): The main issue here was that the TileAndDecomposeAttentionPass is not really tested on any end-to-end compilation path. An efficient compilation of this op was built up using a transform dialect script that was custom tuned for a single architecture, so it was hard to test models containing this operation on any other hardware. (A reference computation for the op is sketched after this list.)

2) iree_linalg_ext.winograd.input_transform (https://github.com/openxla/iree/blob/2cdf1452bb2f877baf8723ab567363094bea10bd/compiler/src/iree/compiler/Dialect/LinalgExt/IR/LinalgExtOps.td#L1043): This operation was working on the SPIR-V and CPU backends, but not on the LLVMGPU backend. It wasn't tested end-to-end on all backends, but there is some coverage for CPU and SPIR-V (https://github.com/openxla/iree/blob/main/tests/e2e/linalg_ext_ops/winograd_input.mlir), so it was relatively easy to get working on the LLVMGPU backend.

3) iree_linalg_ext.winograd.filter_transform: This operation does not actually exist. The Winograd filter transform is instead performed by constant folding the filter weights: the convolution filters have to be converted from resources to inline constants so that they can be evaluated (very slowly) at compile time. (The folded computation is sketched after this list.)

4) iree_linalg_ext.winograd.output_transform: Like the input transform, this operation was working on the SPIR-V and CPU backends but not on the LLVMGPU backend. It wasn't tested end-to-end on all backends, but there is some coverage for CPU and SPIR-V (https://github.com/openxla/iree/blob/main/tests/e2e/linalg_ext_ops/winograd_output.mlir), so it was relatively easy to get working on the LLVMGPU backend.
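For reference, and ignoring IREE's actual lowering details, iree_linalg_ext.attention computes scaled dot-product attention. A minimal NumPy sketch, assuming the usual 1/sqrt(d) scaling and leaving out batch dimensions and any explicit scale operand:

```python
import numpy as np

def reference_attention(q, k, v):
    """Plain scaled dot-product attention: softmax(q @ k.T / sqrt(d)) @ v.

    q: (n, d), k: (m, d), v: (m, dv). Illustrative only; not IREE code.
    """
    scale = 1.0 / np.sqrt(q.shape[-1])
    s = (q @ k.T) * scale
    s = s - s.max(axis=-1, keepdims=True)   # numerically stable softmax
    p = np.exp(s)
    p = p / p.sum(axis=-1, keepdims=True)
    return p @ v
```

Similarly, the filter transform that is currently handled by constant folding (item 3 above) is just a small matrix sandwich applied to each 3x3 filter tile. A sketch using the F(2x2, 3x3) Winograd matrices from Lavin & Gray; the tile size IREE actually uses may differ:

```python
import numpy as np

# F(2x2, 3x3) filter-transform matrix (Lavin & Gray). Illustrative sketch only.
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])

def winograd_filter_transform(g):
    """Transform a 3x3 filter tile g into the 4x4 Winograd domain: G @ g @ G.T.

    This is the computation that currently has to be evaluated at compile time
    by constant-folding the inlined filter weights, since there is no
    iree_linalg_ext.winograd.filter_transform op.
    """
    return G @ g @ G.T
```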

Covered commits

Immediate next steps

1) Make iree_linalg_ext.attention work on all backends (at least the CPU and LLVMGPU backends) and have it tested in CI. It should be reasonably functional across architectures, which will make it robust and easy to port. (A sketch of the decomposition involved is included below.)
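For context on what making attention work everywhere entails: as I understand it, the tile-and-decompose approach follows the standard online-softmax (FlashAttention-style) recurrence, which is backend-agnostic and easy to check against the reference computation above. A rough NumPy sketch; the names and tiling are illustrative, not what the pass actually emits:

```python
import numpy as np

def tiled_attention(q, k, v, tile=128):
    """Online-softmax decomposition sketch: process K/V in tiles while keeping
    running max/sum statistics, so the result matches
    softmax(q @ k.T / sqrt(d)) @ v without materializing the full score matrix.
    """
    n, d = q.shape
    scale = 1.0 / np.sqrt(d)
    acc = np.zeros((n, v.shape[1]))              # running weighted sum of V
    row_max = np.full((n, 1), -np.inf)           # running max of scores
    row_sum = np.zeros((n, 1))                   # running softmax denominator
    for start in range(0, k.shape[0], tile):
        k_t = k[start:start + tile]
        v_t = v[start:start + tile]
        s = (q @ k_t.T) * scale                  # partial score block
        new_max = np.maximum(row_max, s.max(axis=1, keepdims=True))
        correction = np.exp(row_max - new_max)   # rescale earlier partials
        p = np.exp(s - new_max)
        acc = acc * correction + p @ v_t
        row_sum = row_sum * correction + p.sum(axis=1, keepdims=True)
        row_max = new_max
    return acc / row_sum
```

Having a per-backend numerics check of this kind against the reference computation would catch decomposition bugs independently of the lowering details on each backend.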

ScottTodd commented 2 months ago

What's the latest status here? Do we want to use this as a tracking issue? A few of us are noticing and getting blocked by uneven support for these LinalgExt ops.

MaheshRavishankar commented 2 months ago

The winograd op support has largely landed; there are CPU and ROCm tests. Attention is in progress. Which ops are you having issues with?

ScottTodd commented 2 months ago

> Which ops are you having issues with?

Mainly attention, but I can't easily tell, and that's the larger problem. There are several inactive issues like this one and https://github.com/iree-org/iree/issues/17467 saying things are incomplete, and test coverage is mixed across backends.