MaheshRavishankar opened 5 months ago
What's the latest status here? Do we want to use this as a tracking issue? A few of us are noticing and getting blocked by uneven support for these LinalgExt ops.
The Winograd op support has landed to a great extent; there are CPU and ROCm tests. Attention is in progress. Which ops are you having issues with?
> Which ops are you having issues with?
Mainly attention, but I can't easily tell, and that's the larger problem. There are several inactive issues like this one and https://github.com/iree-org/iree/issues/17467 saying things are incomplete, and test coverage is mixed across backends.
Tests in `tests/e2e/linalg_ext_ops` are running on ROCm/HIP, but some ops from https://iree.dev/reference/mlir-dialects/LinalgExt/ are not included, and many are marked excluded on various backends (there is no XFAIL support there, so we won't even know if they start passing).
One of the issues faced during SDXL support (https://github.com/openxla/iree/pull/16854) was missing support for operations added in LinalgExt across all codegen backends, i.e., CPU, SPIR-V, and LLVMGPU.
### Main Issues
1) `iree_linalg_ext.attention` (https://github.com/openxla/iree/blob/2cdf1452bb2f877baf8723ab567363094bea10bd/compiler/src/iree/compiler/Dialect/LinalgExt/IR/LinalgExtOps.td#L514): The main issue here was that the `TileAndDecomposeAttentionPass` is not really tested on any end-to-end compilation path. An efficient compilation of this op was built up using a transform dialect script that was custom-tuned for a single architecture, so it was hard to test models that had this operation on any other hardware. A sketch of the op is shown below.
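For reference, a minimal sketch of what the op looks like in IR. The shapes here are illustrative and the exact assembly format (e.g., whether a scale operand is explicit) has changed across IREE versions, so treat this as a rough picture rather than the current op definition:

```mlir
// Illustrative only: shapes are arbitrary, and the op's assembly format has
// evolved over time (e.g., an explicit scale operand was added at some point).
func.func @attention(%query: tensor<1x1024x64xf32>, %key: tensor<1x1024x64xf32>,
                     %value: tensor<1x1024x64xf32>) -> tensor<1x1024x64xf32> {
  %empty = tensor.empty() : tensor<1x1024x64xf32>
  // Fused scaled-dot-product attention: models softmax(Q * K^T / sqrt(d)) * V.
  %result = iree_linalg_ext.attention
      ins(%query, %key, %value
          : tensor<1x1024x64xf32>, tensor<1x1024x64xf32>, tensor<1x1024x64xf32>)
      outs(%empty : tensor<1x1024x64xf32>) -> tensor<1x1024x64xf32>
  return %result : tensor<1x1024x64xf32>
}
```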
2) `iree_linalg_ext.winograd.input_transform` (https://github.com/openxla/iree/blob/2cdf1452bb2f877baf8723ab567363094bea10bd/compiler/src/iree/compiler/Dialect/LinalgExt/IR/LinalgExtOps.td#L1043): This operation was working on the SPIR-V and CPU backends, but not on the LLVMGPU backend. Again, this wasn't tested end-to-end on all backends, but it was somewhat tested on the CPU and SPIR-V backends (https://github.com/openxla/iree/blob/main/tests/e2e/linalg_ext_ops/winograd_input.mlir), so it was relatively easy to get working on the LLVMGPU backend. A sketch is shown below.
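A minimal sketch of the input transform, modeled on the e2e test linked above; shapes are illustrative (with `output_tile_size(6)` and `kernel_size(3)`, each input tile is 8x8 = 6 + 3 - 1, and a 10x10 image yields a 2x2 grid of tiles):

```mlir
// Sketch modeled on tests/e2e/linalg_ext_ops/winograd_input.mlir; shapes are
// illustrative. The 8x8 leading dims are output_tile_size + kernel_size - 1.
func.func @winograd_input(%img: tensor<1x10x10x1280xf32>) -> tensor<8x8x1x2x2x1280xf32> {
  %empty = tensor.empty() : tensor<8x8x1x2x2x1280xf32>
  %result = iree_linalg_ext.winograd.input_transform
      output_tile_size(6) kernel_size(3) image_dimensions([1, 2])
      ins(%img : tensor<1x10x10x1280xf32>)
      outs(%empty : tensor<8x8x1x2x2x1280xf32>) -> tensor<8x8x1x2x2x1280xf32>
  return %result : tensor<8x8x1x2x2x1280xf32>
}
```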
3) `iree_linalg_ext.winograd.filter_transform`: This operation does not actually exist. The filter transform for Winograd was implemented by constant-folding the constant filter weights: to support this, the filters for the convolution needed to be converted from resources to inline constants and were evaluated (very slowly) at compile time.
4) `iree_linalg_ext.winograd.output_transform`: This operation was working on the SPIR-V and CPU backends, but not on the LLVMGPU backend. Again, this wasn't tested end-to-end on all backends, but it was somewhat tested on the CPU and SPIR-V backends (https://github.com/openxla/iree/blob/main/tests/e2e/linalg_ext_ops/winograd_output.mlir), so it was relatively easy to get working on the LLVMGPU backend. A sketch is shown below.
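Similarly, a minimal sketch of the output transform, modeled on the linked winograd_output.mlir test; shapes are illustrative (a 2x2 grid of 6x6 output tiles reconstructs a 12x12 image):

```mlir
// Sketch modeled on tests/e2e/linalg_ext_ops/winograd_output.mlir; shapes are
// illustrative. Inverse of the input transform: tiles back to an image.
func.func @winograd_output(%tiles: tensor<8x8x1x2x2x32xf32>) -> tensor<1x12x12x32xf32> {
  %empty = tensor.empty() : tensor<1x12x12x32xf32>
  %result = iree_linalg_ext.winograd.output_transform
      output_tile_size(6) kernel_size(3) image_dimensions([1, 2])
      ins(%tiles : tensor<8x8x1x2x2x32xf32>)
      outs(%empty : tensor<1x12x12x32xf32>) -> tensor<1x12x12x32xf32>
  return %result : tensor<1x12x12x32xf32>
}
```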
### Covered commits

### Immediate next steps
1) Make `iree_linalg_ext.attention` work on all backends (at least the CPU and LLVMGPU backends) and have it tested in CI. It should be reasonably functional on different architectures, which will make it robust and easily portable.
   - Having `iree_linalg_ext.winograd.input_transform` and `iree_linalg_ext.winograd.output_transform` working on the CPU and SPIR-V backends made them easy to port to the LLVMGPU backend.
   - The `TileAndDecomposeAttentionPass` needs to be fixed. This might require re-evaluating the pass implementation to use the `PartialReductionTilingOpInterface`.
2) Adding an `iree_linalg_ext.filter_transform` operation to the LinalgExt dialect (see the hypothetical sketch after this list).
3) Testing the `iree_linalg_ext.winograd.input_transform` and `iree_linalg_ext.winograd.output_transform` ops by themselves, as well as adding tests that convert a convolution into Winograd and check that they work as a whole.
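To make step 2) concrete, here is a purely hypothetical sketch of what such an op could look like, mirroring the assembly format of the existing input/output transforms. The op does not exist today; its name, the `kernel_dimensions` attribute, and all shapes are invented for illustration:

```mlir
// Hypothetical: no such op exists in LinalgExt yet. The op name, the
// kernel_dimensions attribute, and the shapes are invented for illustration;
// a 3x3 kernel expands to an 8x8 tile (output_tile_size + kernel_size - 1).
func.func @winograd_filter(%filter: tensor<3x3x64x128xf32>) -> tensor<8x8x64x128xf32> {
  %empty = tensor.empty() : tensor<8x8x64x128xf32>
  %result = iree_linalg_ext.winograd.filter_transform
      output_tile_size(6) kernel_size(3) kernel_dimensions([0, 1])
      ins(%filter : tensor<3x3x64x128xf32>)
      outs(%empty : tensor<8x8x64x128xf32>) -> tensor<8x8x64x128xf32>
  return %result : tensor<8x8x64x128xf32>
}
```

With a first-class op like this, the filter transform could be codegen'd like the input/output transforms instead of being constant-evaluated at compile time.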