With https://github.com/iree-org/iree/pull/17681, attention codegen for llvm-cpu is in a good enough state that we no longer need to rely on a transform dialect spec (at least for llvm-cpu).
This patch removes the e2e tests for the attention transform dialect spec, since they add maintenance burden to a path that will probably not be maintained going forward. Correctness coverage remains: the e2e tests in e2e/linalg_ext/ check small attention cases, and pkgci runs the sdxl + attention tests.