In our use case it is sometimes useful to have identity pack ops: specifically, when we don't want to pack any dimensions of an operand but still want to set up a new tensor for bufferization.
The use case I have for this is channel-first convolution. This PR is not strictly necessary, since I have an alternative (packing the input and output channel dimensions with tile size 1), but IMO this is neater.
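Concretely, the identity pack I have in mind looks something like this (a sketch; the shapes are illustrative for an NCHW operand, and I'm using the `tensor.pack` spelling, which newer trees have moved to `linalg.pack`):

```mlir
// Nothing is tiled: inner_dims_pos and inner_tiles are empty, so rank and
// shape are preserved, but a fresh destination tensor is still
// materialized, which is what bufferization needs.
%dest = tensor.empty() : tensor<1x64x56x56xf32>
%packed = tensor.pack %src inner_dims_pos = [] inner_tiles = []
    into %dest : tensor<1x64x56x56xf32> -> tensor<1x64x56x56xf32>
```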
The function `linalg::pack` doesn't create rank-preserving pack ops, see: https://github.com/llvm/llvm-project/blob/644899addd8fd789c93e9a0f0727d37eb1b29c55/mlir/lib/Dialect/Linalg/Transforms/Transforms.cpp#L542. That upstream design cannot easily be changed, because the current behavior is thoroughly tested for, see: https://github.com/llvm/llvm-project/blob/644899addd8fd789c93e9a0f0727d37eb1b29c55/mlir/test/Dialect/Linalg/transform-op-pack.mlir
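For contrast, the workaround mentioned above goes through `linalg::pack` with tile size 1 on the channel dimension, which always appends a unit inner dimension and so changes the rank (again a sketch with illustrative shapes):

```mlir
// Packing dim 1 (the channel dim) with tile size 1: the outer shape is
// unchanged, but a trailing unit dimension is appended, so the result
// has rank 5 instead of 4.
%dest = tensor.empty() : tensor<1x64x56x56x1xf32>
%packed = tensor.pack %src inner_dims_pos = [1] inner_tiles = [1]
    into %dest : tensor<1x64x56x56xf32> -> tensor<1x64x56x56x1xf32>
```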
If there's general approval, I'll add tests.