MaheshRavishankar closed this issue 5 months ago
It is happening here: https://github.com/iree-org/iree/blob/695e1932dd6cf91f2de5fc1415f10fe85fd269f0/compiler/src/iree/compiler/GlobalOptimization/RaiseSpecialOps.cpp#L687
Looks like it's already written for convolution and contraction ops, but it's restricted to floating point. We just need to add a case for signed integer extends (`arith.extsi`) as well.
I'm also realizing that this pattern only works for floats, because `matmul_unsigned` and `matmul` have different extension semantics, and matching based on `ContractionOpInterface` doesn't tell us the extension semantics of the underlying named op.
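To make the semantic difference concrete: upstream `linalg.matmul` sign-extends narrow integer operands into the accumulator type, while `linalg.matmul_unsigned` zero-extends them, so the same i8 bit pattern produces different results. Here is a small standalone Python sketch (purely illustrative, not IREE or MLIR code) showing the divergence on a degenerate 1x1x1 matmul:

```python
# Illustration: why sign- vs zero-extension of i8 operands changes an
# integer matmul result. Models arith.extsi vs arith.extui applied to
# the operands before a multiply-accumulate in i32.

def sext_i8(x: int) -> int:
    """Interpret the low 8 bits of x as a signed value (like arith.extsi)."""
    x &= 0xFF
    return x - 256 if x >= 128 else x

def zext_i8(x: int) -> int:
    """Interpret the low 8 bits of x as an unsigned value (like arith.extui)."""
    return x & 0xFF

def matmul_1x1(a_byte: int, b_byte: int, extend) -> int:
    # Degenerate 1x1x1 matmul: a single multiply-accumulate after
    # extending both operands to the accumulator type.
    return extend(a_byte) * extend(b_byte)

a, b = 0xFF, 0x02  # 0xFF is -1 when signed, 255 when unsigned
print(matmul_1x1(a, b, sext_i8))  # signed semantics:   (-1) * 2 = -2
print(matmul_1x1(a, b, zext_i8))  # unsigned semantics: 255 * 2 = 510
```

Because the two named ops bake in opposite choices here, a pattern that only sees `ContractionOpInterface` cannot know which extension the op body will perform.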
So... do we close this issue, since we can't actually get named ops to do this? I don't know off the top of my head how the linalg matmul ops handle extension semantics. Is it possible to have OpDSL generate the sign extensions by default?
Sorry, that was unclear; there is no need to close this issue. I was just saying that the pattern in RaiseSpecialOps, as currently written, won't work for integer extends because the extension semantics are op-specific. We will have to handle the integer cases op by op, but the skeleton of the pattern I linked will still work.
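One way to picture the "op by op" handling: the fold is only legal when the signedness of the explicit extend matches the implicit extension the named op performs in its body. A rough Python sketch of that legality check (the table and function names are hypothetical, for illustration only; the real pattern would be C++ in RaiseSpecialOps):

```python
# Illustrative sketch, not the actual RaiseSpecialOps code. The fold
# legality must be decided per named op, because ContractionOpInterface
# does not expose whether the op's body sign- or zero-extends.

# Hypothetical lookup table: linalg.matmul sign-extends narrow integer
# operands, linalg.matmul_unsigned zero-extends them.
IMPLICIT_EXTENSION = {
    "linalg.matmul": "signed",
    "linalg.matmul_unsigned": "unsigned",
}

def can_fold_extend(named_op: str, extend_op: str) -> bool:
    """An explicit extend can fold into a named op only when its
    signedness matches the op's implicit extension semantics."""
    wanted = {"arith.extsi": "signed", "arith.extui": "unsigned"}[extend_op]
    return IMPLICIT_EXTENSION.get(named_op) == wanted

print(can_fold_extend("linalg.matmul", "arith.extsi"))           # True
print(can_fold_extend("linalg.matmul_unsigned", "arith.extsi"))  # False
```

The skeleton of the existing float pattern stays the same; only this per-op legality check is new.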
Using the instructions here https://github.com/nod-ai/playbook/blob/main/HOWTO/punet.md the following sequence of IR is emitted before any fusion (so before the first elementwise op fusion pass) kicks in
Folding the `extsi` into the conv would make things a lot easier. @qedawkins, can you point to where these already exist for matmul ops? I couldn't find it.