Though a majority of the cases where dimensions become static happen early in the pipeline, there are some that happen after lowering into `flow` and `stream.tensor.*`. It'd be really nice to be able to propagate this static information before lowering into the stream dialects.

These come from consuming tensors whose shape may be knowable:
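A minimal sketch of the producer case (hypothetical IR - the exact op names and syntax here are assumptions for illustration, not taken from the issue): the size is fed in as a constant index, so the dimension is knowable even though the types remain dynamic.

```mlir
// Hypothetical: %c4 is a constant, so the ?-dim is knowable as 4.
%c4 = arith.constant 4 : index
%0 = tensor.empty(%c4) : tensor<?xf32>
%1 = flow.tensor.clone %0 : tensor<?xf32>{%c4}

// -> after propagating the constant into the types:

%0 = tensor.empty() : tensor<4xf32>
%1 = flow.tensor.clone %0 : tensor<4xf32>
```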
It can also happen with consumers:
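For instance (again a hypothetical sketch, with assumed syntax), a consumer-side `tensor.cast` to a static type proves what the producer's shape must be, so the producer could be recreated with the static type:

```mlir
// The cast asserts %0 is really tensor<4xf32>, so the producer of %0
// could be rebuilt with the static shape.
%0 = flow.tensor.clone %arg0 : tensor<?xf32>{%d}
%1 = tensor.cast %0 : tensor<?xf32> to tensor<4xf32>
```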
I'm not sure there's anything today that replaces the dim if there's a subsequent cast, but that'd be useful:
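Sketched with upstream ops (hypothetical example, assuming the usual `tensor.dim`/`tensor.cast` semantics), the `tensor.dim` could fold to a constant because a later cast proves the size:

```mlir
%c0 = arith.constant 0 : index
// Not folded today, even though the cast below proves the size is 4:
%dim = tensor.dim %0, %c0 : tensor<?xf32>
%1 = tensor.cast %0 : tensor<?xf32> to tensor<4xf32>
// %dim could be replaced with `arith.constant 4 : index`.
```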
There are a few approaches here, with the most robust being to make the `Util_ShapeAwareOp` interface support recreating the op with new static shape dimensions. A canonicalization pattern registered on the interface could then check operands/results for cases where more static information is available and recreate the op with that. Another approach would be to expose mutable fields on the interface, but that gets messy - there's only a dozen ops and it'd be easier to just rebuild them.

Ideally we'd then end up with something that turned the casts into clones:
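Something like the following (illustrative only - the `flow.tensor.reshape`/`flow.tensor.clone` syntax here is an approximation, and the same shape of rewrite would apply to the `stream.tensor.*` equivalents):

```mlir
// Before: a cast-like reshape refining a dynamic shape to a static one.
%1 = flow.tensor.reshape %0 : tensor<?xf32>{%d} -> tensor<4xf32>

// -> after the producer of %0 is rebuilt with the static type:

// The cast becomes a clone, preserving value semantics in case %0 is
// consumed by other ops as well.
%1 = flow.tensor.clone %0 : tensor<4xf32>
```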
Since in tensor form the cast may be forking an immutable tensor value, the clone preserves the behavior (for example, if there were two cast ops consuming the same tensor) - clone canonicalizers can then kick in and handle the rest of the cleanup.
Examples of where this can arise and not be caught earlier are any IPO we do (across global/function/branch boundaries), const eval that ends up introducing more static information, or specialization where we branch off paths that are, say, size 1 and size N and want to optimize the size-1 half. #8441 would also benefit as we can get more static 0's and kill more ops.