dgolubovicTT opened 1 week ago
fyi @tt-mpantic
@dgolubovicTT for a quicker and easier repro, can you provide the TTNN test code as part of the issue description (example)?
Also, the TTNN folks have their own templates for opening bug issues. To make their life a bit easier, let's use those template fields for the information we currently possess (not all fields are mandatory).
Describe the bug
For tensor shape (1, 32, 12, 100), transpose doesn't work for the -2 and -3 dimensions. Namely, it throws an error saying the tensor shape must be divisible by the tile dimension. It seems that padding is not applied in this case.
While looking into the transpose implementation, I noticed a difference: in
ttnn/cpp/ttnn/operations/data_movement/transpose/transpose.cpp,
when the transpose is HC (-2, -3), padding is only applied on C.

To Reproduce
tt-metal commit: 96fd0df449931e9c4958e761f40e0b8550c3e2c1
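For reference, here is a minimal sketch of the failing case in plain PyTorch (a stand-in for the device-bound TTNN call, which I can't run here). It shows the HC transpose semantics and why tile padding should kick in: after the swap, the last two dims are 32 and 100, and 100 is not a multiple of the 32x32 tile size. The `pad_up` helper is illustrative, not a ttnn API.

```python
import torch

TILE = 32  # tt-metal tile dimension (32x32 tiles)

def pad_up(n: int) -> int:
    # illustrative helper: round n up to the nearest multiple of the tile dim
    return -(-n // TILE) * TILE

x = torch.randn(1, 32, 12, 100)
y = x.transpose(-2, -3)  # HC transpose: swap C and H
assert y.shape == (1, 12, 32, 100)

# Tile padding a tilized layout would need on the last two dims:
print(pad_up(y.shape[-2]), pad_up(y.shape[-1]))  # 32 128
```

The equivalent TTNN call (on a tilized tensor) is what raises the divisibility error on this commit instead of padding W from 100 to 128.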
Expected behaviour
The transpose op should work regardless of divisibility by the tile dimensions.
To give a little context, we need this transpose case for the Llama bringup on Forge. It appears in the self-attention graph, and we need to support it.