tt_eager --> ttnn per op plan
We propose the following order to break this work down into smaller pieces:
Replacing usage in C++
Look for usages of the op in the tt_eager C++ code; usually there are many in composite_ops.cpp, complex_ops.cpp, and backward_ops.cpp. Each such usage should be replaced with its ttnn analog, for example repeat --> ttnn::repeat. This should be done for each operation. Missing operations should be added to ttnn.

Replacing usage in Python
For every unary op, look for the following entries in Tests/Sweeps, Demos, Models, and Examples:
ttl.tensor.repeat
tt_lib.tensor.repeat
ttnn.primary.tensor.repeat
and replace them with ttnn.repeat (see the sketch below for an example).
⚠️ Note that tt_lib operations might sometimes have a slightly different interface than their ttnn counterparts, so check the arguments when swapping calls.
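As an illustration, here is a minimal before/after sketch of such a replacement for repeat. The exact signatures of tt_lib.tensor.repeat and ttnn.repeat vary between releases, so the argument forms and helper calls used here (ttnn.open_device, ttnn.from_torch, ttnn.to_torch) are assumptions to verify against the installed version, not a definitive recipe.

```python
# Minimal sketch of migrating a repeat call from tt_lib to ttnn.
# Assumes a ttnn build with an attached device; exact signatures may
# differ between releases, so verify against your installed version.
import torch
import ttnn

device = ttnn.open_device(device_id=0)

torch_input = torch.rand(1, 1, 32, 32, dtype=torch.bfloat16)
input_tensor = ttnn.from_torch(torch_input, layout=ttnn.TILE_LAYOUT, device=device)

# Before (old API, one of the entries listed above):
#   output = tt_lib.tensor.repeat(input_tensor, [1, 1, 2, 2])
# After (ttnn analog):
output = ttnn.repeat(input_tensor, ttnn.Shape([1, 1, 2, 2]))

print(ttnn.to_torch(output).shape)  # expected: torch.Size([1, 1, 64, 64])

ttnn.close_device(device)
```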
Testing
For the best coverage, I recommend running these workflows. If any of them fail, check whether it is the same failure as on main: