Closed: sdjordjevicTT closed this issue 1 week ago
Only input_tensor_b can be broadcast. That leads to some interesting corner cases (which the documentation processing has cut off, although that will need to be fixed separately), as well as input_tensor_a not being broadcastable.
I will need to update the documentation.
Re: "which the documentation processing has cut off, although that will need to be fixed separately": it turns out that https://docs.tenstorrent.com/ttnn/latest/ttnn/ttnn/matmul.html is badly out of date and does not reflect the current doc strings.
Updated the doc string to cover this scenario.
Doc string updated via PR https://github.com/tenstorrent/tt-metal/pull/13071
Describe the bug
The TTNN matmul op fails when the batch dim of input_tensor_a needs to be broadcast. According to the public documentation, this kind of product is supported, yet I consistently hit an error when executing this specific operation.
To Reproduce
Steps to reproduce the behavior:
Error once the above code is executed:
Expected behavior
As stated in the public docs, in this particular case input_tensor_a should be broadcast from (1, 128, 2048) to (7, 128, 2048) and the matrix product executed.
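For reference, standard batched-matmul broadcasting semantics (as NumPy implements them) handle exactly this case; the sketch below assumes an inner dim of 64 for input_tensor_b, since the report does not give its full shape:

```python
import numpy as np

# Shapes matching the report: input_tensor_a has batch dim 1,
# input_tensor_b has batch dim 7. The trailing dim 64 of b is a
# hypothetical value chosen for illustration only.
a = np.random.rand(1, 128, 2048).astype(np.float32)
b = np.random.rand(7, 2048, 64).astype(np.float32)

# Under standard broadcasting rules, a is virtually expanded to
# (7, 128, 2048) and multiplied batch-wise against b.
out = np.matmul(a, b)
assert out.shape == (7, 128, 64)
```

This is the behavior the public docs imply; TTNN instead errors out when the batch dim of the first operand is the one that needs expanding.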
Screenshots N/A
Environment information:
Additional context N/A