tenstorrent / tt-metal

:metal: TT-NN operator library and TT-Metalium low-level kernel programming model.
Apache License 2.0

[Bug Report] ttnn.reshape to 5-D does not work with TILE_LAYOUT or on device #13889

Open kevinwuTT opened 1 month ago

kevinwuTT commented 1 month ago

Describe the bug

When I try to reshape a 3-D tensor to 5-D while the tensor is on device in tile layout, I get the following error:

Traceback (most recent call last):
  File "reshape.py", line 58, in <module>
    ttnn_reshape = ttnn.reshape(ttnn_from_torch, (1, 32, 16, 3, 96))
  File "/home/ubuntu/repo/tt-metal/ttnn/ttnn/decorators.py", line 329, in __call__
    return self.function(*function_args, **function_kwargs)
RuntimeError: TT_THROW @ ../ttnn/cpp/ttnn/operations/core/core.cpp:40: tt::exception
info:
Cannot use squeeze_from_4D to set the tensor to the rank of 5!

To Reproduce

import ttnn
import torch

# 3-D input; note 4608 = 16 * 3 * 96, so the 5-D target shape is valid.
arg0_1 = torch.rand((1, 32, 4608), dtype=torch.bfloat16)

with ttnn.manage_device(device_id=0) as device:
    # Tile-layout tensor on device: this combination triggers the error.
    ttnn_from_torch = ttnn.from_torch(arg0_1, layout=ttnn.TILE_LAYOUT, dtype=ttnn.bfloat16, device=device)
    ttnn_reshape = ttnn.reshape(ttnn_from_torch, (1, 32, 16, 3, 96))  # raises TT_THROW
    ttnn_to_torch = ttnn.to_torch(ttnn_reshape)
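For reference, the element counts match (1 * 32 * 4608 == 1 * 32 * 16 * 3 * 96), and the equivalent host-side reshape in torch succeeds:

# Reference semantics: the same reshape works in torch,
# since 4608 == 16 * 3 * 96.
torch_reshape = torch.reshape(arg0_1, (1, 32, 16, 3, 96))
assert torch_reshape.shape == (1, 32, 16, 3, 96)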

Expected behavior

If I change the ttnn.from_torch call to `ttnn_from_torch = ttnn.from_torch(arg0_1, layout=ttnn.ROW_MAJOR_LAYOUT, dtype=ttnn.bfloat16)` (row-major layout, no device placement), then the reshape works.
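For completeness, here is the working variant as a full script, per the change above (row-major layout, host tensor):

import ttnn
import torch

arg0_1 = torch.rand((1, 32, 4608), dtype=torch.bfloat16)

# Row-major host tensor: the same 5-D reshape succeeds.
ttnn_from_torch = ttnn.from_torch(arg0_1, layout=ttnn.ROW_MAJOR_LAYOUT, dtype=ttnn.bfloat16)
ttnn_reshape = ttnn.reshape(ttnn_from_torch, (1, 32, 16, 3, 96))
ttnn_to_torch = ttnn.to_torch(ttnn_reshape)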


ntarafdar commented 1 week ago

@kevinwuTT can you comment on the priority of this?

kevinwuTT commented 1 week ago

@ntarafdar I would say medium-high. We initially got these to work by inserting pairs of to_layout calls from TILE to ROW_MAJOR and back (see the sketch below), but because those conversions make models run slowly, we're trying to optimize them away. So having support for these shapes in TILE_LAYOUT would be very helpful. Thanks!
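A minimal sketch of that workaround, assuming the 5-D reshape succeeds once the tensor is in ROW_MAJOR_LAYOUT (variable names here are illustrative, not from the report):

# Round-trip through ROW_MAJOR_LAYOUT around the 5-D reshape, then
# convert back. The two to_layout conversions are the overhead
# mentioned above.
ttnn_rm = ttnn.to_layout(ttnn_from_torch, ttnn.ROW_MAJOR_LAYOUT)
ttnn_reshaped = ttnn.reshape(ttnn_rm, (1, 32, 16, 3, 96))
ttnn_tiled = ttnn.to_layout(ttnn_reshaped, ttnn.TILE_LAYOUT)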