jerrysky3 opened 2 months ago
We also found that `ttnn.max(input_tensor)` always returns 0 even when all input values are < 0.
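For context, here is a plain-PyTorch sketch (no tt-metal device required; the 32x32 tile size is an assumption based on `ttnn.TILE_SIZE`) of why zero padding in `TILE_LAYOUT` could make the max of an all-negative tensor come out as 0:

```python
import torch

# Hypothetical illustration: TILE_LAYOUT pads a tensor up to a multiple
# of the tile size. If the pad value is 0 and every real value is
# negative, max over the padded tensor returns the pad value 0 instead
# of the true maximum.
TILE = 32                            # assumed value of ttnn.TILE_SIZE
x = -torch.ones((3, 5))              # all values are -1, true max is -1
padded = torch.zeros((TILE, TILE))   # simulate a zero-padded tile
padded[:3, :5] = x
true_max = x.max().item()            # -1.0
padded_max = padded.max().item()     # 0.0 -- the behavior reported above
```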
@jerrysky3 could you please provide one or more concrete examples of calling `ttnn.max` for the last case ("When the output shape is `(x, ttnn.TILE_SIZE * y)` where `x % ttnn.TILE_SIZE != 0`")?
Do you mean something like
```python
import torch
import ttnn
from tests.ttnn.utils_for_testing import assert_with_pcc


def test_max3(device):
    x = -torch.ones((3, 1, 3, 64), dtype=torch.float32)
    x_tt = ttnn.from_torch(x, dtype=ttnn.bfloat16, layout=ttnn.TILE_LAYOUT, device=device)
    keepdim = False  # also reproduces with True
    y_tt = ttnn.max(x_tt, dim=0, keepdim=keepdim)
    y = ttnn.to_torch(y_tt)
    z = torch.max(x, dim=0, keepdim=keepdim).values
    print(f"x {x}\ny_tt {y_tt}\nz {z}")
    assert_with_pcc(z, y)
```
When trying to convert `aten.amin/amax` to `ttnn.min/max`, we ran into the issues below with tt-metal and currently don't support them:

- `keepdim = false` isn't supported in tt-metal
- Unsupported `dim` inside tt-metal
- `(1, 1)` output shape instead of the expected `(1)`
- `(..., x, y)` input shape where `x or y % ttnn.TILE_SIZE != 0`. In this case `TILE_LAYOUT` pads the input tensor with 0, which gives incorrect results in the min/max operation. It should pad with `inf`/`-inf` corresponding to `ttnn.min`/`ttnn.max`
- `(x, ttnn.TILE_SIZE * y)` output shape where `x % ttnn.TILE_SIZE != 0`: tt-metal tries to reshape the padded output back to the expected output shape, which results in the assertion failure `Unable to reshape a tensor in TILE_LAYOUT to non-tile height and width! Please convert the tensor to ROW_MAJOR_LAYOUT first.`
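The suggested pad-with-`inf`/`-inf` fix can be sketched in plain PyTorch (the `pad_to_tile` helper and the 32x32 tile size are assumptions for illustration, not tt-metal's API): padding with the identity element of the reduction keeps the padded values from affecting the result.

```python
import torch

TILE = 32  # assumed value of ttnn.TILE_SIZE


def pad_to_tile(x: torch.Tensor, fill: float) -> torch.Tensor:
    """Hypothetical helper: pad the last two dims up to a multiple of TILE with `fill`."""
    h, w = x.shape[-2], x.shape[-1]
    pad_h = (-h) % TILE
    pad_w = (-w) % TILE
    # F.pad takes (left, right, top, bottom) for the last two dims
    return torch.nn.functional.pad(x, (0, pad_w, 0, pad_h), value=fill)


x = -torch.ones((3, 5))
# Zero padding corrupts the max of an all-negative tensor...
assert pad_to_tile(x, 0.0).max().item() == 0.0
# ...while padding with -inf (the identity for max) preserves it;
# min would symmetrically use +inf as the pad value.
assert pad_to_tile(x, float("-inf")).max().item() == x.max().item()
```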