amalbasaTT opened this issue 1 week ago
@amalbasaTT The issue is in the shapes that are being generated as part of the scripts. Please see the image below for the case that throws TT_FATAL.
The shapes are all valid; all of the test cases throw assertion errors, not fatal errors. All of the test tensors are of rank 3 or 4, and the scripts generate them in such a way that when a tensor is converted to 2D (as is the case when sharding a tensor), its dimensions are all divisible by 32.
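To make the divisibility argument concrete, here is a minimal sketch; the shape below is an illustrative example, not one taken from the sweep scripts:

import math

# Hypothetical rank-4 shape: the second-to-innermost dimension (40) is NOT a multiple of 32.
shape = (1, 4, 40, 64)

# Collapsing to 2D, as happens when sharding: all outer dimensions fold into the height.
collapsed_height = math.prod(shape[:-1])  # 1 * 4 * 40 = 160
collapsed_width = shape[-1]               # 64

# Both collapsed dimensions are tile-aligned (divisible by 32) ...
assert collapsed_height % 32 == 0 and collapsed_width % 32 == 0

# ... yet the original second-to-innermost dimension is not, which is exactly
# the condition under which the low PCC is reported.
assert shape[-2] % 32 != 0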
Let me do a fresh build and test it.
Describe the bug
ttnn.tril gives low PCC when using sharded strategies and the input tensor (of rank 3 or 4) has a second-to-innermost dimension that is not divisible by 32. The problem has been observed on Wormhole_B0.
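For context, below is a sketch of the kind of sharded configuration involved, built with ttnn.create_sharded_memory_config; the rank-3 shape and the 5x3 core grid are assumptions chosen for illustration, not values taken from the sweep:

import ttnn

# Hypothetical rank-3 shape: the second-to-innermost dimension (80) is not a multiple of 32,
# while the 2D-collapsed shape (480, 64) still splits evenly across 5 x 3 = 15 cores
# into tile-aligned shards of 32 x 64.
input_shape = (6, 80, 64)

height_sharded_cfg = ttnn.create_sharded_memory_config(
    shape=input_shape,
    core_grid=ttnn.CoreGrid(y=5, x=3),
    strategy=ttnn.ShardStrategy.HEIGHT,
    orientation=ttnn.ShardOrientation.ROW_MAJOR,
)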
To Reproduce
import pytest
import torch
import ttnn

from tests.sweep_framework.sweep_utils.utils import gen_shapes, get_device_grid_size, get_sharded_config
from tests.tt_eager.python_api_testing.sweep_tests.generation_funcs import gen_func_with_cast_tt, _gen_reshape_args_from_volume
from tests.ttnn.utils_for_testing import check_with_pcc
from models.utility_functions import torch_random
Y, X = get_device_grid_size()
DEVICE_GRID_SIZE = ttnn.CoreGrid(y=Y, x=X)
def gen_test_sweep_args(gen_unsafe, num_shapes, shard_orientation, sharding_strategy=None):
    if sharding_strategy:
        assert sharding_strategy in ["block", "height", "width"]
    # ... rest of the generator (it yields (input_shape, dtype, dlayout, mem_cfg, data_seed)
    # tuples) not shown ...
def run_tril_sharded_tests(
    input_shape,
    dtype,
    dlayout,
    mem_cfg,
    data_seed,
    device,
):
    torch.manual_seed(data_seed)
    # ... rest of the runner (it runs ttnn.tril on the sharded input and checks PCC
    # against torch.tril with check_with_pcc) not shown ...
test_sweep_args = list(gen_test_sweep_args(True, 2, "row_major", "block"))
@pytest.mark.parametrize(
    "input_shape, dtype, dlayout, mem_cfg, data_seed",
    test_sweep_args,
)
def test_tril_sharded(input_shape, dtype, dlayout, mem_cfg, data_seed, device):
    run_tril_sharded_tests(input_shape, dtype, dlayout, mem_cfg, data_seed, device)
pytest test_tril_sharded.py
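For reference, here is a simplified, self-contained sketch of the repro that avoids the internal sweep helpers. The input shape, the 5x2 core grid, and the standalone device handling are illustrative assumptions rather than values from the sweep scripts; per the report above, the final PCC check is expected to fail.

import torch
import ttnn
from tests.ttnn.utils_for_testing import check_with_pcc

# Hypothetical failing shape: 40 is not a multiple of 32, but the 2D-collapsed
# shape (160, 64) block-shards evenly over a 5 x 2 core grid into 32 x 32 shards.
input_shape = (1, 4, 40, 64)

device = ttnn.open_device(device_id=0)
try:
    torch_input = torch.randn(input_shape).bfloat16()
    torch_output = torch.tril(torch_input)

    sharded_mem_cfg = ttnn.create_sharded_memory_config(
        shape=input_shape,
        core_grid=ttnn.CoreGrid(y=5, x=2),  # assumed grid, well within the Wormhole 8x8 grid
        strategy=ttnn.ShardStrategy.BLOCK,
        orientation=ttnn.ShardOrientation.ROW_MAJOR,
    )

    ttnn_input = ttnn.from_torch(
        torch_input,
        dtype=ttnn.bfloat16,
        layout=ttnn.TILE_LAYOUT,
        device=device,
        memory_config=sharded_mem_cfg,
    )

    ttnn_output = ttnn.to_torch(ttnn.tril(ttnn_input))

    # Per the report, this comparison comes back with a low PCC.
    passed, pcc_message = check_with_pcc(torch_output, ttnn_output, 0.999)
    print(passed, pcc_message)
finally:
    ttnn.close_device(device)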