hschoi4448 opened this issue 5 months ago · Status: Open
@tt-aho @eyonland @jliangTT We are not able to check -0.0 on either GS or WHB0. How do we proceed with this? Suggestions would help us move forward.
Is this issue still ongoing, or is it one that cannot be addressed by TT? @tt-aho @eyonland @jliangTT @umadevimcw @razorback3
There are hardware limitations that prevent -0.0 from being handled. If there are use cases the current op cannot cover, we need to identify them and create new ops for those special cases. See the comments on #8944 (https://github.com/tenstorrent/tt-metal/issues/8944).
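For context on why this is hard to check at all: IEEE-754 defines -0.0 to compare *equal* to +0.0, so a plain equality test can never distinguish the two zeros; only the sign bit tells them apart. A minimal host-side sketch using just the Python standard library (not tt-metal API):

```python
import math

neg, pos = -0.0, 0.0

# IEEE-754 equality cannot tell the two zeros apart
print(neg == pos)               # True

# The sign bit survives in copysign and in the string form
print(math.copysign(1.0, neg))  # -1.0
print(str(neg))                 # -0.0
```

This is why a PCC- or equality-based comparison in the test harness will pass even when the device silently drops the sign of zero; the check has to inspect the sign bit explicitly.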
Describe the bug
Negative zero (-0.0) cannot be represented or checked on device; the value loses its sign on both GS and WHB0.
To Reproduce
Steps to reproduce the behavior: run the unit test below.
```python
# SPDX-License-Identifier: Apache-2.0

import torch
import pytest

import tt_lib
from tests.tt_eager.python_api_testing.unit_testing.backward_ops.utility_funcs import (
    data_gen_pt_tt,
    compare_results,
)

import ttnn
from tests.tt_eager.python_api_testing.sweep_tests import pytorch_ops, tt_lib_ops
from tests.ttnn.python_api_testing.sweep_tests import ttnn_ops


# Note: this local definition shadows the data_gen_pt_tt imported above.
def data_gen_pt_tt(input_shapes, device, required_grad=False, val=1):
    pt_tensor = (torch.ones(input_shapes, requires_grad=required_grad) * val).bfloat16()
    tt_tensor = (
        tt_lib.tensor.Tensor(pt_tensor, tt_lib.tensor.DataType.BFLOAT16)
        .to(tt_lib.tensor.Layout.TILE)
        .to(device)
    )
    return pt_tensor, tt_tensor


@pytest.mark.parametrize(
    "input_shapes",
    ((torch.Size([1, 1, 32, 32])),),
)
def test1(input_shapes, device):
    val = float("-0.0")  # fill the tile with negative zero to reproduce the issue
    in_data, input_tensor = data_gen_pt_tt(input_shapes, device, True, val=val)
```
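The repro above fills a bfloat16 tile with -0.0. At the bit level, -0.0 differs from +0.0 only in the sign bit, which is what the hardware reportedly cannot preserve. A small stdlib sketch of the bfloat16 encoding makes this concrete (the `bfloat16_bits` helper is illustrative, not part of any tt-metal API):

```python
import struct


def bfloat16_bits(x: float) -> int:
    # bfloat16 is the top 16 bits of the IEEE-754 float32 encoding
    (f32_bits,) = struct.unpack(">I", struct.pack(">f", x))
    return f32_bits >> 16


print(hex(bfloat16_bits(0.0)))   # 0x0
print(hex(bfloat16_bits(-0.0)))  # 0x8000
```

So a faithful device round-trip of -0.0 must carry the 0x8000 bit pattern through; anything that flushes it to 0x0000 has discarded the sign.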
Expected behavior
-0.0 written from the host should be preserved on device and detectable (e.g. via its sign bit), matching PyTorch's bfloat16 behavior.