sdjordjevicTT closed this issue 1 week ago
a_tt and b_tt are not vectors:
>>> print(a_tt)
ttnn.Tensor([[ 0.45627, 0.99778, ..., 0.00000, 0.00000],
[ 0.52836, 0.80999, ..., 0.00000, 0.00000],
...,
[ 0.08708, 0.32091, ..., 0.00000, 0.00000],
[ 0.64868, 0.49321, ..., 0.00000, 0.00000]], shape=Shape([1[32], 2048]), dtype=DataType::FLOAT32, layout=Layout::TILE)
>>> print(b_tt)
ttnn.Tensor([[ 0.45481, 0.30029, ..., 0.00000, 0.00000],
[ 0.68751, 0.74413, ..., 0.00000, 0.00000],
...,
[ 0.17147, 0.75279, ..., 0.00000, 0.00000],
[ 0.77277, 0.36038, ..., 0.00000, 0.00000]], shape=Shape([1[32], 2048]), dtype=DataType::FLOAT32, layout=Layout::TILE)
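As an aside, the printed shape `Shape([1[32], 2048])` denotes a rank-2 tensor whose first dimension has logical size 1 padded to 32, because the TILE layout pads the last two dimensions up to multiples of the 32x32 tile size. A small sketch of that rounding rule (the helper `pad_to_tile` is hypothetical, written only to illustrate the padding; it is not part of the ttnn API):

```python
def pad_to_tile(dim: int, tile: int = 32) -> int:
    """Round a dimension up to the next multiple of the tile size."""
    return ((dim + tile - 1) // tile) * tile

# The logical shape [1, 2048] is stored as padded shape [32, 2048] in TILE
# layout, which is what Shape([1[32], 2048]) above denotes.
print(pad_to_tile(1))     # 32
print(pad_to_tile(2048))  # 2048
```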
I think you would need to use something like ttnn.ones([256]).
However, that also leads to an exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/proj_sw/user_dev/bbradel/tt-metal/ttnn/ttnn/decorators.py", line 326, in __call__
return self.function(*function_args, **function_kwargs)
RuntimeError: TT_FATAL @ ../ttnn/cpp/ttnn/tensor/types.cpp:170: normalized_index >= 0 and normalized_index < rank
info:
Index is out of bounds for the rank, should be between 0 and 0 however is 18446744073709551615
backtrace:
--- /proj_sw/user_dev/bbradel/tt-metal/ttnn/ttnn/_ttnn.so(+0x173783b) [0x7f71e25e383b]
--- tt::tt_metal::LegacyShape::get_normalized_index(long) const
--- tt::tt_metal::LegacyShape::operator[](long) const
--- /proj_sw/user_dev/bbradel/tt-metal/ttnn/ttnn/_ttnn.so(+0x505023) [0x7f71e13b1023]
--- ttnn::operations::matmul::bound_matmul(tt::tt_metal::Tensor const&, tt::tt_metal::Tensor const&, std::__1::optional<tt::tt_metal::Tensor const> const&, ttnn::operations::matmul::Matmul const&, unsigned char const&)
--- /proj_sw/user_dev/bbradel/tt-metal/ttnn/ttnn/_ttnn.so(_ZN4ttnn10operations6matmul15MatmulOperation6invokeERKN2tt8tt_metal6TensorES7_bbNSt3__18optionalIKNS4_12MemoryConfigEEENS9_IKNS4_8DataTypeEEENS9_IKNS8_7variantIJNS1_28MatmulMultiCoreProgramConfigENS1_45MatmulMultiCoreNonOptimizedReuseProgramConfigENS1_33MatmulMultiCoreReuseProgramConfigENS1_42MatmulMultiCoreReuseMultiCastProgramConfigENS1_44MatmulMultiCoreReuseMultiCast1DProgramConfigENS1_53MatmulMultiCoreReuseMultiCastDRAMShardedProgramConfigEEEEEERKNS9_IKNS8_12basic_stringIcNS8_11char_traitsIcEENS8_9allocatorIcEEEEEENS9_IKNSG_IJNS_28GrayskullComputeKernelConfigENS_27WormholeComputeKernelConfigEEEEEENS9_IKNS_5types8CoreGridEEE+0x26f) [0x7f71e23aad8f]
--- /proj_sw/user_dev/bbradel/tt-metal/ttnn/ttnn/_ttnn.so(+0x294d23c) [0x7f71e37f923c]
--- /proj_sw/user_dev/bbradel/tt-metal/ttnn/ttnn/_ttnn.so(+0x294bc12) [0x7f71e37f7c12]
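For what it's worth, the out-of-bounds index 18446744073709551615 in the TT_FATAL message above is just -1 reinterpreted as an unsigned 64-bit integer, which suggests (my assumption, not confirmed in this thread) that some index arithmetic inside the matmul path goes negative on a rank-1 shape and is then fed through an unsigned bounds check:

```python
# 18446744073709551615 == 2**64 - 1, i.e. -1 viewed as an unsigned 64-bit value
assert (-1) % 2**64 == 18446744073709551615

# Hypothetical cause: an index computation such as rank - 2 on a rank-1 shape
# yields -1, which a size_t-based bounds check sees as 2**64 - 1 and rejects.
rank = 1
index = rank - 2            # -1
print(index % 2**64)        # 18446744073709551615
```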
I will investigate this later on.
I looked at the old code. We never supported this.
I also tried to implement a workaround by calling other ops to get the right shape. There are currently too many issues with the other ops for that to work.
I will just update the documentation.
Updated doc string to indicate inputs need to have at least 2 dimensions.
Doc string updated via PR https://github.com/tenstorrent/tt-metal/pull/13071
Describe the bug
The TTNN matmul op does not support the vector-vector product. According to the public documentation, the vector-vector product is supported (though it should return a tensor with a non-empty shape). However, when I attempt to run the vector-vector product, I encounter an error.
To Reproduce
Steps to reproduce the behavior:
Error once the above code is executed:
Expected behavior
The vector-vector product should return a scalar value, as stated in the public documentation.
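For reference, these are the expected semantics in NumPy (used here purely to illustrate the documented behavior; NumPy is not part of ttnn): matmul of two 1-D vectors contracts their single axis and returns a 0-d scalar.

```python
import numpy as np

a = np.ones(256, dtype=np.float32)
b = np.ones(256, dtype=np.float32)

# matmul of two 1-D vectors is their dot product: a 0-d (scalar) result
result = np.matmul(a, b)
print(result)        # 256.0
print(result.shape)  # () -- an empty shape, i.e. a scalar
```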
Screenshots
N/A
Please complete the following environment information:
Additional context
N/A