- simple_embedding.mlir is marked UNSUPPORTED: Metalium's implementation behaves slightly differently from PyTorch's; extra dimensions get added somewhere, so a tensor that should be [32, 32] becomes [32, 1, 1, 32]. This trips an assert that expects dim(-2) to be tile-divisible, since the default layout is TILE.
- embedding_non_tile.mlir is marked UNSUPPORTED: the tensor has non-tile-aligned dims, and this currently fails in ttrt (ttnn) because we're not handling layouts correctly.
- embedding_1d_tensor.mlir is marked XFAIL: seems to fail due to a bug in MLIR; opened #471 to track.
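For reference, a minimal sketch of the kind of constraint the assert enforces, assuming 32x32 tiles; the tile size, helper name, and the [5, 10] example shape are illustrative assumptions, not the actual Metalium code:

```python
TILE_DIM = 32  # assumed 32x32 tile size; illustrative, not the actual Metalium constant

def dim_minus_2_tile_divisible(shape, tile=TILE_DIM):
    """Sketch of the assert described above: with TILE layout,
    dim(-2) of the tensor must be divisible by the tile size."""
    return len(shape) >= 2 and shape[-2] % tile == 0

# The intended [32, 32] tensor satisfies the constraint:
print(dim_minus_2_tile_divisible([32, 32]))        # True

# The shape that actually gets produced does not: dim(-2) is 1,
# which is what fires the assert.
print(dim_minus_2_tile_divisible([32, 1, 1, 32]))  # False

# Non-tile-aligned dims (as in embedding_non_tile.mlir) fail the
# same kind of check, e.g. a hypothetical [5, 10] tensor:
print(dim_minus_2_tile_divisible([5, 10]))         # False
```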