manbearian closed this 7 months ago
oh... I literally did the same thing last night over at nhat/fix_nightly
:D I should have put up the PR earlier to save you the trouble of doing this. Looks good to me. Unfortunately, the pipeline will still fail after this, due to this commit, which I hit when testing last night: https://github.com/openai/triton/commit/72c983392749d31fef953beae5d983db9d778b41
The pipeline keeps failing because git can't check out the repo at 9f4ad17c22b889d3804d1a1c38ed79b0e936e82. That commit is also ahead of https://github.com/openai/triton/commit/72c983392749d31fef953beae5d983db9d778b41, which breaks the cpu backend too. Let's use `[BACKEND] Refactor wgmma descriptor creation` (https://github.com/openai/triton/pull/2725, commit https://github.com/openai/triton/commit/56c284cf7e39f249cdf1d8d5dba7892deb0286d6); it's right before the backend plugin breaking change.
@nhat-nguyen next time assign the issue to yourself when you start to take a look :)
Looks like upstream Triton updated their LLVM reference. The change llvm/llvm-project#71010 removed `memref.tensor_store` as redundant. I've updated our references to `memref.tensor_store` to use `bufferization.materialize_in_destination`, as per that PR.
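
For reference, the substitution looks roughly like this (a minimal sketch; the value names and element types here are illustrative, not taken from our code):

```mlir
// Before (op removed by llvm/llvm-project#71010):
memref.tensor_store %t, %dest : memref<16xf32>

// After (equivalent copy of a tensor into a writable memref):
bufferization.materialize_in_destination %t in writable %dest
    : (tensor<16xf32>, memref<16xf32>) -> ()
```

Semantics should be unchanged: both forms materialize the tensor value into the destination buffer.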