utterances-bot opened 2 years ago
"The purpose of these dialects to to faithfully represent the source model for the specific framework"
I think it should be "The purpose of these dialects is to faithfully represent the source model for the specific framework"
"The linalg dialect can operate on both tenors (=> tensors) and buffers."
@AmosChenYQ: thanks for pointing it out! Fixed.
Thanks for this awesome post! I am a newbie to this area and have a question regarding the lower-level part. The majority of the content (linalg/memref/vector/scf/cf) is device-agnostic, but I noticed that there are dialects like gpu/nvgpu/avx512. When are these dialects supposed to kick in? Also, if LLVM IR and SPIR-V are the only two lower-end exits, I wonder whether LLVM IR generated for a CPU target would also work on a GPU back-end. If not, why not?
So, MHLO is for TensorFlow, and TOSA is for others like Torch?
MLIR CodeGen Dialects for Machine Learning Compilers | Lei.Chat()
The initial blog post in this series captured my overall take on the evolution trends of compilers and IRs. It also touched on LLVM IR, SPIR-V, and MLIR, explaining the problems they address and their design focuses. Today I will expand on MLIR and talk about its dialect hierarchy for machine learning (ML) compilers systematically.
https://www.lei.chat/posts/mlir-codegen-dialects-for-machine-learning-compilers/