antiagainst / antiagainst.github.io

Generated website for my personal blog
https://lei.chat/

MLIR CodeGen Dialects for Machine Learning Compilers | Lei.Chat() #12

utterances-bot opened 1 year ago

utterances-bot commented 1 year ago

MLIR CodeGen Dialects for Machine Learning Compilers | Lei.Chat()

The initial blog post in this series captured my overall take on the evolution trends of compilers and IRs. It also touched on LLVM IR, SPIR-V, and MLIR, explaining the problems they address and their design focuses. Today I will expand on MLIR and systematically talk about its dialect hierarchy for machine learning (ML) compilers.

https://www.lei.chat/posts/mlir-codegen-dialects-for-machine-learning-compilers/

AmosChenYQ commented 1 year ago

"The purpose of these dialects to to faithfully represent the source model for the specific framework"

I think it should be "The purpose of these dialects is to faithfully represent the source model for the specific framework"

AmosChenYQ commented 1 year ago

"The linalg dialect can operate on both tenors(=>tensors) and buffers."

antiagainst commented 1 year ago

@AmosChenYQ: thanks for pointing out! Fixed.

zobeideThePlayer commented 2 months ago

Thanks for this awesome post! I am a newbie in this area and have a question regarding the lower-level part. The majority of the content (linalg/memref/vector/scf/cf) is device-agnostic, but I noticed that there are dialects like gpu/nvgpu/avx512. When are these dialects supposed to kick in? Also, since LLVM IR and SPIR-V are the only two lower-end ports, I wonder if LLVM IR generated for a CPU target would also work on a GPU back-end? If not, why not?
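To make the question concrete, here is a hand-written sketch (illustrative IR for this thread, not taken from the post; function names and shapes are made up) of the kind of step where the gpu dialect enters: a device-agnostic scf loop on one side, and the same computation after a mapping pass places its body inside a gpu.launch region on the other.

```mlir
// Device-agnostic: a plain loop in the scf dialect over memrefs.
func.func @vec_add(%a: memref<1024xf32>, %b: memref<1024xf32>,
                   %out: memref<1024xf32>) {
  %c0 = arith.constant 0 : index
  %c1 = arith.constant 1 : index
  %c1024 = arith.constant 1024 : index
  scf.for %i = %c0 to %c1024 step %c1 {
    %x = memref.load %a[%i] : memref<1024xf32>
    %y = memref.load %b[%i] : memref<1024xf32>
    %s = arith.addf %x, %y : f32
    memref.store %s, %out[%i] : memref<1024xf32>
  }
  return
}

// Device-specific: after mapping to the GPU, the loop body lives inside a
// gpu.launch region and the induction variable is rebuilt from block and
// thread ids (grid of 4 blocks x 256 threads covering the 1024 elements).
func.func @vec_add_gpu(%a: memref<1024xf32>, %b: memref<1024xf32>,
                       %out: memref<1024xf32>) {
  %c1 = arith.constant 1 : index
  %c4 = arith.constant 4 : index
  %c256 = arith.constant 256 : index
  gpu.launch blocks(%bx, %by, %bz) in (%gx = %c4, %gy = %c1, %gz = %c1)
             threads(%tx, %ty, %tz) in (%lx = %c256, %ly = %c1, %lz = %c1) {
    %base = arith.muli %bx, %lx : index
    %i = arith.addi %base, %tx : index
    %x = memref.load %a[%i] : memref<1024xf32>
    %y = memref.load %b[%i] : memref<1024xf32>
    %s = arith.addf %x, %y : f32
    memref.store %s, %out[%i] : memref<1024xf32>
    gpu.terminator
  }
  return
}
```

The sketch is schematic: a real pipeline would also handle kernel outlining and host/device memory transfers, and target-specific dialects such as nvgpu or avx512 would only appear after a step like this, on the way down to the LLVM/NVVM or SPIR-V exit.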

FengJungle commented 1 month ago

So, MHLO is for TensorFlow, and TOSA is for others like Torch?