Closed AsakusaRinne closed 3 months ago
Thank you for your interest in the buddy compiler. We are currently developing an E2E compilation path from DSL (Domain-Specific Language) to DSA (Domain-Specific Architecture). Our approach includes standalone DSLs, graph-level representations, configuration-based optimization mechanisms, and software-hardware co-design for DSAs, which sets us apart from other AI compilers.
Since we are a research team, the open-source version may lag behind our lab versions. In line with the publication process of our research papers, we plan to gradually release our optimization and tuning frameworks, DSL frameworks, unified programming models, and co-design frameworks in the future.
In terms of performance, our lab version is still 3-5x slower than the SOTA frameworks. We recognize that there is still room for optimization, and our efforts to enhance performance are ongoing😄
Thank you very much for your answer!
Is your feature request related to a problem? Please describe.
Hi all, buddy-mlir seems to be an interesting and hard-core project. However, after reading some of the docs, I couldn't identify its advantages compared with other AI frameworks. Is there any plan to provide speed benchmark data on popular models, such as LLaMA, LLaVA, and DiT? I'm not expecting buddy-mlir to beat other frameworks right now, but I think such data is important for deciding whether to start using it or to participate in its development.
Thank you for all your hard work.
Describe the solution you'd like
Some speed benchmark data on popular models.
Describe alternatives you've considered
Other metrics, such as memory usage.