llvm / llvm-project

The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.
http://llvm.org

Implement sparse kernel benchmarks (moderate level, independent, starter) #51650

Open · aartbik opened this issue 3 years ago

aartbik commented 3 years ago
Bugzilla Link 52308
Version unspecified
OS Linux
CC @joker-eph, @SaurabhJha

Extended Description

The sparse compiler relies on FileCheck-based tests (https://github.com/llvm/llvm-project/tree/main/mlir/test/Dialect/SparseTensor) and "regression" tests (https://github.com/llvm/llvm-project/tree/main/mlir/test/Integration/Dialect/SparseTensor/CPU). These tests make sure that the generated IR is as expected and that the lowering runs "end to end". Most of these tests were developed in conjunction with particular features while those features were being added.

However, we don't have any benchmarks to measure the performance of the generated code (and to make sure this performance is not regressed by later changes to the sparse compiler).

This entry requests adding such benchmarks to MLIR, which requires (1) investigating the typical way in which LLVM at large integrates benchmarks, (2) finding interesting sparse kernels to implement and measure, and (3) integrating such tests into a continuous build (or at least a frequently run system).
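
As a concrete illustration of point (2), one canonical candidate kernel is sparse matrix-vector multiplication (SpMV) over a CSR matrix. The hand-written C++ reference below is only a sketch of the computation such a benchmark would time, not output of the sparse compiler; the function name and signature are made up for illustration.

```cpp
#include <cstdint>
#include <vector>

// Reference CSR sparse matrix-vector product: y = A * x.
// A is stored as (values, colIndices, rowOffsets), where rowOffsets has
// numRows + 1 entries and row i occupies [rowOffsets[i], rowOffsets[i+1]).
void spmvCSR(const std::vector<double> &values,
             const std::vector<int64_t> &colIndices,
             const std::vector<int64_t> &rowOffsets,
             const std::vector<double> &x, std::vector<double> &y) {
  for (size_t i = 0; i + 1 < rowOffsets.size(); ++i) {
    double sum = 0.0;
    for (int64_t k = rowOffsets[i]; k < rowOffsets[i + 1]; ++k)
      sum += values[k] * x[colIndices[k]];
    y[i] = sum;
  }
}
```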

This can act as a good independent starting point (since it does not require intimate knowledge of the inner workings of MLIR's sparse compiler). The level is moderate, however, since the engineering work is non-trivial.

SaurabhJha commented 2 years ago

Hi Aart,

LLVM uses the Google Benchmark library (https://github.com/google/benchmark) for microbenchmarks. The example use I looked at in particular is libc: https://github.com/llvm/llvm-project/tree/main/libc/benchmarks. We could start the same way for MLIR by adding a benchmarks directory and setting up Google Benchmark for it, similar to https://github.com/llvm/llvm-project/blob/main/libc/benchmarks/CMakeLists.txt.
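
For illustration, a minimal Google Benchmark harness might look like the sketch below. It times the hypothetical spmvCSR reference kernel sketched in the issue description above (in a real setup the benchmark would invoke code produced by the sparse compiler instead), and the CMake wiring would mirror the libc example just linked.

```cpp
#include <benchmark/benchmark.h>

#include <cstdint>
#include <vector>

// Hypothetical kernel under test (see the CSR SpMV sketch above); in a real
// setup this would call into code generated by the MLIR sparse compiler.
void spmvCSR(const std::vector<double> &values,
             const std::vector<int64_t> &colIndices,
             const std::vector<int64_t> &rowOffsets,
             const std::vector<double> &x, std::vector<double> &y);

static void BM_SpMV(benchmark::State &state) {
  // Placeholder data: an n x n identity matrix in CSR form.
  const int64_t n = state.range(0);
  std::vector<double> values(n, 1.0);
  std::vector<int64_t> colIndices(n), rowOffsets(n + 1);
  for (int64_t i = 0; i < n; ++i) {
    colIndices[i] = i;
    rowOffsets[i] = i;
  }
  rowOffsets[n] = n;
  std::vector<double> x(n, 2.0), y(n, 0.0);
  for (auto _ : state) {
    spmvCSR(values, colIndices, rowOffsets, x, y);
    benchmark::DoNotOptimize(y.data()); // keep the result live
  }
  state.SetItemsProcessed(state.iterations() * n);
}
// Measure a small and a larger problem size.
BENCHMARK(BM_SpMV)->Arg(1 << 10)->Arg(1 << 16);

BENCHMARK_MAIN();
```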

Additionally, there is an external repo for LLVM test suites, https://github.com/llvm/llvm-test-suite, which contains microbenchmarks, some example applications to test and benchmark, and hooks for external suites like SPEC (whose sources are not included in the test-suite repo itself). I don't think this model of keeping tests in a separate repo is relevant for the purposes of this ticket, but I wanted to mention it.

Later, we would want to integrate the MLIR benchmarks into Buildbot using these instructions: https://llvm.org/docs/HowToAddABuilder.html. That covers the third point of your numbered list.

Please let me know your thoughts or if you want more investigation before getting started here.

Thanks, Saurabh

aartbik commented 2 years ago

Hi Saurabh, thanks for joining this project! We are looking forward to your contributions. Yes, point (1) is one I needed to figure out myself, so your help here would be greatly appreciated. And indeed, I will have better ideas for steps (2) and (3) once we know how to set up a benchmarking framework. I am really glad you are helping out with this! Aart

SaurabhJha commented 2 years ago

Hi Aart,

I would like to start on this. I have contributed to Clang before, but this is going to be my first work in MLIR.

You have listed three things that we need to solve this issue. Should I start with 1), looking at how LLVM uses benchmarks, and post my findings here? We can decide on the next steps after that.

Thank you, Saurabh

aartbik commented 3 years ago

assigned to @SaurabhJha

llvmbot commented 1 year ago

Hi!

This issue may be a good introductory issue for people new to working on LLVM. If you would like to work on this issue, your first steps are:

1) Assign the issue to you.
2) Fix the issue locally.
3) Run the test suite locally.
3.1) Remember that the subdirectories under test/ create fine-grained testing targets, so you can e.g. use make check-clang-ast to only run Clang's AST tests.
4) Create a git commit.
5) Run git clang-format HEAD~1 to format your changes.
6) Submit the patch to Phabricator.
6.1) Detailed instructions can be found here.

For more instructions on how to submit a patch to LLVM, see our documentation.

If you have any further questions about this issue, don't hesitate to ask via a comment on this Github issue.

@llvm/issue-subscribers-good-first-issue

solo-daemon commented 3 months ago

Hi @Endilll @aartbik @SaurabhJha, if no one is working on this issue, I would like to take it on as a way to get started with the LLVM project. Can you please assign this issue to me? Also, @SaurabhJha, can you please share your progress on this issue?

aartbik commented 3 months ago

Having pure MLIR-source benchmarks has become a bit less interesting for our current effort. We are very interested in actually designing a suite in, e.g., PyTorch and using our current end-to-end, MLIR-based ML compiler to test performance. Is that of interest to you too?