lplewa opened 1 month ago
Migrate to Google Benchmark:
- Offers more features and is "an industry standard".
- Similar to GTEST, which is already in use.
- Many of the features we would have to implement ourselves while sticking to ubench are included out of the box.
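As a rough illustration of the migration (not code from the repository), a minimal Google Benchmark case could look like the sketch below. Plain malloc/free stand in for the UMF pool calls the real benchmarks would exercise, and the benchmark name and size arguments are assumptions.

```cpp
// Minimal Google Benchmark sketch. Assumption: real benchmarks would call
// UMF pool APIs instead of the plain malloc/free used here as a stand-in.
#include <benchmark/benchmark.h>
#include <cstdlib>

static void BM_AllocFree(benchmark::State& state) {
    const size_t size = static_cast<size_t>(state.range(0));
    for (auto _ : state) {
        void* ptr = std::malloc(size);    // stand-in for a UMF pool allocation
        benchmark::DoNotOptimize(ptr);    // keep the allocation from being optimized away
        std::free(ptr);                   // stand-in for the matching free
    }
    state.SetItemsProcessed(state.iterations());
}

// Run the same benchmark for several allocation sizes.
BENCHMARK(BM_AllocFree)->Arg(64)->Arg(4096)->Arg(1 << 20);

BENCHMARK_MAIN();
```

A binary built this way can also emit JSON results (for example with --benchmark_out=results.json --benchmark_out_format=json), which is the kind of output benchmark automation such as GitHub Action Benchmark can consume.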
I used nanobench before, and found it very easy to use and quick. Way better than the colossal beast that is Google Benchmark.
L0, OpenCL and UR use the benchmark tooling in compute-benchmarks. The benefit there is that everything is in one place and all the results across all the different projects use the same format.
Categories of Performance Tests
Performance tests can be divided into two main categories:
I intend to begin with Artificial Benchmarks, but I'm open to feedback on this approach.
1. Artificial Benchmarks
Objective: Create controlled benchmarks to evaluate UMF configurations under various workloads.
Current Status: The existing benchmarks use the ubench framework, which has limited functionality.
Proposal: Migrate to Google Benchmark, which offers more features and is similar to GTEST, already in use (see the list at the top of this issue).
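To make "various workloads" concrete, here is a hedged sketch of how one artificial benchmark could be swept over allocation sizes and thread counts using Google Benchmark fixtures. The fixture name, the batch size, and the empty SetUp/TearDown hooks are placeholders, and plain malloc/free again stand in for the UMF pool calls a real benchmark would make.

```cpp
// Sketch of an "artificial" workload: one allocation pattern swept over
// allocation sizes and thread counts. Names and the setup/teardown hooks are
// illustrative; a real version would create a concrete UMF pool in SetUp().
#include <benchmark/benchmark.h>
#include <cstdlib>
#include <vector>

class PoolWorkload : public benchmark::Fixture {
  public:
    void SetUp(benchmark::State&) override {
        // Placeholder: create the UMF pool/provider configuration under test.
    }
    void TearDown(benchmark::State&) override {
        // Placeholder: destroy the pool/provider.
    }
};

BENCHMARK_DEFINE_F(PoolWorkload, AllocBatchFree)(benchmark::State& state) {
    const size_t size = static_cast<size_t>(state.range(0));
    std::vector<void*> ptrs(1024);               // batch of allocations per iteration
    for (auto _ : state) {
        for (auto& p : ptrs) {
            p = std::malloc(size);               // stand-in for the pool allocation call
            benchmark::DoNotOptimize(p);
        }
        for (auto& p : ptrs) {
            std::free(p);                        // stand-in for the pool free call
        }
    }
}

// Sweep allocation sizes from 64 B to 1 MiB and run with 1, 4, and 16 threads.
BENCHMARK_REGISTER_F(PoolWorkload, AllocBatchFree)
    ->RangeMultiplier(4)
    ->Range(64, 1 << 20)
    ->Threads(1)
    ->Threads(4)
    ->Threads(16);

BENCHMARK_MAIN();
```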
2. Real Use Benchmarks
Objective: Benchmark UMF in real-world applications to assess performance in practical scenarios.
Current Need:
Performance Testing Framework
We plan to employ GitHub Action Benchmark to automate performance testing.
Features:
Testing Strategy
- Benchmarks will run on pull requests and be compared against results from the main branch.
- Baseline results will be collected from the main branch.
Next Steps
To implement this performance testing plan, I will begin by migrating the existing benchmarks from ubench to Google Benchmark and integrating GitHub Action Benchmark with our GitHub Actions CI/CD. Once that is complete, we will start extending the list of artificial benchmarks while also identifying real-use ones.
Along with this performance testing task, we are planning to introduce CTL. CTL is an interface for examination and modification; it will be useful for reading internal statistics from providers/pools, which can be used as additional performance counters. More details about CTL will be provided in a separate issue.
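Purely as a hypothetical sketch of how such a statistic might surface once CTL exists: the umf_ctl_read_u64 helper and the "pool.stats.alloc_count" path below are invented placeholders, while the user-counter mechanism (state.counters) is standard Google Benchmark.

```cpp
// Hypothetical sketch: exposing a pool statistic read through the future CTL
// interface as an extra benchmark counter. umf_ctl_read_u64() and the
// "pool.stats.alloc_count" path are invented placeholders, not an existing API.
#include <benchmark/benchmark.h>
#include <cstdint>
#include <cstdlib>

// Placeholder for a CTL query; a real implementation would ask the provider/pool.
static uint64_t umf_ctl_read_u64(const char* /*path*/) { return 0; }

static void BM_AllocWithCtlCounters(benchmark::State& state) {
    for (auto _ : state) {
        void* p = std::malloc(256);   // stand-in for a UMF pool allocation
        benchmark::DoNotOptimize(p);
        std::free(p);
    }
    // Publish an internal statistic next to the timing results.
    state.counters["pool_alloc_count"] = benchmark::Counter(
        static_cast<double>(umf_ctl_read_u64("pool.stats.alloc_count")),
        benchmark::Counter::kIsRate);
}
BENCHMARK(BM_AllocWithCtlCounters);

BENCHMARK_MAIN();
```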
Feedback Requested
We welcome any input on the following: