clarkmcc / cel-rust

Common Expression Language interpreter written in Rust
https://crates.io/crates/cel-interpreter
MIT License

Profile-Guided Optimization (PGO) benchmark report #83

Closed zamazan4ik closed 2 weeks ago

zamazan4ik commented 2 weeks ago

Hi!

I decided to test the Profile-Guided Optimization (PGO) technique to optimize the library's performance. For reference, results for other projects are available at https://github.com/zamazan4ik/awesome-pgo . Since PGO has helped many different libraries, I decided to apply it to cel-rust to see whether a performance win (or loss) can be achieved. Here are my benchmark results.

This information can be interesting for anyone who wants to achieve more performance with the library in their use cases.

Test environment

Benchmark

For PGO optimization I use the cargo-pgo tool. Release benchmark results were obtained with the taskset -c 0 cargo bench command. The PGO training phase is done with taskset -c 0 cargo pgo bench, and the PGO optimization phase with taskset -c 0 cargo pgo optimize bench.

taskset -c 0 is used to reduce the OS scheduler's influence on the results. All measurements are done on the same machine, with the same background "noise" (as much as I can guarantee).
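The workflow above can be sketched as a shell session. The cargo-pgo subcommands are taken from the report itself; the one-time setup lines (installing cargo-pgo and the llvm-tools-preview component it relies on) are assumptions about a typical rustup-managed environment:

```shell
# One-time setup (assumed): install the helper and the LLVM tools it needs
cargo install cargo-pgo
rustup component add llvm-tools-preview

# Baseline: plain Release benchmarks, pinned to CPU core 0
taskset -c 0 cargo bench

# PGO training phase: instrumented build, run benchmarks to collect profiles
taskset -c 0 cargo pgo bench

# PGO optimization phase: rebuild using the collected profiles, benchmark again
taskset -c 0 cargo pgo optimize bench
```

Pinning everything to the same core with taskset keeps the baseline and PGO runs comparable on the same machine.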

Results

I got the following results:

According to the results, PGO measurably improves the library's performance.

Further steps

At the very least, the library's users can find this performance report and decide to enable PGO for their applications if they care about the library's performance in their workloads. Maybe a small note somewhere in the documentation (the README file?) will be enough to raise awareness about this possible performance improvement.

Please don't treat this issue as an actual bug report - it's just a benchmark report (since Discussions are disabled for this repo).

Thank you.

clarkmcc commented 2 weeks ago

Perhaps I'm misreading the benchmarks but I see "Performance has regressed" in almost all cases when looking at your comparison between PGO and default. How should I interpret these results?

zamazan4ik commented 2 weeks ago

> Perhaps I'm misreading the benchmarks but I see "Performance has regressed" in almost all cases when looking at your comparison between PGO and default. How should I interpret these results?

Yeah, I need to explain a bit. You should read the "PGO optimized compared to Release" results - these are the results after applying PGO optimization, compared to the regular Release build. The "PGO instrumented compared to Release" results are shown just for reference - they come from the PGO training phase.

PGO is a two-step process:

1. Instrumentation (training): the binary is built with profiling instrumentation and run on a representative workload to collect runtime statistics.
2. Optimization: the binary is rebuilt, using the collected profiles to guide the compiler's optimizations.

Since collecting metrics at runtime has some overhead, performance regresses during the instrumentation phase. I show this information only to give an estimate of how much performance can regress during the training phase (this can be important for someone who wants to perform PGO instrumentation directly in a production environment).
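For anyone who wants to run the two phases by hand instead of via cargo-pgo, a minimal sketch using rustc's built-in PGO flags might look like the following. The profile directory path and the workload binary name are placeholders, not taken from this report:

```shell
# Phase 1 (instrumentation): running the resulting binary writes .profraw files
RUSTFLAGS="-Cprofile-generate=/tmp/pgo-data" cargo build --release

# Run a representative workload (binary name is hypothetical)
./target/release/my-benchmark

# Merge the raw profiles into a single .profdata file
llvm-profdata merge -o /tmp/pgo-data/merged.profdata /tmp/pgo-data

# Phase 2 (optimization): rebuild, letting the compiler use the merged profile
RUSTFLAGS="-Cprofile-use=/tmp/pgo-data/merged.profdata" cargo build --release
```

The quality of the final binary depends heavily on how representative the training workload is of real usage, which is why running the crate's own benchmarks as the workload is a reasonable default.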