zio / zio-http

A next-generation Scala framework for building scalable, correct, and efficient HTTP clients and servers
https://zio.dev/zio-http
Apache License 2.0

Enable benchmark monitoring with regression CI hook #2265

Closed jdegoes closed 8 months ago

jdegoes commented 1 year ago

We need JMH-based benchmarks to be run as part of CI, with automatic failure if performance on some benchmark falls below some threshold set in configuration.
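
For context, a minimal sketch of the kind of JMH benchmark such a CI gate would run (the class and measured operation are illustrative, not taken from zio-http):

```scala
// Illustrative JMH benchmark, runnable via the sbt-jmh plugin (`Jmh/run`).
import java.util.concurrent.TimeUnit
import org.openjdk.jmh.annotations._

@State(Scope.Benchmark)
@BenchmarkMode(Array(Mode.Throughput))
@OutputTimeUnit(TimeUnit.SECONDS)
@Warmup(iterations = 3)
@Measurement(iterations = 5)
@Fork(1)
class ExampleBenchmark {
  @Benchmark
  def buildStatusLine(): String =
    "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n" // stand-in for real server work
}
```

CI would then parse the generated report and fail when a benchmark's score drops below the configured threshold.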

jdegoes commented 1 year ago

/bounty $750

algora-pbc[bot] commented 1 year ago

💎 $750 bounty • ZIO

Steps to solve:

  1. Start working: Comment /attempt #2265 with your implementation plan
  2. Submit work: Create a pull request including /claim #2265 in the PR body to claim the bounty
  3. Receive payment: 100% of the bounty is received 2-5 days post-reward. Make sure you are eligible for payouts


Thank you for contributing to zio/zio-http!


| Attempt | Started (GMT+0) | Solution |
| --- | --- | --- |
| 🔴 @kitlangton | Aug 3, 2023, 11:28:32 AM | WIP |
| 🔴 @uzmi1 | Oct 28, 2023, 2:50:36 PM | WIP |
| 🔴 @nermalcat69 | Dec 6, 2023, 5:49:54 PM | WIP |
| 🟢 @alankritdabral | Mar 24, 2024, 2:12:38 PM | WIP |

kitlangton commented 1 year ago

I've been making some good progress on this in a separate repo. /attempt #2265

I'm going to make a GitHub action that will parse the JMH output and compare its performance against past run data (serialized and stored in a separate branch). If the benchmarks fall beneath the configured threshold, it will fail CI. I'm also going to try to have it post the benchmark results as a comment on the pull request.

This action can be a separate zio org project if it proves useful. It should be generic enough to attach to any zio project. (Also, the action itself is written with ZIO and Scala.js, so that's fun!)
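
To make the comparison step concrete, here is a minimal sketch of what such a check could look like; this is not the action's actual code. It assumes JMH was run with `-rf json` (whose report contains `benchmark` and `primaryMetric.score` fields) and uses zio-json for decoding; names like `BenchResult` and `regressions` are illustrative:

```scala
// Sketch: decode two JMH JSON reports and list benchmarks that fell below
// the baseline by more than `tolerance` (e.g. 0.10 for a 10% drop).
import zio.json._

object RegressionCheck {
  final case class Metric(score: Double)
  object Metric {
    implicit val decoder: JsonDecoder[Metric] = DeriveJsonDecoder.gen[Metric]
  }

  final case class BenchResult(benchmark: String, primaryMetric: Metric)
  object BenchResult {
    implicit val decoder: JsonDecoder[BenchResult] = DeriveJsonDecoder.gen[BenchResult]
  }

  def regressions(currentJson: String, baselineJson: String, tolerance: Double): Either[String, List[String]] =
    for {
      current  <- currentJson.fromJson[List[BenchResult]]
      baseline <- baselineJson.fromJson[List[BenchResult]]
    } yield {
      val base = baseline.map(r => r.benchmark -> r.primaryMetric.score).toMap
      current.collect {
        case r if base.get(r.benchmark).exists(b => r.primaryMetric.score < b * (1 - tolerance)) =>
          r.benchmark // below threshold: should fail CI
      }
    }
}
```

With the previous report serialized in a separate branch as described above, the CI step reduces to reading two files and failing the job when the returned list is non-empty.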

kitlangton commented 1 year ago

Making some progress: [screenshot]

Let me know if you have any design thoughts/questions :)

plokhotnyuk commented 1 year ago

Thank you for your serious attitude towards zio-http performance! It is already one of the fastest contemporary Scala web servers; just see the results here and here.

Here are a couple of ideas for JMH benchmarking:

  1. Use the gc and perfasm JMH profilers to store allocation rates as auxiliary metrics, and the JIT-generated disassembly of the hottest spots, together with the throughput results (see the sketch after this list).
  2. Use the JMH visualizer to see main and auxiliary metrics together with their confidence ranges, and compare them interactively using references to the raw .json files on GitHub, like here
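
On the first point, JMH can also be driven programmatically, so attaching profilers and emitting a raw .json report for the visualizer takes only a few lines. A sketch, assuming jmh-core is on the classpath; the benchmark-selection regex is illustrative:

```scala
// Sketch: run benchmarks with the gc profiler and write a JSON report
// that the JMH visualizer can consume.
import org.openjdk.jmh.profile.GCProfiler
import org.openjdk.jmh.results.format.ResultFormatType
import org.openjdk.jmh.runner.Runner
import org.openjdk.jmh.runner.options.OptionsBuilder

object ProfiledRun {
  def main(args: Array[String]): Unit = {
    val opts = new OptionsBuilder()
      .include(".*Benchmark.*")            // illustrative selection regex
      .addProfiler(classOf[GCProfiler])    // allocation rate as an auxiliary metric
      .resultFormat(ResultFormatType.JSON) // raw .json for the JMH visualizer
      .result("jmh-result.json")
      .build()
    // perfasm (org.openjdk.jmh.profile.LinuxPerfAsmProfiler) can be added the
    // same way on Linux hosts with perf installed.
    new Runner(opts).run()
  }
}
```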

Also, my 2 cents for HTTP-server benchmarking:

  1. Measure latency for different combinations of fixed throughput rates and numbers of open connections using wrk2 (a minimal invocation sketch follows this list). Here is a great talk by @giltene about understanding latency and measuring application responsiveness.
  2. Use async-profiler during benchmarking; it shows almost everything that is happening under the hood: JVM, C++, and kernel stack frames, plus virtual and interface calls (vtable and itable). If the results are stored in .jfr format, they can be converted to Netflix's FlameScope format and browsed interactively with 10 ms granularity to observe the server's different modes of operation (warming up, GC-ing, etc.).
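
On the wrk2 point, a minimal sketch of driving such a sweep from Scala; it assumes a wrk2 binary (installed as `wrk`) on the PATH and a locally running server, and all rates, connection counts, and the endpoint URL are illustrative:

```scala
// Sketch: sweep a few fixed request rates and connection counts with wrk2.
// wrk2's -R flag fixes the throughput; --latency prints the full HdrHistogram percentiles.
import scala.sys.process._

object LatencySweep {
  def main(args: Array[String]): Unit =
    for {
      rate        <- Seq(1000, 10000, 50000) // requests per second
      connections <- Seq(16, 128)
    } {
      val cmd = Seq("wrk", "-t4", s"-c$connections", "-d60s", s"-R$rate",
        "--latency", "http://localhost:8080/plaintext")
      println(s"== rate=$rate conns=$connections ==")
      cmd.! // stream wrk2's output to the console
    }
}
```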
kitlangton commented 1 year ago

UPDATE: I've created the following JMH Benchmark Action repository.

[screenshot]

One bigger concern is that these benchmarks take a good deal of time to run, even on a relatively powerful M2 Mac. Doing this in CI, even configured with fewer iterations/forks, will take a good deal of time. One option would be to run this action only when a benchmark label is added to the PR (e.g. a job-level `if:` condition on the PR's labels). Then, the maintainers can opt in to benchmarks when it seems relevant to the work being done. (This is something that can be done with a few lines in the workflow yaml, so it doesn't have to be part of the main action.)

UPDATE: And, as usual, the complexity unfurls itself as you approach the end. It turns out it's not as simple to merely "comment on a pull request" as it first appeared (more info here: https://github.com/zio/zio-http/pull/2369). But I have spotted a workaround.

Another thought from that PR: There's a lot of variance in certain very high ops/s benchmarks, so I should probably take the standard deviation into account when attempting to identify a regression, instead of just naively comparing the final scores.
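
A minimal sketch of what a noise-aware check could look like (illustrative, not the action's code): take each run's score error as its uncertainty, and flag only drops that exceed the combined error:

```scala
object NoiseAwareCheck {
  final case class Score(mean: Double, error: Double) // JMH score and its confidence half-width

  // Flag a regression only when the drop exceeds the combined
  // measurement uncertainty of the two runs by a factor k.
  def isRegression(baseline: Score, current: Score, k: Double = 1.0): Boolean = {
    val drop  = baseline.mean - current.mean
    val noise = math.hypot(baseline.error, current.error) // sqrt(e1^2 + e2^2)
    drop > k * noise
  }

  // e.g. isRegression(Score(1.0e6, 5.0e4), Score(9.6e5, 4.0e4)) == false: within noise
}
```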

kitlangton commented 1 year ago

/attempt #2265

kitlangton commented 1 year ago

Alrighty. A summary of open design questions:

Ahmadkhan02 commented 1 year ago

@kitlangton are you still on this or can I make an attempt?

uzmi1 commented 1 year ago

/claim #2502

uzmi1 commented 1 year ago

/attempt #2265

uzmi1 commented 1 year ago

Hi @jdegoes, please check this solution.

Bug Description: The current implementation lacks benchmark monitoring, and there is no CI hook for regression testing. This creates a gap in performance monitoring, potentially leading to undetected regressions and performance issues, and makes it challenging to identify changes that negatively impact system performance.

Impact:

  1. Undetected performance regressions: without benchmark monitoring, performance regressions may go unnoticed, leading to degraded system performance.
  2. Missing CI hook: without a CI hook for regression testing, changes to the codebase may not undergo performance testing during the CI/CD pipeline.

Steps to Reproduce:

  1. Inspect the current monitoring setup: observe the absence of benchmark monitoring, and verify that there is no CI hook for performance regression testing.
  2. Attempt to enable benchmark monitoring: explore the system configuration or relevant scripts, and check for existing performance-related CI hooks.
  3. Verify the implementation: execute benchmark monitoring after attempting to enable it, and check whether the CI hook triggers regression testing for performance-related changes.

Expected Behaviour:

  1. Benchmark monitoring enabled: after the task is completed, benchmark monitoring should be active and capturing relevant performance metrics.
  2. CI hook for regression testing: a CI hook should be in place to trigger regression testing for performance-related changes in the codebase.

Suggested Solution:

  1. Benchmark monitoring: integrate a suitable benchmark monitoring tool into the system configuration, and configure it to capture relevant performance metrics.
  2. CI hook for regression testing: implement a CI hook that triggers regression testing for performance-related changes, and integrate it into the existing CI/CD pipeline.

Code Implementation Example: an example CI/CD configuration (GitLab CI):

```yaml
stages:
  - test

benchmark:
  stage: test
  script:
    - ./run_benchmarks.sh # e.g. the PoC script below
```

Ensure the selected benchmark monitoring tool aligns with system requirements, and regularly review and update the benchmark metrics being monitored to reflect evolving performance expectations.

Reported by: Uzma Qureshi

Proof of Concept: a simple proof of concept (PoC) to enable benchmark monitoring. Note that this is a generic example; you may need to customize it based on your specific environment and the benchmark monitoring tool you choose.

Assuming you are using a Unix-like system and want to integrate ApacheBench (ab) for benchmarking, here's a basic script:

run_benchmarks.sh:

```bash
#!/bin/bash

# Set variables
TARGET_URL="http://your-api-endpoint.com/"
BENCHMARK_RESULTS_FILE="benchmark_results.txt"

# Run ApacheBench (ab)
ab -n 100 -c 10 "$TARGET_URL" > "$BENCHMARK_RESULTS_FILE"

# Print benchmark results
cat "$BENCHMARK_RESULTS_FILE"
```

This script does the following:

  1. Sends 100 requests (-n 100) with a concurrency of 10 (-c 10) to the specified API endpoint ($TARGET_URL).
  2. Saves the benchmark results in a file named benchmark_results.txt.
  3. Prints the benchmark results to the console.

Remember to replace "http://your-api-endpoint.com/" with the actual URL you want to benchmark.

algora-pbc[bot] commented 1 year ago

@uzmi1: Reminder that in 7 days the bounty will become up for grabs, so please submit a pull request before then 🙏

uzmi1 commented 1 year ago

/claim #2265

algora-pbc[bot] commented 1 year ago

The bounty is up for grabs! Everyone is welcome to /attempt #2265 🙌

nermalcat69 commented 12 months ago

/attempt #2265

algora-pbc[bot] commented 11 months ago

@nermalcat69: Reminder that in 7 days the bounty will become up for grabs, so please submit a pull request before then 🙏

algora-pbc[bot] commented 11 months ago

The bounty is up for grabs! Everyone is welcome to /attempt #2265 🙌

alankritdabral commented 8 months ago

After digging a little, I found a few flaws in the build:

  1. The path to the benchmark files is outdated, which results in no JMH benchmarks running in ci.yml.
  2. JDK version 8 in ci.yml causes an error during the JMH run.
  3. UtilBenchmark runs in avgt mode, unlike the others, so using grep "thrpt" will fail.
alankritdabral commented 8 months ago

/attempt 2265

alankritdabral commented 8 months ago

Currently, we're running each benchmark in parallel for both the current branch and the base branch, which doubles the time required. The approach I'm considering is to run the base benchmarks on each push to the main branch and save the results as an artifact. During a pull request run, we execute the benchmarks for the current branch, download the base artifacts, compare the current benchmarks against the base benchmarks using a shell script, and upload the results. If the difference exceeds a certain threshold, we fail CI.

I will divide the task into two PRs.

  • [x] Firstly, run benchmarks on push to the main branch only and save them as cache. #2750
  • [ ] Secondly, run benchmarks on each pull request and compare its results with the base benchmarks and show the difference. #2751
alankritdabral commented 8 months ago

@jdegoes I have created a pull request for the current issue: pull_request. Hope you find this PR useful. :smile:

alankritdabral commented 8 months ago

Hey @jdegoes, #2751 would completely close this issue, as I have divided the solution into two PRs as stated in the comment above. Can you review it? It's in working state.