openshift / cluster-kube-apiserver-operator

The kube-apiserver operator installs and maintains the kube-apiserver on a cluster
Apache License 2.0

WIP: bindata/alerts/slo: improve burnrate calculation #1744

Open dgrisonnet opened 2 months ago

dgrisonnet commented 2 months ago

The problem that I recently noticed with the existing expression is that when we compute the overall burnrate, we take the error ratio of read requests and sum it with the error ratio of write requests. But both of these ratios are calculated against their own request type, not against the total number of requests. Summing them is only correct when the proportions of write and read requests are equal.

For example, let's imagine a scenario with 10 requests, of which 4 (40%) are writes and 6 (60%) are reads. During a disruption only 50% of the writes succeed (2 errors), while 5 of the 6 reads succeed (1 error).

apiserver_request:burnrate1h{verb="write"} would be equal to 2/4 and apiserver_request:burnrate1h{verb="read"} would be 1/6. The sum of these, as computed by the alert today, would be 2/4 + 1/6 = 2/3, when in reality the overall error ratio is (2 + 1)/10 = 3/10. So there is quite a huge difference today when we don't account for the total number of requests.
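The discrepancy can be checked with a quick sketch (Python rather than PromQL, using the hypothetical request counts from the scenario above):

```python
# Hypothetical counts from the example: 10 requests total,
# 4 writes with 2 errors, 6 reads with 1 error.
write_total, write_errors = 4, 2
read_total, read_errors = 6, 1
total = write_total + read_total  # 10

# Per-verb burnrates, as the recording rules compute them today
burnrate_write = write_errors / write_total  # 2/4
burnrate_read = read_errors / read_total     # 1/6

# Naive sum used by the current alert expression
naive = burnrate_write + burnrate_read       # 2/3

# True overall error ratio across all requests
actual = (write_errors + read_errors) / total  # 3/10

print(naive, actual)
```

The naive sum (≈0.67) more than doubles the true error ratio (0.3) in this scenario.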

The only problem we will face with this change is that we won't be able to use the recording rules to set up different SLOs depending on the type of request. But this could always be addressed by changing the burn rate alert expression to the following, instead of modifying the recording rules:

        sum(
          apiserver_request:burnrate1h{verb="read"}
          *
          (
            sum by (cluster) (rate(apiserver_request_total{job="apiserver",verb=~"LIST|GET"}[1h]))
            /
            sum by (cluster) (rate(apiserver_request_total{job="apiserver"}[1h]))
          )
          +
          apiserver_request:burnrate1h{verb="write"}
          *
          (
            sum by (cluster) (rate(apiserver_request_total{job="apiserver",verb=~"POST|PUT|PATCH|DELETE"}[1h]))
            /
            sum by (cluster) (rate(apiserver_request_total{job="apiserver"}[1h]))
          )
        ) > (14.40 * 0.01000)
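The expression above weights each per-verb burnrate by that verb's share of total traffic before summing. A small sketch (Python, same hypothetical counts as the earlier example) showing why this weighted sum recovers the true overall error ratio:

```python
# Same hypothetical counts: 4 writes with 2 errors, 6 reads with 1 error.
write_total, write_errors = 4, 2
read_total, read_errors = 6, 1
total = write_total + read_total

burnrate_write = write_errors / write_total
burnrate_read = read_errors / read_total

# Weight each per-verb burnrate by that verb's share of total traffic,
# mirroring the ratio of rates in the PromQL expression above.
weighted = (burnrate_read * (read_total / total)
            + burnrate_write * (write_total / total))

# The weights cancel the per-verb denominators, so the weighted sum
# equals the overall error ratio (errors / total).
overall = (write_errors + read_errors) / total
assert abs(weighted - overall) < 1e-12
print(weighted)
```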
openshift-ci[bot] commented 2 months ago

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dgrisonnet

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

- ~~[OWNERS](https://github.com/openshift/cluster-kube-apiserver-operator/blob/master/OWNERS)~~ [dgrisonnet]

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.
vrutkovs commented 2 months ago

/cc

openshift-ci[bot] commented 1 month ago

@dgrisonnet: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/e2e-aws-ovn-upgrade | 56d01d8b784c9f32d3a63ded2206e96e06469c7a | link | true | /test e2e-aws-ovn-upgrade |
| ci/prow/e2e-aws-ovn-serial | 56d01d8b784c9f32d3a63ded2206e96e06469c7a | link | true | /test e2e-aws-ovn-serial |
| ci/prow/e2e-gcp-operator-single-node | 56d01d8b784c9f32d3a63ded2206e96e06469c7a | link | false | /test e2e-gcp-operator-single-node |
| ci/prow/e2e-aws-operator-disruptive-single-node | 56d01d8b784c9f32d3a63ded2206e96e06469c7a | link | false | /test e2e-aws-operator-disruptive-single-node |

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository. I understand the commands that are listed [here](https://go.k8s.io/bot-commands).
vrutkovs commented 1 month ago

That makes sense to me; the other burnrates (burnrate6h etc.) should be updated as well.