kubernetes / perf-tests

Performance tests and benchmarks
Apache License 2.0

Define and implement scheduling latency SLO #1500

Open wojtek-t opened 4 years ago

wojtek-t commented 4 years ago

@ahg-g - if you can provide more details based on our internal work

vamossagar12 commented 4 years ago

@wojtek-t is this something that I can pick up?

ahg-g commented 4 years ago

I will provide some details early next week.

vamossagar12 commented 4 years ago

hi... Would it be possible to provide the details?

ahg-g commented 3 years ago

@wojtek-t I am wondering if we should expose a new metric, similar to the ones proposed in pod resource metrics, that reports pod-level latency instead of relying on the aggregated histogram metrics we currently have. Such a metric should make it a lot easier to implement various eligibility criteria. Let me raise that on the KEP.

wojtek-t commented 3 years ago

Hmm - can we afford a metric stream per pod? We can have 150k pods in the cluster...
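To make the cardinality trade-off being discussed here concrete, below is a minimal, hypothetical sketch of what a per-pod latency metric registered with client_golang could look like. The metric name and labels are illustrative assumptions, not the scheduler's actual metrics:

```go
// Hypothetical sketch of a pod-level scheduling latency metric.
// Names and labels are assumptions for illustration only.
package schedmetrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var podSchedulingLatency = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Namespace: "scheduler",
		Name:      "pod_scheduling_latency_seconds", // hypothetical name
		Help:      "Scheduling latency reported per pod.",
	},
	[]string{"namespace", "pod"}, // one time series per pod
)

func init() {
	prometheus.MustRegister(podSchedulingLatency)
}

// recordLatency would be called once per scheduled pod. With ~150k pods this
// yields ~150k time series for this single metric, which is the cardinality
// concern raised above.
func recordLatency(namespace, pod string, latency time.Duration) {
	podSchedulingLatency.WithLabelValues(namespace, pod).Set(latency.Seconds())
}
```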

vamossagar12 commented 3 years ago

@ahg-g , just wanted to know: will you be creating a KEP for this, or is it still under discussion?

ahg-g commented 3 years ago

Eligibility Criteria

Scheduling latency depends on several external factors that are not under the scheduler’s control, including:

To eliminate those dependencies, we define the following eligibility criteria:

Implementation

The scheduler reports cumulative histogram metrics. The implementation will rely on three metrics:

The first two eligibility criteria are simple to enforce: pod_scheduling_duration_seconds{attempts=0}.
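For illustration only, a PromQL expression built on that selector might look like the sketch below (embedded as a Go constant to match the surrounding tooling; the scheduler_ prefix, the _bucket suffix, the 5m window, and the 99th-percentile target are assumptions, not settled decisions):

```go
// Sketch of a query restricting the cumulative histogram to pods that were
// scheduled on their first attempt, per the attempts="0" selector above.
// Metric and label names follow the scheduler's exported histogram and may
// differ between Kubernetes versions.
package slo

const schedulingLatencyP99Query = `
histogram_quantile(
  0.99,
  sum(rate(scheduler_pod_scheduling_duration_seconds_bucket{attempts="0"}[5m])) by (le)
)`
```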

To enforce the last two criteria, we take the following approach:

vamossagar12 commented 3 years ago

Thanks @ahg-g . I was going through the description you provided and need a couple of clarifications. You mentioned making use of 3 metrics which the scheduler reports; all of them are defined here: https://github.com/kubernetes/kubernetes/blob/44cd4fcedccbf35c2b674f9d53faa6fc3230b8fa/pkg/scheduler/metrics/metrics.go. These metrics are reported by the scheduler and stored in Prometheus? Also, the other SLOs that I have seen make queries to Prometheus servers to get their metrics. Do you envisage doing the same thing for the scheduling latency measurement?

ahg-g commented 3 years ago

These metrics are exported by the scheduler; I am not sure how and where clusterloader scrapes them, though.

vamossagar12 commented 3 years ago

I see that it is being invoked already for a measurement here:

https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/pkg/measurement/common/metrics_for_e2e.go#L65-L72

It's even pulling the scheduler metrics here. I believe we should be able to use this to implement the logic you described above. WDYT @wojtek-t ?

wojtek-t commented 3 years ago

We don't have access to scheduler metrics in every environment. But I'm fine with assuming we do, at least initially, so that this gets enforced in our OSS tests.

BTW - the eligibility criteria here are something we never fully figured out for the pod startup SLO. We should do the same for that SLO for consistency, as this is effectively exactly what we want there.

@mm4tt - FYI

vamossagar12 commented 3 years ago

Thanks @wojtek-t . I started looking at this and am slightly confused about which method to use for scraping this data. I see 2 different approaches:

1) metrics_for_e2e invokes the metricsGrabber interface, which calls the APIs to get the data. So one approach could be to hit this API after a configured duration of time, get the values for the metrics we care about, and apply the logic from the eligibility criteria across windows to measure the performance. (I also see a different approach in pod_startup_latency, which registers an informer, uses the events, and calculates the transition latencies in the gather phase.)

2) Create a PrometheusMeasurement and write Prometheus queries to fetch the metrics, similar to the ones used in api_responsiveness.

wojtek-t commented 3 years ago

We should go with (2).
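For what option (2) could look like in practice, here is a hedged sketch that queries Prometheus for the scheduling latency histogram, in the spirit of the api_responsiveness measurement. It uses the generic Prometheus Go client purely for illustration; clusterloader2's PrometheusMeasurement plumbing wraps this kind of query behind its own executor, and the metric name and Prometheus address are assumptions:

```go
// Illustrative sketch only: query Prometheus for a scheduling latency quantile.
// Not the actual clusterloader2 measurement implementation.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	// Address of the in-cluster Prometheus set up for the test (assumed).
	client, err := api.NewClient(api.Config{Address: "http://prometheus-k8s.monitoring.svc:9090"})
	if err != nil {
		panic(err)
	}
	promAPI := promv1.NewAPI(client)

	// p99 scheduling latency for first-attempt pods over the last 5 minutes
	// (metric name assumed from the scheduler's exported histogram).
	query := `histogram_quantile(0.99,
	  sum(rate(scheduler_pod_scheduling_duration_seconds_bucket{attempts="0"}[5m])) by (le))`

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	result, warnings, err := promAPI.Query(ctx, query, time.Now())
	if err != nil {
		panic(err)
	}
	if len(warnings) > 0 {
		fmt.Println("warnings:", warnings)
	}
	fmt.Println("p99 scheduling latency:", result)
}
```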

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

wojtek-t commented 3 years ago

/remove-lifecycle stale
/lifecycle frozen