wojtek-t opened this issue 4 years ago
@wojtek-t is this something that I can pick up?
I will provide some details early next week.
hi... Would it be possible to provide the details?
@wojtek-t I am wondering if we should expose a new metric, similar to the ones proposed in pod resource metrics, that reports pod-level latency instead of relying on the aggregated histogram metrics we currently have. Such a metric should make it a lot easier to implement various eligibility criteria. Let me raise that on the KEP.
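(For illustration only, a per-pod metric along these lines would mean one time series per pod, which is what the cardinality concern in the next reply is about. This is a hypothetical sketch, not an existing scheduler metric; all names are made up.)

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

// Hypothetical per-pod latency metric, sketched only to illustrate the idea above.
// Labeling by namespace/pod creates one series per pod, so cardinality grows with
// the number of pods in the cluster.
var podSchedulingLatency = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "pod_scheduling_latency_seconds",
		Help: "End-to-end scheduling latency, reported per pod.",
	},
	[]string{"namespace", "pod"},
)
```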
Hmm - can we afford a metric stream per pod? We can have 150k pods in the cluster...
@ahg-g, just wanted to know: will you be creating a KEP for this, or is it still under discussion?
Scheduling latency depends on multiple external factors that are not under the scheduler's control, including:
To eliminate those dependencies, we define the following eligibility criteria:
The scheduler reports cumulative histogram metrics. The implementation will rely on three metrics:
The first two eligibility criteria are simple to enforce: `pod_scheduling_duration_seconds{attempts=0}`.
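(As a rough sketch of what that filter implies when combined with a windowed quantile over the cumulative histogram; the label value format, the quantile, and the window are assumptions for illustration, and the `_bucket` suffix is the standard Prometheus histogram convention:)

```go
package measurement

// Sketch only: 99th-percentile scheduling latency over a 5-minute window, restricted
// to pods scheduled on their first attempt. The attempts label value, quantile, and
// window are assumptions, not the agreed-upon definition.
const firstAttemptSchedulingP99 = `histogram_quantile(0.99,
  sum(rate(pod_scheduling_duration_seconds_bucket{attempts="0"}[5m])) by (le))`
```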
To enforce the last two criteria, we take the following approach:
Thanks @ahg-g. I was going through the description you provided and need a couple of clarifications. You mentioned making use of 3 metrics that the scheduler reports, all of them listed here: https://github.com/kubernetes/kubernetes/blob/44cd4fcedccbf35c2b674f9d53faa6fc3230b8fa/pkg/scheduler/metrics/metrics.go. Are these metrics reported by the scheduler and then stored in Prometheus? Also, the other SLOs I have seen make queries to Prometheus servers to get their metrics. Do you envisage doing the same thing for the scheduler latency measurement?
These metrics are exported by the scheduler; I am not sure how and where clusterloader scrapes them though.
I see that it is being invoked already for a measurement here:
It's even pulling the scheduler metrics here. I believe we should be able to use this to implement the logic you described above. WDYT @wojtek-t?
We don't have access to scheduler metrics in every environment, but I'm fine with assuming we do, at least initially, so that this is enforced in our OSS tests.
BTW - the eligibility criteria here are something we've never fully figured out for the pod startup SLO. We should do the same for that SLO for consistency, as this is effectively exactly what we want there.
@mm4tt - FYI
Thanks @wojtek-t. I started looking at this and am slightly confused as to which method to use for scraping this data. I see 2 different approaches:
1) metrics_for_e2e invokes the metricsGrabber interface, which calls APIs to get the data. So one approach could be to hit this API after a configured duration, get the values for the metrics we care about, and apply the eligibility-criteria logic described above across windows to measure performance. (I also see a different approach in pod_startup_latency, which registers an informer, uses the events, and calculates the transition latencies in the gather phase.)
2) Create a PrometheusMeasurement and write Prometheus queries to fetch the metrics, similar to what api_responsiveness does.
We should go with (2).
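(A minimal standalone sketch of what (2) boils down to, using the official Prometheus Go client directly rather than the clusterloader2 PrometheusMeasurement plumbing; the Prometheus address, quantile, window, and attempts label value are assumptions for illustration:)

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

// Same kind of query as sketched earlier in the thread: 99th-percentile scheduling
// latency for pods scheduled on their first attempt, over a 5-minute window.
const schedulerLatencyQuery = `histogram_quantile(0.99,
  sum(rate(pod_scheduling_duration_seconds_bucket{attempts="0"}[5m])) by (le))`

func main() {
	// Address of the in-cluster Prometheus is an assumption; clusterloader2 wires
	// this up for PrometheusMeasurement implementations.
	client, err := api.NewClient(api.Config{Address: "http://prometheus-k8s.monitoring.svc:9090"})
	if err != nil {
		panic(err)
	}
	promAPI := promv1.NewAPI(client)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Evaluate the query at "now"; a real measurement would evaluate it at the end
	// of the test and compare the result against the SLO threshold.
	result, warnings, err := promAPI.Query(ctx, schedulerLatencyQuery, time.Now())
	if err != nil {
		panic(err)
	}
	if len(warnings) > 0 {
		fmt.Println("warnings:", warnings)
	}
	fmt.Println("p99 scheduling latency:", result)
}
```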
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale /lifecycle frozen
@ahg-g - if you can provide more details based on our internal work