etcd-io / etcd

Distributed reliable key-value store for the most critical data of a distributed system
https://etcd.io
Apache License 2.0

Define an official performance validation suite for etcd #16467

Open jmhbnz opened 1 year ago

jmhbnz commented 1 year ago

What would you like to be added?

The current performance validation process for etcd relies heavily on the Kubernetes scalability tests. While this approach has been valuable, we need to create an official performance validation suite for etcd that is maintained within the project and is therefore more accessible and better integrated into regular project activity.

In my mind this will include developing a comprehensive suite of performance tests that cover various real-world usage scenarios, integrating these tests into some form of on-demand or scheduled etcd CI pipeline, and making the suite accessible to work being undertaken, for example ensuring that a pull request proposing a Golang version upgrade can be validated for performance regressions.

With this issue I would like to capture the recent discussion in https://github.com/etcd-io/etcd/pull/16463#discussion_r1302775997 and the intent that we proceed with creating an independent and dedicated performance validation mechanism for etcd, and to ensure we do not lose sight of this work. We can use this issue to track any ideas and further conversation before starting any work.

References:

Why is this needed?


Sub task tracking

serathius commented 1 year ago

I talked with @mborsz, who is a member of Kubernetes SIG Scalability, about how we should approach performance testing of etcd. We came to the conclusion that we need three things:

Based on the above points, the work is:

geetasg commented 1 year ago

Should the etcd SLIs be part of the contract? Ref: https://docs.google.com/document/d/1NUZDiJeiIH5vo_FMaTWf0JtrQKCx0kpEaIIuPoj9P6A/edit#heading=h.tlkin1a8b8bl?

jmhbnz commented 1 year ago

Should the etcd SLIs be part of the contract? Ref: https://docs.google.com/document/d/1NUZDiJeiIH5vo_FMaTWf0JtrQKCx0kpEaIIuPoj9P6A/edit#heading=h.tlkin1a8b8bl?

Potentially - let's try to get some SLIs proposed initially and see how they fit in relation to the current contract. I have been meaning to sit down and list out potential SLIs here that we can cherry-pick from, feel free to do the same 🙏🏻

jmhbnz commented 10 months ago

Recording a discussion during KubeCon NA - along with identifying service level indicators as a starting point for this work, we can also take lessons from Kubernetes SIG Scalability to identify a set of dimensions within which our new performance validation suite will have an envelope: https://github.com/kubernetes/community/blob/master/sig-scalability/configs-and-limits/thresholds.md

We can review the older benchmark tooling to get a starting point on dimensions and iterate from there.

chaochn47 commented 6 months ago

I expect the performance test suite (or the Kubernetes traffic in the robustness tests) should help detect/prevent https://github.com/etcd-io/etcd/issues/17529.

Do we think there is a gap in general on performance testing? I can help address it.

@jmhbnz @serathius @ahrtr

jmhbnz commented 5 months ago

I expect the performance test suite (or the Kubernetes traffic in the robustness tests) should help detect/prevent #17529.

Do we think there is a gap in general on performance testing? I can help address it.

Thanks @chaochn47 - Yes, my expectation for the updated performance validation suite, once complete, is that we can catch issues like the one linked earlier. @ivanvc is currently getting some basic prow jobs running that will execute existing tools like tools/benchmark and tools/rw-heatmaps. We will need to think about whether any additional tooling, or further updates to the existing tooling, are required. If you have any ideas on that, please feel free to draft a feature issue so we can discuss 🙏🏻

serathius commented 5 months ago

I expect the performance test suite (or the Kubernetes traffic in the robustness tests) should help detect/prevent #17529.

Do we think there is a gap in general on performance testing? I can help address it.

I don't think so, performance and correctness are pretty different beasts that need different approaches. Checking correctness requires a lot of overhead, while performance measurement wants as little noise as possible to provide reproducible results.

What failed in #17529 was an unknown throughput breaking point that was hiding a correctness issue under it. I think we can use performance testing to discover more such breaking points, and then try to simulate them during correctness testing. This was already done in the e2e test that you provided in https://github.com/etcd-io/etcd/pull/17555. The failpoint beforeSendWatchResponse can be used to simulate slow response writing, which reproduces the same performance breaking point. Please see https://github.com/etcd-io/etcd/pull/17680/files where I managed to reproduce the issue using that breaking point.
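For anyone picking this up, here is a minimal sketch of how such a failpoint could be armed over gofail's HTTP endpoint, assuming an etcd binary built with failpoints enabled and started with GOFAIL_HTTP set; the address and sleep duration below are only illustrative, not the exact setup used in the linked PR:

```go
// Minimal sketch (assumptions: etcd built with failpoints enabled and started
// with GOFAIL_HTTP=127.0.0.1:22381; the sleep duration is illustrative).
// Arms the beforeSendWatchResponse failpoint so slow watch response writing
// can be simulated.
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// gofail exposes failpoints over HTTP; a PUT with a term such as
	// `sleep(100)` activates the named failpoint.
	req, err := http.NewRequest(http.MethodPut,
		"http://127.0.0.1:22381/beforeSendWatchResponse",
		strings.NewReader(`sleep(100)`))
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("failpoint set, status:", resp.Status)
}
```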

jmhbnz commented 3 months ago

Hi team - @ivanvc and I would like to propose the first service level indicator. We are keen for your feedback on this first one before we move on to proposing additional ones.

Latency of processing mutating API calls, measured as 99th percentile over last 5 minutes

Mutating calls being put or del. This is an etcd iteration on the first entry in https://github.com/kubernetes/community/blob/master/sig-scalability/slos/api_call_latency.md.

Please let us know what you think. If this first SLI is accepted we will update tools/benchmark and/or tools/rw-heatmaps as required to support measuring it and to enable a formal SLO to be created in the future.
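As a rough illustration of what measuring this SLI from the client side could look like (tools/benchmark already provides far more complete reporting; the endpoint, key space, and request count below are assumptions for the sketch):

```go
// Rough client-side sketch of the proposed SLI: p99 latency of mutating calls.
// Endpoint, key space, and request count are illustrative assumptions.
package main

import (
	"context"
	"fmt"
	"sort"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	const requests = 1000
	latencies := make([]time.Duration, 0, requests)
	for i := 0; i < requests; i++ {
		start := time.Now()
		// Mutating call: Put (a delete would be measured the same way).
		if _, err := cli.Put(context.Background(), fmt.Sprintf("/sli/key-%d", i), "value"); err != nil {
			panic(err)
		}
		latencies = append(latencies, time.Since(start))
	}

	// Sort the observed latencies and report the 99th percentile.
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
	p99 := latencies[(len(latencies)*99)/100]
	fmt.Printf("p99 mutating call latency: %v\n", p99)
}
```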

chaochn47 commented 3 months ago

If we intend to optimize etcd performance for Kubernetes, IMHO we should generate k8s-like traffic.

For example, the rw-heatmaps tool uses mixed read-only and write-only transactions and does not simulate watch traffic. Hopefully that is already on the roadmap.

serathius commented 3 months ago

If we intend to optimize etcd performance for Kubernetes, IMHO we should generate k8s-like traffic.

We need both. This issue is important, but not getting enough attention. Unfortunately I don't have enough time to lead this. Is there someone who could work on this with my guidance?

chaochn47 commented 3 months ago

/assign

I can help, since I have recently been looking into the etcd performance aspect.

jmhbnz commented 3 months ago

@serathius, @chaochn47 - Please let us know if the first etcd SLI drafted above looks OK. Agreed, watch is critical; there should be an SLI relating to it as well. We intend to work iteratively to propose a larger table of SLIs, as the k8s project has done.

serathius commented 3 months ago

@chaochn47 Can you start by creating a document where we can begin discussing the SLIs? Maybe just copy the K8s SLIs that make sense for etcd and we can iterate on that.

chaochn47 commented 3 months ago

@serathius This is a bare minimum doc, etcd performance work stream, that I drafted off the top of my head. I will fill in more details and a PoC soon.

marseel commented 3 months ago

Visualization - To spot regressions we need to be able to observe trends and compare performance. Aside from per-result reports, we should have a dashboard that aggregates results. At Google we use an internal version of https://github.com/google/mako which is great; unfortunately it looks like the project has been archived. Kubernetes uses http://perf-dash.k8s.io/ which is pretty limited and will require code changes to support etcd. Please let me know if you have better suggestions.

For perfdash, I can offer guidance. It should be fairly straightforward: essentially the benchmark needs to output data in a specific JSON format and copy it to a GCS bucket.
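For reference, a hedged sketch of emitting results in the kind of JSON data-item shape perfdash consumes; the field names below follow the Kubernetes perf-tests "PerfData" convention as I understand it, and should be verified against perfdash before relying on them:

```go
// Sketch of emitting benchmark results in a perfdash-style JSON format.
// Field names follow the Kubernetes perf-tests PerfData convention as an
// assumption to be verified; the metric values are illustrative.
package main

import (
	"encoding/json"
	"fmt"
)

type DataItem struct {
	Data   map[string]float64 `json:"data"`
	Unit   string             `json:"unit"`
	Labels map[string]string  `json:"labels,omitempty"`
}

type PerfData struct {
	Version   string            `json:"version"`
	DataItems []DataItem        `json:"dataItems"`
	Labels    map[string]string `json:"labels,omitempty"`
}

func main() {
	out := PerfData{
		Version: "v1",
		DataItems: []DataItem{{
			Data:   map[string]float64{"Perc50": 1.2, "Perc90": 2.5, "Perc99": 4.8},
			Unit:   "ms",
			Labels: map[string]string{"Metric": "put_latency"},
		}},
		Labels: map[string]string{"benchmark": "etcd-put"},
	}
	b, _ := json.MarshalIndent(out, "", "  ")
	fmt.Println(string(b))
}
```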

As mentioned in one of the comments in the doc above, etcd performance work stream [also, it would be great to make that doc public]:

I think that just replicating the kube-apiserver access pattern in benchmarking is not enough. The kube-apiserver access pattern to etcd is very specific, with very few clients connecting. Also, Kubernetes itself has some workarounds due to the poor performance of some etcd features. For example, looking at the events in Kubernetes:

So currently Kubernetes creates 1 lease per 1-minute window or 1k attached keys as a workaround. I suspect that most of that workaround was due to the issue and probably the poor performance of cleaning up expired leases.

Similarly, I recently chatted with Marek about range request performance with a limit. Currently, a range request with a limit has linear performance (linear in the number of keys in the range, not in the limit), as it counts all keys within the range. I don't think anyone would expect range requests with a limit to have such degraded performance.
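To make that call pattern concrete, a small sketch of the range-with-limit request being discussed, using clientv3 against an assumed local endpoint and an illustrative prefix:

```go
// Illustrative sketch of a range request with a limit (endpoint and prefix
// are assumptions). As described in the comment above, latency scales with
// the total number of keys in the range rather than with the limit.
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	start := time.Now()
	// Return at most 500 keys under the prefix.
	resp, err := cli.Get(context.Background(), "/registry/pods/",
		clientv3.WithPrefix(), clientv3.WithLimit(500))
	if err != nil {
		panic(err)
	}
	// resp.Count reports the total number of keys in the range, not just the
	// returned page, which is where the range-wide cost shows up.
	fmt.Printf("returned=%d total=%d took=%v\n", len(resp.Kvs), resp.Count, time.Since(start))
}
```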

Also, the etcd benchmarking doc mentions O(100k) watchers, which would never be replicated with a k8s-based access pattern. While that benchmark is nice, AFAIK it doesn't capture the fact that usually a client needs to make a range request first, before establishing a watch.

The SLIs mentioned in the doc above:

- Latency of processing mutating API calls for single key, measured as 99th percentile over last 5 minutes
- Latency of processing non-streaming read-only API calls, measured as 99th percentile over last 5 minutes
- Watch latency for a key prefix (from the moment when object is stored in database to when it's ready to be sent to a dedicated watcher), measured as 99th percentile over last 5 minutes
- Grant / Revoke Lease latency

make sense, but only in the context of limits similar to the Kubernetes limits. For example:

To spice things up, in Cilium we also use etcd quite heavily. Usually, we have O(5k) clients watching O(100k) keys with very small values - O(100 bytes) - as compared to k8s, which uses much larger values. We hit both of the issues mentioned above (large leases & linear range requests), and fixing them would probably benefit Kubernetes too. We can provide input on what SLIs / limits we would expect from the Cilium perspective and help with validating results later on (/cc @giorio94).
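As a starting point for the watch latency SLI listed above, a rough client-side sketch that approximates it by timing from a put to a dedicated watcher receiving the event (a simplification: it includes the put round trip and measures delivery to the client rather than "ready to be sent"); the endpoint and keys are assumptions:

```go
// Rough sketch approximating the watch latency SLI for a key prefix.
// Endpoint and keys are illustrative; the measurement includes the put
// round trip, so it is an upper bound on the SLI as defined above.
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Pin the watch to the next revision so the event cannot be missed even
	// if the watch stream is established after the Put below.
	gresp, err := cli.Get(ctx, "/sli/watch/key")
	if err != nil {
		panic(err)
	}
	wch := cli.Watch(ctx, "/sli/watch/", clientv3.WithPrefix(),
		clientv3.WithRev(gresp.Header.Revision+1))

	start := time.Now()
	if _, err := cli.Put(ctx, "/sli/watch/key", "value"); err != nil {
		panic(err)
	}

	// Block until the watcher observes the event; the elapsed time
	// approximates put-to-delivery watch latency for this single event.
	for resp := range wch {
		if len(resp.Events) > 0 {
			fmt.Printf("watch delivery latency: %v\n", time.Since(start))
			return
		}
	}
}
```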

giorio94 commented 1 month ago

We can provide input on what SLIs / limits we would expect from the Cilium perspective and help with validating results later on.

Sorry for the delay. In this respect, I've started preparing an initial document summarizing the scale/performance aspects and SLIs/SLOs from the Cilium perspective. Feel free to ask for any further questions or clarifications.

serathius commented 1 month ago

Thanks @giorio94, very detailed and thorough work. We will definitely include it in the etcd SLIs. Is the goal of this document just to clarify the Cilium requirements, or is there an intention to help etcd qualify it?

giorio94 commented 1 month ago

Thanks!

Is the goal of this document just to clarify the Cilium requirements, or is there an intention to help etcd qualify it?

I'm personally happy to help with the definition of the benchmark suite, although I don't have a lot of context on the etcd internals. Just a note that I'll be out of office for the next couple of weeks.