kubernetes / apiserver

Library for writing a Kubernetes-style API server.

etcd backend sharding support #63

Closed wilsonwang371 closed 1 week ago

wilsonwang371 commented 3 years ago

As the number of nodes in our k8s cluster has increased significantly, etcd has gradually become our cluster's performance bottleneck. apiserver already supports storing different object types in different etcd clusters, but in our case the bottleneck is the sheer number of pod objects, so we are investigating whether it is possible to use multiple etcd shards within a single resource type.

Some companies already shard based on the key, similar to what TiKV does. However, I want to discuss the possibility of choosing etcd shards based on a key hash.

For example, given keys k1 and k2, after computing their md5 hashes and applying a modulo, k1 would be placed on etcd shard1 and k2 on etcd shard2. This would balance the load across the etcd clusters and yield higher throughput.
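To make the hash-and-mod placement concrete, here is a minimal sketch in Go; this is not apiserver code, and the function name, key paths, and shard count are assumptions for illustration only:

```go
package main

import (
	"crypto/md5"
	"encoding/binary"
	"fmt"
)

// shardForKey maps a storage key such as "/registry/pods/default/k1" to a
// shard index in [0, numShards) using md5 followed by a modulo, as described
// above. Any stable hash would work equally well.
func shardForKey(key string, numShards int) int {
	sum := md5.Sum([]byte(key))
	// Use the first 8 bytes of the digest as an unsigned integer.
	h := binary.BigEndian.Uint64(sum[:8])
	return int(h % uint64(numShards))
}

func main() {
	for _, k := range []string{"/registry/pods/default/k1", "/registry/pods/default/k2"} {
		fmt.Printf("%s -> shard %d\n", k, shardForKey(k, 2))
	}
}
```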

AFAIK, apiserver uses only a limited subset of the operations etcd supports: Range, Txn, and Watch. Moreover, the Txn requests are simple single-key transactions performing one Create, Update, or Delete.
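For reference, such a single-key conditional Txn has roughly the following shape when written against the etcd clientv3 API. This is a simplified sketch of the pattern, not the actual etcd3 store code:

```go
import (
	"context"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// conditionalPut writes newData to key only if the key is still at expectedRev.
// The If/Then/Else single-key transaction is how optimistic concurrency is
// typically enforced on top of etcd.
func conditionalPut(ctx context.Context, cli *clientv3.Client, key string, newData []byte, expectedRev int64) (bool, error) {
	resp, err := cli.Txn(ctx).
		If(clientv3.Compare(clientv3.ModRevision(key), "=", expectedRev)).
		Then(clientv3.OpPut(key, string(newData))).
		Else(clientv3.OpGet(key)).
		Commit()
	if err != nil {
		return false, err
	}
	return resp.Succeeded, nil
}
```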

With the proposed sharding, single Create/Update/Delete operations seem straightforward. But for Watch and Range requests, apiserver would need to maintain a connection to every etcd shard and merge the results (a rough sketch follows). Could this cause any issues with Range/Watch performance?
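A hedged sketch of what the cross-shard Range fan-out could look like under this proposal. The shards slice and rangeAllShards are hypothetical names; a real implementation would likely query the shards concurrently, and Watch would need the same per-shard fan-out with the event streams merged into one channel:

```go
import (
	"context"
	"sort"

	"go.etcd.io/etcd/api/v3/mvccpb"
	clientv3 "go.etcd.io/etcd/client/v3"
)

// rangeAllShards lists a key prefix from every shard and returns the merged,
// key-sorted result. Note that each shard answers at its own revision, which
// is exactly why a composite resource version (discussed below) is needed.
func rangeAllShards(ctx context.Context, shards []*clientv3.Client, prefix string) ([]*mvccpb.KeyValue, error) {
	var merged []*mvccpb.KeyValue
	for _, cli := range shards {
		resp, err := cli.Get(ctx, prefix, clientv3.WithPrefix())
		if err != nil {
			return nil, err
		}
		merged = append(merged, resp.Kvs...)
	}
	// Re-sort by key so the merged result looks like a single Range response.
	sort.Slice(merged, func(i, j int) bool {
		return string(merged[i].Key) < string(merged[j].Key)
	})
	return merged, nil
}
```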

When apiserver holds multiple etcd shard connections, it also needs to track each shard's latest revision. For that, I am thinking of changing the APIObjectVersioner object so that a resource version vector is supported, something like "{Shard1:Rev1,Shard2:Rev2}", so that the revision position of each shard connection is preserved.
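A hedged sketch of encoding and decoding such a resource version vector. The "{Shard1:Rev1,Shard2:Rev2}" wire format simply follows the example string above and is not an existing apiserver convention; the function names are made up for illustration:

```go
import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// encodeRVVector turns {shard -> revision} into a deterministic string such as
// "{shard1:42,shard2:17}" so it can travel in the resourceVersion field.
func encodeRVVector(revs map[string]int64) string {
	names := make([]string, 0, len(revs))
	for name := range revs {
		names = append(names, name)
	}
	sort.Strings(names) // deterministic ordering so the string is comparable
	parts := make([]string, 0, len(names))
	for _, name := range names {
		parts = append(parts, fmt.Sprintf("%s:%d", name, revs[name]))
	}
	return "{" + strings.Join(parts, ",") + "}"
}

// decodeRVVector is the inverse of encodeRVVector.
func decodeRVVector(rv string) (map[string]int64, error) {
	rv = strings.TrimSuffix(strings.TrimPrefix(rv, "{"), "}")
	revs := map[string]int64{}
	for _, part := range strings.Split(rv, ",") {
		kv := strings.SplitN(part, ":", 2)
		if len(kv) != 2 {
			return nil, fmt.Errorf("bad resource version segment %q", part)
		}
		rev, err := strconv.ParseInt(kv[1], 10, 64)
		if err != nil {
			return nil, fmt.Errorf("bad revision in %q: %w", part, err)
		}
		revs[kv[0]] = rev
	}
	return revs, nil
}
```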

Do you see any issues with this general design? I would like some feedback so that, if it is doable, we can make this change and contribute it back to the open-source community.

xww commented 2 years ago

Why was this closed? I think it's a good idea.

AlexanderYastrebov commented 1 year ago

Seems to be a duplicate of https://github.com/kubernetes/kubernetes/issues/98814

anveshreddy18 commented 5 months ago

Can we reopen this, @wilsonwang371? The duplicate was closed because the plan was to turn it into a KEP, but that didn't go through. Having both issues on such an important topic closed doesn't seem ideal. I guess apiserver is the right repo to track this.

Or, if the issue or proposal is being continued somewhere else, could you please link it here to help others? Thanks.

wilsonwang371 commented 5 months ago

I was thinking about this before, but in the end I made some contributions to etcd 3.5 and the Txn performance improved significantly. That's why I haven't worked on this for a long time.

It seems like many people are interested in this topic.

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 week ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 week ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/apiserver/issues/63#issuecomment-2332823477):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.