kubernetes-sigs / kind

Kubernetes IN Docker - local clusters for testing Kubernetes
https://kind.sigs.k8s.io/
Apache License 2.0

improve KIND performance on single node clusters | leader-elect=false | proxy-refresh-interval=70000 #2513

dimetron commented 3 years ago

In the CNCF talk below, a few items are mentioned as performance improvements Minikube has over KIND: "Improving the Performance of Your Kubernetes Cluster" - Priya Wadhwa, Google

Both Minikube and KIND can run in Docker mode, and minikube uses the same kindnet, but additionally it disables leader election and reduces API server polling:

++ etcd --proxy-refresh-interval=70000

++ kube-scheduler, kube-controller-manager --leader-elect=false

Q: Would this bring any benefit as a KIND default? Maybe it's something we should mention as a recommended tweak for single node clusters?
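For reference, these flags can already be applied per-cluster via kind's `kubeadmConfigPatches` mechanism. A sketch (untested; file name is hypothetical, flag values taken from the talk above):

```yaml
# kind-single-node.yaml (hypothetical name)
# Sketch: apply the flags discussed above to a single-node kind cluster.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
  kind: ClusterConfiguration
  scheduler:
    extraArgs:
      leader-elect: "false"
  controllerManager:
    extraArgs:
      leader-elect: "false"
  etcd:
    local:
      extraArgs:
        proxy-refresh-interval: "70000"
```

Used as `kind create cluster --config kind-single-node.yaml`.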

aojea commented 3 years ago

/assign @BenTheElder

> Both Minikube and KIND can run in Docker mode, and minikube uses the same kindnet, but additionally it disables leader election and reduces API server polling

AFAIK minikube is using KIND for running in docker, that is slightly different :smile:

BenTheElder commented 3 years ago

These are probably worth investigating :-)

For the original use case of kind (testing Kubernetes itself) we avoid deviating from kubeadm defaults for the most part; we don't want to miss a bug in the defaults when testing Kubernetes / kubeadm.

Single node clusters, however, are not used for this purpose, and there are a few ways we could tune them a bit more (for example, we also don't scale down the CoreDNS deployment currently).
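As an illustration of the CoreDNS point, scaling it down on an existing single-node cluster is a one-liner (assuming the default kube-system deployment name `coredns`; this requires a running cluster):

```shell
# Reduce CoreDNS from the kubeadm default of 2 replicas to 1 on a
# single-node cluster (deployment name "coredns" assumed).
kubectl -n kube-system scale deployment coredns --replicas=1
```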

That said, kubeadm now also has its own fairly extensive testing with the kinder project, which allows them to execute more quickly on CLI tools for testing kubeadm, so it may be more reasonable to deviate in kind for this purpose. cc @neolit123

I'm not sure how dramatic the benefits are, but it's probably worth exploring. For the most part we've tried to bring optimizations to the upstream components, or to kindnetd (which has no upstream equivalent ...).

neolit123 commented 3 years ago

Adjusting these knobs implies that the user knows the control plane will remain at one node. Both Kind and Kinder do not support scaling the control plane from e.g. 1 to 3, but for Kinder we actually want to support scale up/down in the future to test etcd stability. Today with Kinder we only test HA control planes, because that is what is run in production.

If Kind prefers to remain with a static node topology that does not change post cluster creation, these adjustments make sense. But I think I slightly prefer documenting these settings instead of applying them.

This can also be mentioned in the kubeadm "create cluster" guide.

BenTheElder commented 2 years ago

I would love to see the impact of these parameters quantified in kind clusters, and based on that evidence, PRs to tune them.

We can also tune them differently depending on the number of nodes.
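One rough way to start quantifying the impact (a sketch, not a benchmark harness): since each kind node is a container, compare idle control-plane CPU with and without the flags applied. This assumes the default cluster name `kind`, so the container is `kind-control-plane`:

```shell
# Sample idle CPU of the kind control-plane container once
# (default cluster name "kind" assumed).
docker stats --no-stream --format '{{.Name}}: {{.CPUPerc}}' kind-control-plane
```

Running this repeatedly on otherwise-idle clusters, before and after applying the flags, would give a first-order estimate of the savings.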