Open · engineater opened this issue 3 months ago
This issue is currently awaiting triage.
SIG Docs takes the lead on issue triage for this website, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
page related to the issue: https://kubernetes.io/docs/concepts/architecture/
Kubernetes worker nodes contain only kubelet and kube-proxy
Each node contains a kubelet, which communicates with the Kubernetes control plane. All nodes contain kube-proxy, which facilitates Kubernetes networking services.
There is no issue with the diagram; it depicts the architecture correctly.
Control-plane is a different node role, but when I conduct technical interviews about k8s, candidates are 100% sure that kubelet and kube-proxy are not on control-plane nodes. I think the image is the problem.
Yes, it is correct that kubelet and kube-proxy are not on control-plane nodes. The image shows the same.
I use k8s 1.29.5 deployed with standard kubeadm, and I can see that the control-plane node has kube-proxy.
$ kubectl get no -o wide
NAME                        STATUS   ROLES            AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                  KERNEL-VERSION                     CONTAINER-RUNTIME
ol9-master-151.my.private   Ready    control-plane    42d   v1.29.5   192.168.1.151   <none>        Oracle Linux Server 9.4   5.15.0-206.153.7.1.el9uek.x86_64   cri-o://1.29.4
ol9-worker-154.my.private   Ready    ingress,worker   42d   v1.29.5   192.168.1.154   <none>        Oracle Linux Server 9.4   5.15.0-206.153.7.1.el9uek.x86_64   cri-o://1.29.4
ol9-worker-155.my.private   Ready    ingress,worker   42d   v1.29.5   192.168.1.155   <none>        Oracle Linux Server 9.4   5.15.0-206.153.7.1.el9uek.x86_64   containerd://1.6.32
$ kubectl get po -A -o wide | grep proxy
kube-system   kube-proxy-59p9n   1/1   Running   16              37d   192.168.1.154   ol9-worker-154.my.private   <none>   <none>
kube-system   kube-proxy-9q7sz   1/1   Running   16              37d   192.168.1.151   ol9-master-151.my.private   <none>   <none>
kube-system   kube-proxy-j2vcf   1/1   Running   17 (144m ago)   37d   192.168.1.155   ol9-worker-155.my.private   <none>   <none>
$ kubectl get po kube-proxy-9q7sz -n kube-system -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2024-06-01T04:49:49Z"
  generateName: kube-proxy-
  labels:
    controller-revision-hash: f966846b6
    k8s-app: kube-proxy
    pod-template-generation: "2"
  name: kube-proxy-9q7sz
  namespace: kube-system
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: DaemonSet
    name: kube-proxy
    uid: 3721bef7-4306-43ff-85fb-e474916e4668
  resourceVersion: "168239"
  uid: 4b84e26a-cc63-443d-a260-1194073c001f
spec:
  . . . . .
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-07-08T07:25:04Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2024-06-01T04:49:49Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2024-07-08T07:25:04Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2024-07-08T07:25:04Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2024-06-01T04:49:49Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://3bcbf5e11da200df6b191d8b00c3d7f389934a97f9d24c956c2ad7a949c324c7
    image: registry.k8s.io/kube-proxy:v1.29.5
    imageID: registry.k8s.io/kube-proxy@sha256:4c9681a68b0f068f66e6c4120be71a4416621cad1427802deaaa79d01fdffb85
    lastState: {}
    name: kube-proxy
    ready: true
    restartCount: 16
    started: true
    state:
      running:
        startedAt: "2024-07-08T07:25:04Z"
  hostIP: 192.168.1.151
  hostIPs:
  - ip: 192.168.1.151
  phase: Running
  podIP: 192.168.1.151
  podIPs:
  - ip: 192.168.1.151
  qosClass: BestEffort
  startTime: "2024-06-01T04:49:49Z"
On the control-plane node I can see that kubelet is running:
$ sudo pstree | grep kubelet
|-kubelet---15*[{kubelet}]
$ systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; preset: disabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Mon 2024-07-08 12:24:37 +05; 2h 45min ago
       Docs: https://kubernetes.io/docs/
   Main PID: 1396 (kubelet)
      Tasks: 16 (limit: 12185)
     Memory: 74.2M
        CPU: 3min 7.576s
     CGroup: /system.slice/kubelet.service
             └─1396 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --pod-infra-container-image=registry.k8s.io/pause:3.9
. . . .
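The same check can be repeated on every node. A quick loop, assuming SSH access to the node names shown above (a sketch, not part of the original output):

$ for n in ol9-master-151 ol9-worker-154 ol9-worker-155; do ssh $n.my.private systemctl is-active kubelet; done
# prints "active" once per node if kubelet runs everywhere, control-plane included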
@engineater control-plane and master node are different things. Not all components running on master nodes are control-plane components. kube-proxy and kubelet run on every node, master as well as worker. Control-plane components are those that give instructions to other nodes and manage the cluster.
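One way to sanity-check this on a live cluster: kubeadm deploys kube-proxy as a DaemonSet (visible in the ownerReferences of the Pod YAML above), so its desired Pod count should equal the total node count, control-plane nodes included:

$ kubectl get ds kube-proxy -n kube-system
# DESIRED/CURRENT/READY should all equal the number of nodes in the cluster
$ kubectl get po -n kube-system -l k8s-app=kube-proxy -o wide
# one Pod per node, including the control-plane node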
@kundan2707 Well, in k8s 1.29 (maybe starting from 1.27) we do not have the concept of a "master" node. We have only the concepts of "control-plane" and "worker" nodes. So when we say "control plane", we mean "control-plane"; there is no "master" node, because we no longer have it.
As proof:
$ kubectl get no ol9-master-151.my.private -o yaml | grep master
csi.volume.kubernetes.io/nodeid: '{"csi.tigera.io":"ol9-master-151.my.private"}'
kubernetes.io/hostname: ol9-master-151.my.private
name: ol9-master-151.my.private
- address: ol9-master-151.my.private
$ kubectl get no ol9-master-151.my.private -o yaml | grep control
volumes.kubernetes.io/controller-managed-attach-detach: "true"
node-role.kubernetes.io/control-plane: ""
key: node-role.kubernetes.io/control-plane
- registry.k8s.io/kube-controller-manager@sha256:a9a64e67b66ea6fb43f976f65d8a0cadd68b0ed5ed2311d2fc4bf887403ecf8a
- registry.k8s.io/kube-controller-manager@sha256:acdb952db121aa1e8182d70b36ceb868020e0435b7b8fd016dda1346acbc22a3
- registry.k8s.io/kube-controller-manager:v1.29.5
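A shorter way to show the same thing with standard kubectl label selectors (on a cluster built this way, the old master label simply is not there to select on):

$ kubectl get nodes -l node-role.kubernetes.io/control-plane
# lists only nodes carrying the control-plane role label
$ kubectl get nodes -l node-role.kubernetes.io/master
# returns "No resources found" on clusters without the legacy label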
Well, if the diagram shows the architecture without mistakes, is it possible to add to the page https://kubernetes.io/docs/concepts/architecture/ some description saying that kubelet and kube-proxy also run on control-plane nodes? Or maybe add a couple of phrases?
control-plane is equivalent to "master", and it refers to a collection of critical components that manage the essentials of the cluster. Control-plane nodes are nodes that are meant to run the control-plane components. There are different setups in production. For example, the control-plane components can run on any of the "cluster nodes". They may be managed by the vendor rather than the cluster user. They can be deployed as static Pods or as systemd services. When they are deployed as static Pods, you will need kubelet to bring them up; otherwise, you may or may not need kubelet at all.
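As an illustration of the static-Pod case: on a kubeadm-built control-plane node, the control-plane component manifests live on disk and the local kubelet is what runs them (paths and file names assume kubeadm defaults):

$ ls /etc/kubernetes/manifests
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml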
There are many setups where the control-plane nodes are also used to deploy workload pods. You may want to deploy cluster management components there, such as authentication, third-party webhook servers, monitoring, and logging services. If your control-plane nodes are big enough, you may want to host pods from some trusted namespaces as well.
In all, the whole thing is pretty flexible. You choose your own topology based on your business requirements, rather than on what the typical setup is; the typical setup may or may not suit your needs.
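A concrete example of that flexibility: with kubeadm's defaults the control-plane nodes carry a NoSchedule taint, and letting ordinary workloads land there is just a matter of removing it, or giving trusted workloads a matching toleration (a sketch, assuming the default taint key):

$ kubectl taint nodes --all node-role.kubernetes.io/control-plane-
# or, per workload, add a toleration in the Pod spec:
tolerations:
- key: node-role.kubernetes.io/control-plane
  operator: Exists
  effect: NoSchedule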
@tengqm I think your explanation would be a good addition to the document https://kubernetes.io/docs/concepts/architecture/ .
The easiest fix is for the linked page to mention that this is an example reference architecture. Additionally, someone could expand it with a paragraph of text covering what @tengqm has shared in his comment: https://github.com/kubernetes/website/issues/47111#issuecomment-2214009124
/help
@neolit123: This request has been marked as needing help from a contributor.
Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
/sig architecture
I have some ideas for this issue if no one has started already! /assign
Thanks for the PR @robert-cronin. Commented on https://github.com/kubernetes/website/pull/47164
@engineater would #47164 help here?
The documentation misleadingly suggests that kube-proxy and kubelet are not on control-plane nodes.
On the page https://kubernetes.io/docs/concepts/architecture/ you can see an image that does not show the kube-proxy and kubelet components on the control-plane node. I think this is a mistake in the documentation.
What would you like to be added: On the page https://kubernetes.io/docs/concepts/architecture/ , the "CONTROL PLANE" block in the diagram needs kube-proxy and kubelet blocks added. And I think a "CRI" block needs to be added to the "CONTROL PLANE" block too.