snimje opened this issue 3 months ago
Does this mean that joining a cluster to Karmada has caused this? What could be wrong? As soon as I perform the 'unjoin' operation on the cluster, I can see the resource utilisation for the new pods in the node's kubectl describe output.
Since Karmada just syncs the node and pod information and builds the resource model, it's essentially a read-only operation on member clusters, so technically it won't change anything on them; thus I don't think it would affect the behavior of kubectl describe nodes <node-name>.
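For anyone who wants to check what Karmada has actually collected: the resource model lives on the Cluster object in the Karmada control plane, not on the member cluster. A minimal sketch, assuming the member cluster is registered as member1 and that the Cluster API exposes .status.resourceSummary as in recent Karmada releases; the kubeconfig path and cluster name are just examples:

```bash
# Run against the Karmada apiserver kubeconfig, not the member cluster.
# List registered clusters and their sync mode / readiness.
kubectl --kubeconfig ~/.kube/karmada-apiserver.config get clusters

# The aggregated resource model built from the synced node/pod data is
# expected under .status.resourceSummary of the Cluster object.
kubectl --kubeconfig ~/.kube/karmada-apiserver.config get cluster member1 -o yaml
```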
Add a Kubernetes v1.26.10 cluster to the Karmada v1.8 API. Add a deployment and create two nginx pods with 10m CPU and 256Mi memory. The Allocated resources on the member cluster do not change: the kubectl describe output shows zero values under the resource requests and limits columns (both CPU and Memory) for the new pods deployed via the Karmada API.
I just had a test against the master branch; it works as expected when I describe the node of the member cluster:
-bash-5.0# kubectl describe nodes member1-control-plane
Name: member1-control-plane
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=member1-control-plane
kubernetes.io/os=linux
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 05 Mar 2024 20:21:40 +0800
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: member1-control-plane
AcquireTime: <unset>
RenewTime: Sat, 09 Mar 2024 10:49:39 +0800
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 09 Mar 2024 10:46:53 +0800 Tue, 05 Mar 2024 20:21:40 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 09 Mar 2024 10:46:53 +0800 Tue, 05 Mar 2024 20:21:40 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 09 Mar 2024 10:46:53 +0800 Tue, 05 Mar 2024 20:21:40 +0800 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 09 Mar 2024 10:46:53 +0800 Tue, 05 Mar 2024 20:22:00 +0800 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 172.18.0.6
Hostname: member1-control-plane
Capacity:
cpu: 4
ephemeral-storage: 206100612Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 16393100Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 206100612Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 16393100Ki
pods: 110
System Info:
Machine ID: 1d1ea9bf72da4fe6af2e02ec489b9395
System UUID: 7a76976d-f1aa-4be8-90e6-63c684ec8c95
Boot ID: 7a0d579b-c513-4d5c-9e68-3a6da3456c78
Kernel Version: 5.4.0-144-generic
OS Image: Debian GNU/Linux 11 (bullseye)
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.1
Kubelet Version: v1.27.3
Kube-Proxy Version: v1.27.3
PodCIDR: 10.10.0.0/24
PodCIDRs: 10.10.0.0/24
ProviderID: kind://docker/member1/member1-control-plane
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default nginx-77b4fdf86c-fnhzd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d14h
kube-system coredns-5d78c9869d-8djvq 100m (2%) 0 (0%) 70Mi (0%) 170Mi (1%) 3d14h
kube-system coredns-5d78c9869d-hvctt 100m (2%) 0 (0%) 70Mi (0%) 170Mi (1%) 3d14h
kube-system etcd-member1-control-plane 100m (2%) 0 (0%) 100Mi (0%) 0 (0%) 3d14h
kube-system kindnet-nbfhj 100m (2%) 100m (2%) 50Mi (0%) 50Mi (0%) 3d14h
kube-system kube-apiserver-member1-control-plane 250m (6%) 0 (0%) 0 (0%) 0 (0%) 3d14h
kube-system kube-controller-manager-member1-control-plane 200m (5%) 0 (0%) 0 (0%) 0 (0%) 3d14h
kube-system kube-proxy-nxc8b 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d14h
kube-system kube-scheduler-member1-control-plane 100m (2%) 0 (0%) 0 (0%) 0 (0%) 3d14h
kube-system metrics-server-dd4f7c854-bwhk8 100m (2%) 0 (0%) 200Mi (1%) 0 (0%) 3d14h
local-path-storage local-path-provisioner-6bc4bddd6b-q5xx8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d14h
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1050m (26%) 100m (2%)
memory 490Mi (3%) 390Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events: <none>
Can you help reproduce it again with the master branch, and share with us more details of the operation steps?
This is working as expected in 1.9.0.
OK, I'll try to reproduce it again with v1.8.1 and get back to you.
What happened: While trying the customized-cluster-modeling feature I realized that the Allocated CPU and Memory resources in the member cluster are not changing.
On the member cluster it looks like this when I run 'kubectl describe' on the node:
Does this mean that joining a cluster to Karmada has caused this? What could be wrong? As soon as I perform the 'unjoin' operation on the cluster, I can see the resource utilisation for the new pods in the node's kubectl describe output.
This issue is observed on Karmada 1.8.
What you expected to happen: Karmada to reflect changes in the member cluster's resources, and the member cluster to show the resources allocated to the new pods properly.
How to reproduce it (as minimally and precisely as possible): Add a Kubernetes v1.26.10 cluster to the Karmada v1.8 API. Add a deployment and create two nginx pods with 10m CPU and 256Mi memory. The Allocated resources on the member cluster do not change: the kubectl describe output shows zero values under the resource requests and limits columns (both CPU and Memory) for the new pods deployed via the Karmada API. (A sketch of the kind of manifests involved is below.)
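For reference, a minimal sketch of the manifests this kind of reproduction would use, applied against the Karmada apiserver. The names, namespace, cluster name member1, and the exact requests are placeholders matching the description above, not the reporter's actual manifests:

```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            cpu: 10m
            memory: 256Mi
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx
  placement:
    clusterAffinity:
      clusterNames:
      - member1
EOF

# Then, on the member cluster, check the "Allocated resources" section:
kubectl --kubeconfig ~/.kube/member1.config describe node <node-name>
```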
Anything else we need to know?: As soon as I unjoin the cluster I can see the resource utilisation for the new pods.
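For context, the join/unjoin operations mentioned here are roughly the following; the cluster name, kubeconfig paths, and exact flags are assumptions and may differ between karmadactl versions:

```bash
# Register the member cluster with the Karmada control plane (push mode).
karmadactl join member1 \
  --kubeconfig ~/.kube/karmada-apiserver.config \
  --cluster-kubeconfig ~/.kube/member1.config

# Unregister it again; per the report above, the node's "Allocated resources"
# then reflect the new pods correctly.
karmadactl unjoin member1 \
  --kubeconfig ~/.kube/karmada-apiserver.config \
  --cluster-kubeconfig ~/.kube/member1.config
```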
Environment:
Karmada version (kubectl-karmada version or karmadactl version): 1.8.1