k8snetworkplumbingwg / whereabouts

A CNI IPAM plugin that assigns IP addresses cluster-wide
Apache License 2.0

[BUG] Scale down of deployment left pod references undeleted #483

Open smoshiur1237 opened 1 week ago

smoshiur1237 commented 1 week ago

Describe the bug We are hitting a situation in our test cluster where pod references are left undeleted during a scale down operation of a deployment. The number of leftovers increases if we do multiple scale down/up operations. After scaling a deployment down from 200 to 1, most of the pods stay in Terminating state and take a long time to get deleted from the cluster. After the deletion, some of the pod references are still visible, and they may accumulate over multiple scale up/down cycles. Here are some queries run during the process:

Scale down from 200 to 1, while most of the pods are still in Terminating state

date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  2.2.2.0-24 -o yaml | grep -c podref 
Thu Jun 20 06:56:08 UTC 2024
161
date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  3.3.3.0-24 -o yaml | grep -c podref
Thu Jun 20 06:53:34 UTC 2024
121

date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  4.4.4.0-24 -o yaml | grep -c podref 
Thu Jun 20 06:53:55 UTC 2024
66

date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  5.5.5.0-24 -o yaml | grep -c podref
Thu Jun 20 06:54:36 UTC 2024
11

date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  6.6.6.0-24 -o yaml | grep -c podref
Thu Jun 20 06:54:56 UTC 2024
1

date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  7.7.7.0-24 -o yaml | grep -c podref
Thu Jun 20 06:55:15 UTC 2024
1

date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  8.8.8.0-24 -o yaml | grep -c podref 
Thu Jun 20 06:55:35 UTC 2024
1

date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  9.9.9.0-24 -o yaml | grep -c podref
Thu Jun 20 06:55:55 UTC 2024
1
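
The same per-pool count can also be scripted in one pass instead of querying each range by hand (a minimal sketch, assuming all pools live in kube-system as above):

# Count podref entries in every whereabouts IPPool in one loop
for pool in $(kubectl get ippools.whereabouts.cni.cncf.io -n kube-system -o name); do
  printf '%s: ' "$pool"
  kubectl get "$pool" -n kube-system -o yaml | grep -c podref
done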

After the scale down from 200 to 1 has completed and all replicas are deleted

date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  2.2.2.0-24 -o yaml | grep -c podref
Thu Jun 20 08:30:53 UTC 2024
2
date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  3.3.3.0-24 -o yaml | grep -c podref
Thu Jun 20 08:32:41 UTC 2024
2
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  4.4.4.0-24 -o yaml | grep -c podref
2
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  5.5.5.0-24 -o yaml | grep -c podref
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  6.6.6.0-24 -o yaml | grep -c podref
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  7.7.7.0-24 -o yaml | grep -c podref
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  8.8.8.0-24 -o yaml | grep -c podref
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  9.9.9.0-24 -o yaml | grep -c podref
1

We can see one extra pod reference remaining in the 2.2.2.0/24, 3.3.3.0/24, and 4.4.4.0/24 ranges. The number of leftover references grows if we perform multiple scale up/down operations on the deployment.
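
To see which entries are left behind rather than just counting them, the allocations map in the IPPool spec can be dumped together with each entry's podref (a sketch using jq, shown for the 2.2.2.0-24 pool):

# Print each allocation index with the podref it still holds
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 2.2.2.0-24 -o json \
  | jq -r '.spec.allocations | to_entries[] | "\(.key)\t\(.value.podref)"'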

Expected behavior All podReferences of deleted pods should be removed from the list; only the running pods' podReferences should stay visible. For example, the expected output in this case, where only one pod is running after the scale down, would be:

date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  2.2.2.0-24 -o yaml | grep -c podref
Thu Jun 20 08:30:53 UTC 2024
1
date && kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  3.3.3.0-24 -o yaml | grep -c podref
Thu Jun 20 08:32:41 UTC 2024
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  4.4.4.0-24 -o yaml | grep -c podref
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  5.5.5.0-24 -o yaml | grep -c podref
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  6.6.6.0-24 -o yaml | grep -c podref
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  7.7.7.0-24 -o yaml | grep -c podref
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  8.8.8.0-24 -o yaml | grep -c podref
1
kubectl get ippools.whereabouts.cni.cncf.io -n kube-system  9.9.9.0-24 -o yaml | grep -c podref
1
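
A quick cross-check that the leftover entries are indeed stale is to test whether each podref still points at an existing pod (a hypothetical check, assuming the namespace/podname format whereabouts records in podref):

# Flag allocations whose referenced pod no longer exists
for ref in $(kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 2.2.2.0-24 -o json \
    | jq -r '.spec.allocations[].podref'); do
  ns=${ref%%/*}; name=${ref##*/}
  kubectl get pod -n "$ns" "$name" >/dev/null 2>&1 || echo "stale: $ref"
done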

To Reproduce Steps to reproduce the behavior:

  1. Set up the test environment with whereabouts' own kind setup by running make kind (1 control plane and 2 workers)
  2. Apply NetworkAttachmentDefinitions with different ranges, such as range1, range2, ..., range8 (see the example sketched after this list)
  3. Apply a deployment and scale it up to 200 replicas
  4. Scale the deployment down to 1 and wait for all pods to be removed completely. Pods stay in Terminating state for a long time before they are eventually removed
  5. Then check the IP pools with the following command: kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 9.9.9.0-24 -o yaml | grep -c podref. Extra podReferences are visible that should have been removed when their pods were deleted.
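
For reference, step 2 can use a NetworkAttachmentDefinition along these lines (a sketch, not the exact manifest from our cluster; the macvlan master interface and the name range1 are illustrative, and the range matches the 2.2.2.0/24 pool from the outputs above):

# Hypothetical NetworkAttachmentDefinition for one of the eight ranges
cat <<'EOF' | kubectl apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: range1
  namespace: default
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "ipam": {
        "type": "whereabouts",
        "range": "2.2.2.0/24"
      }
    }'
EOF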

Environment:

Additional info / context

smoshiur1237 commented 1 week ago

/cc @mlguerrero12