ajaypraj opened this issue 3 days ago
```yaml
networking:
  podSubnet: "172.16.0.0/24,fde1::/64"
  serviceSubnet: "172.16.1.0/16,fde1::/112"
```
The IPv6 pod and service subnets overlap, I think? (`fde1::/112` lies inside `fde1::/64`, and the two ranges must not overlap.)
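One way to confirm this: Python's `ipaddress` module can check whether two CIDRs intersect (a quick local sanity check, not part of the original report):

```python
import ipaddress

# CIDRs from the kubeadm config above
pod_v6 = ipaddress.ip_network("fde1::/64")
svc_v6 = ipaddress.ip_network("fde1::/112")
pod_v4 = ipaddress.ip_network("172.16.0.0/24")
# strict=False because "172.16.1.0/16" has host bits set; it normalizes to 172.16.0.0/16
svc_v4 = ipaddress.ip_network("172.16.1.0/16", strict=False)

# overlaps() is True when one network contains addresses of the other
print(pod_v6.overlaps(svc_v6))  # → True: fde1::/112 sits inside fde1::/64
print(pod_v4.overlaps(svc_v4))  # → True: 172.16.0.0/24 sits inside 172.16.0.0/16
```

Note that the IPv4 ranges overlap too, for the same reason.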
```
Annotations:  cni.projectcalico.org/containerID: 67a2d6b03a05d10438b3b96f4ee60d2fd09e0341132f8040780c03c4981c7104
              cni.projectcalico.org/podIP: 172.16.0.180/32
              cni.projectcalico.org/podIPs: 172.16.0.180/32,fde1::5bb2:9224:62c3:c373/128
              kubectl.kubernetes.io/restartedAt: 2024-10-08T00:21:27-07:00
Status:       Running
IP:           172.16.0.180
IPs:
  IP:  172.16.0.180
```
You can see from the annotation that Calico believes there are two IPs allocated here.
Calico is not responsible for populating the `Status.PodIPs` field - that comes from Kubernetes (the kubelet), so it's best to look into why k8s isn't setting that field. It might be due to the overlapping-ranges issue @lwr20 mentioned?
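A quick way to see the mismatch is to diff the Calico annotation against `status.podIPs`. The sketch below uses a trimmed stand-in dict for the `kubectl get pod -o json` output (values taken from the describe output above):

```python
# Trimmed stand-in for `kubectl get pod <name> -o json` output
pod = {
    "metadata": {"annotations": {
        "cni.projectcalico.org/podIPs": "172.16.0.180/32,fde1::5bb2:9224:62c3:c373/128",
    }},
    "status": {"podIPs": [{"ip": "172.16.0.180"}]},
}

# IPs Calico says it allocated (strip the /32 and /128 prefix lengths)
calico_ips = {cidr.split("/")[0]
              for cidr in pod["metadata"]["annotations"]["cni.projectcalico.org/podIPs"].split(",")}
# IPs Kubernetes actually reports in the pod status
status_ips = {entry["ip"] for entry in pod["status"]["podIPs"]}

missing = calico_ips - status_ips
print(sorted(missing))  # the IPv6 address Calico allocated but the kubelet never reported
```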
I tried with a different IPv6 pod CIDR so that the IPv6 addresses do not overlap, but there is no change in the result.
```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: "unix:///var/run/cri-dockerd.sock"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "v1.30.3"
networking:
  podSubnet: "172.16.0.0/24,fde1:0:0:1::/64"
  serviceSubnet: "172.16.1.0/24,fde1:0:0:2::/112"
dualStack: true
apiServer:
  extraArgs:
    advertise-address: "172.16.2.1"
    tls-cipher-suites: "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
tlsCipherSuites:
  - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
  - "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
  - "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
```
In the calico-kube-controllers log I can see IP leaks for IPv6 addresses. Any suggestions or advice on this IP leak?
```
2024-10-10 08:21:43.200 [INFO][1] ipam_allocation.go 175: Candidate IP leak handle="k8s-pod-network.435c179953d504d7e8698a8bc57f90a633c81c46da388d6c63b9148594b50b84" ip="fde1::1:a1bc:8e13:85c:2c4d" node="delltechnologies-networkappliance1" pod="omni/omni-events-celery-worker-5c98765fc-c2wkk"
2024-10-10 08:21:47.252 [INFO][1] ipam_allocation.go 175: Candidate IP leak handle="k8s-pod-network.3680f8a3d028519bb0d2f4edb3eb816f18ca8169c5cf5b5e697f7145d8d75d9b" ip="fde1::1:a1bc:8e13:85c:2c4e" node="delltechnologies-networkappliance1" pod="omni/omni-events-celery-beat-5f66b4459-l97j7"
2024-10-10 08:24:42.195 [INFO][1] ipam_allocation.go 175: Candidate IP leak handle="k8s-pod-network.80705e75207c3817e99b22c731f519fce432d6c2582ac7be8db111245356358f" ip="fde1::1:a1bc:8e13:85c:2c50" node="delltechnologies-networkappliance1" pod="omni/omni-automation-app-celery-beat-5c68f48d8d-xzbc4"
2024-10-10 08:24:42.198 [INFO][1] ipam_allocation.go 175: Candidate IP leak handle="k8s-pod-network.cf26aa97154152bafbf2e15885f774949023d8f9f1606e40876a31da99bc7f03" ip="fde1::1:a1bc:8e13:85c:2c4f" node="delltechnologies-networkappliance1" pod="omni/omni-automation-app-celery-worker-8485b5cb46-t66w7"
2024-10-10 08:24:42.199 [INFO][1] ipam_allocation.go 175: Candidate IP leak handle="k8s-pod-network.1eac42710c7b0fdfd90c99e254d3b6d108761dd8daaaa890300673e6f40ac983" ip="fde1::1:a1bc:8e13:85c:2c51" node="delltechnologies-networkappliance1" pod="omni/ciam-0"
2024-10-10 08:35:33.572 [WARNING][1] ipam_allocation.go 196: Confirmed IP leak after 15m0.002899887s handle="k8s-pod-network.719a03bb8ee9e1a3049bb1368be1498d6d4696addd0007d3c9ff942738b19188" ip="fde1::1:a1bc:8e13:85c:2c40" node="delltechnologies-networkappliance1" pod="kube-system/coredns-7db6d8ff4d-vxx26"
2024-10-10 08:35:33.573 [WARNING][1] ipam_allocation.go 196: Confirmed IP leak after 15m0.003552967s handle="k8s-pod-network.f18d27fdd4823398eda0bbfbb3b069c2424ced1e0b6cd3071f76975f2a4b5b44" ip="fde1::1:a1bc:8e13:85c:2c41" node="delltechnologies-networkappliance1" pod="kube-system/coredns-7db6d8ff4d-fwscs"
2024-10-10 08:39:58.275 [WARNING][1] ipam_allocation.go 196: Confirmed IP leak after 18m32.633107536s handle="k8s-pod-network.a1276d83eeb172a93fd45bcc08e4a296e35bae3534aec8e69c0ea9dbf49d2012" ip="fde1::1:a1bc:8e13:85c:2c44" node="delltechnologies-networkappliance1" pod="omni/omni-queue-0"
2024-10-10 08:39:58.275 [WARNING][1] ipam_allocation.go 196: Confirmed IP leak after 18m32.633603016s handle="k8s-pod-network.a072363a0b57cdd1b51f1543e0edbaa0b992dd8bca81e71f5d219cc5a7ee610d" ip="fde1::1:a1bc:8e13:85c:2c43" node="delltechnologies-networkappliance1" pod="omni/omni-db-0"
2024-10-10 08:39:58.276 [WARNING][1] ipam_allocation.go 196: Confirmed IP leak after 18m27.393939706s handle="k8s-pod-network.eaeea463d07957630128dbf9b3ff50f0ec59c5cde22fb56cea0e7170fd8efb03" ip="fde1::1:a1bc:8e13:85c:2c45" node="delltechnologies-networkappliance1" pod="omni/omni-api-7d8f5bd47c-xn57h"
2024-10-10 08:39:58.277 [WARNING][1] ipam_allocation.go 196: Confirmed IP leak after 18m40.052962678s handle="k8s-pod-network.:
```
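These leak lines have a fixed shape, so a small regex is enough to tabulate candidate vs. confirmed leaks from a saved log. This is a local sketch over a trimmed excerpt of the lines above (handle and node values are shortened), not a Calico tool:

```python
import re

# Trimmed excerpt of the calico-kube-controllers log above
log = """\
2024-10-10 08:21:43.200 [INFO][1] ipam_allocation.go 175: Candidate IP leak handle="k8s-pod-network.435c" ip="fde1::1:a1bc:8e13:85c:2c4d" node="n1" pod="omni/omni-events-celery-worker-5c98765fc-c2wkk"
2024-10-10 08:35:33.572 [WARNING][1] ipam_allocation.go 196: Confirmed IP leak after 15m0.002899887s handle="k8s-pod-network.719a" ip="fde1::1:a1bc:8e13:85c:2c40" node="n1" pod="kube-system/coredns-7db6d8ff4d-vxx26"
"""

# Capture the leak stage, the leaked IP, and the owning pod from each line
pattern = re.compile(r'(Candidate|Confirmed) IP leak.*?ip="([^"]+)".*?pod="([^"]+)"')
for stage, ip, pod in pattern.findall(log):
    print(f"{stage}: {pod} leaked {ip}")
```

Only the `Confirmed` entries (after the ~15 minute grace period) indicate allocations the controller considers actually leaked.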
I am trying to set up a single-node dual-stack cluster using Kubernetes 1.30.3 on an on-premise Debian 11 VM. My cluster and workloads are up and running. I can see dual IPs assigned to services, but pods get only IPv4 addresses, when they should also get dual IPs. I have configured Calico for dual stack as per the documentation, but the dual IPs are missing from the pods.
## Expected Behavior
Pod IPs should be populated with dual IPs. Here is the output of `kubectl describe po`, where the IPs are IPv4 only.
Calico node pod configuration
Attaching the calico-node log: calico_node.log
## Current Behavior
All pods and services are up and running. Services show dual IPs as well. The node has both IPv4 and IPv6 IPPools.
Describing svc omni-api, which shows dual IPs:
Using the cluster config below to set up the Kubernetes cluster with kubeadm:

## Environment