projectcalico / calico

Cloud native networking and network security
https://docs.tigera.io/calico/latest/about/
Apache License 2.0

Calico failed to assign dual IPs to pod #9312

Open ajaypraj opened 3 days ago

ajaypraj commented 3 days ago

I am trying to set up a single-node dual-stack cluster with Kubernetes 1.30.3 on an on-premise Debian 11 VM. My cluster and workload are up and running. I can see that dual IPs are assigned to services, but pods are only assigned IPv4 addresses, when they should also get dual IPs. I have configured Calico for dual stack as per the documentation, but the dual IPs are missing from the pods.

Expected Behavior

Pod IPs should be populated with dual IPs. Here is the output of kubectl describe po, where the IPs are IPv4 only.

root@delltechnologies-networkappliance:~# kubectl describe po omni-api-6d749d569f-b8nxz -n omni
Name:             omni-api-6d749d569f-b8nxz
Namespace:        omni
Priority:         0
Service Account:  default
Node:             delltechnologies-networkappliance/100.104.26.88
Start Time:       Tue, 08 Oct 2024 00:21:27 -0700
Labels:           com.dell.omni.alias=Delaware_Application_Server
                  com.dell.omni.startup=Automatic
                  com.dell.omni.type=Application
                  pod-template-hash=6d749d569f
Annotations:      cni.projectcalico.org/containerID: 67a2d6b03a05d10438b3b96f4ee60d2fd09e0341132f8040780c03c4981c7104
                  cni.projectcalico.org/podIP: 172.16.0.180/32
                  cni.projectcalico.org/podIPs: 172.16.0.180/32,fde1::5bb2:9224:62c3:c373/128
                  kubectl.kubernetes.io/restartedAt: 2024-10-08T00:21:27-07:00
Status:           Running
IP:               172.16.0.180
IPs:
  IP:           172.16.0.180
Controlled By:  ReplicaSet/omni-api-6d749d569f
Containers:
  omni-api-cont:
    Container ID:  docker://29846a5da6100040106ba5c8985a5eabef3c731a1a76f089fb00728e14461d8a
    Image:         omni_api:3.7.0.55
    Image ID:      docker://sha256:5ee5314c8cb1de33d8de2fd6d8478a8457056271f64303a9ec13769048e30ac0
    Port:          8080/TCP
    Host Port:     0/TCP
    Command:
      /bin/bash
      -c
      python -c "from vcenterapp.dell_model import migrate;"
      python -c "from vcenterapp.cache import populate_cache;"
      /usr/local/bin/gunicorn -w 8 --worker-class=gevent --worker-connections=1000 -t 120 -b $HOSTNAME:8080 --certfile=/app/config/sslworkspace/dellIsengardServer-crt.pem --keyfile=/app/config/sslworkspace/dellIsengardServer-key.pem vcenterapp.wsgi:application
    State:          Running
      Started:      Tue, 08 Oct 2024 00:21:28 -0700
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      app-env     ConfigMap  Optional: false
    Environment:  <none>
    Mounts:
      /app/config/backupFiles from backupfiles (rw)
      /app/config/db from dell-config (rw)
      /app/config/hostname from hostname (rw)
      /app/config/images from images (rw)
      /app/config/log from app-config-log (rw)
      /app/config/sslworkspace from sslworkspace (rw)
      /app/config/upgradeFiles from upgradefiles (rw)
      /app/config/upgrades from upgrades (rw)
      /app/dep_vcenter_template.yaml from vc-dep-yaml-template (rw)
      /app/kubeconfig from kubeconfig (rw)
      /etc/group from etc-group (ro)
      /etc/localtime from time (ro)
      /etc/passwd from etc-passwd (ro)
      /etc/shadow from etc-shadow (ro)
      /home/isengard from homedir (rw)
      /var/log/omni from omni-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lpd5x (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  etc-group:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/group
    HostPathType:
  etc-passwd:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/passwd
    HostPathType:
  etc-shadow:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/shadow
    HostPathType:
  dell-config:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/dell/config
    HostPathType:
  app-config-log:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/omni/omni_api
    HostPathType:
  omni-log:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/omni
    HostPathType:
  homedir:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isengard
    HostPathType:
  images:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isengard/workspace/omni/images
    HostPathType:
  upgrades:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isengard/workspace/omni/upgradefiles
    HostPathType:
  backupfiles:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isengard/workspace/omni/backupFiles
    HostPathType:
  upgradefiles:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isengard/upgrade
    HostPathType:
  sslworkspace:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isengard/workspace/sslworkspace
    HostPathType:
  time:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:
  hostname:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/hostname
    HostPathType:
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isengard/.kube/config
    HostPathType:
  vc-dep-yaml-template:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isengard/vcenterapp/k8s_resources/vc/dep_vcenter_template.yaml
    HostPathType:
  kube-api-access-lpd5x:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
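
For reference, the same field can be read directly with a jsonpath query (a quick check, not part of the original report):

kubectl get pod omni-api-6d749d569f-b8nxz -n omni -o jsonpath='{.status.podIPs[*].ip}{"\n"}'
# a dual-stack pod would print both addresses, e.g.: 172.16.0.180 fde1::5bb2:9224:62c3:c373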

Calico node pod configuration

root@delltechnologies-networkappliance:~# kubectl describe po calico-node-w668z -n kube-system
Name:                 calico-node-w668z
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      calico-node
Node:                 delltechnologies-networkappliance/100.104.26.88
Start Time:           Mon, 07 Oct 2024 22:23:40 -0700
Labels:               controller-revision-hash=676df87d4f
                      k8s-app=calico-node
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
IP:                   100.104.26.88
IPs:
  IP:           100.104.26.88
Controlled By:  DaemonSet/calico-node
Init Containers:
  upgrade-ipam:
    Container ID:  docker://b8a67c93ce48dfa282ef0b82bf70b657e0817504af301bd8c69d3b9c4a87ece4
    Image:         docker.io/calico/cni:v3.28.1
    Image ID:      docker://sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/calico-ipam
      -upgrade
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 08 Oct 2024 00:18:12 -0700
      Finished:     Tue, 08 Oct 2024 00:18:13 -0700
    Ready:          True
    Restart Count:  1
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      KUBERNETES_NODE_NAME:        (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:  <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
    Mounts:
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/lib/cni/networks from host-local-net-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tp82w (ro)
  install-cni:
    Container ID:  docker://5037590bd6be6121284a623b9c5f76b09855cb79e6c9236e88794cba921b8316
    Image:         docker.io/calico/cni:v3.28.1
    Image ID:      docker://sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/install
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 08 Oct 2024 00:18:13 -0700
      Finished:     Tue, 08 Oct 2024 00:18:14 -0700
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      CNI_CONF_NAME:         10-calico.conflist
      CNI_NETWORK_CONFIG:    <set to the key 'cni_network_config' of config map 'calico-config'>  Optional: false
      KUBERNETES_NODE_NAME:   (v1:spec.nodeName)
      CNI_MTU:               <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      SLEEP:                 false
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tp82w (ro)
  mount-bpffs:
    Container ID:  docker://d069d63742cd4bacb2b09f7aa45d947c27a088933cd363376489f796510d3e58
    Image:         docker.io/calico/node:v3.28.1
    Image ID:      docker://sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc
    Port:          <none>
    Host Port:     <none>
    Command:
      calico-node
      -init
      -best-effort
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 08 Oct 2024 00:18:14 -0700
      Finished:     Tue, 08 Oct 2024 00:18:15 -0700
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /nodeproc from nodeproc (ro)
      /sys/fs from sys-fs (rw)
      /var/run/calico from var-run-calico (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tp82w (ro)
Containers:
  calico-node:
    Container ID:   docker://cb29c166f32cd71f0e13f43edc579669a3514869c184f5878692a0b3c3a1ef37
    Image:          docker.io/calico/node:v3.28.1
    Image ID:       docker://sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 08 Oct 2024 00:18:15 -0700
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 07 Oct 2024 23:51:38 -0700
      Finished:     Tue, 08 Oct 2024 00:15:45 -0700
    Ready:          True
    Restart Count:  23
    Requests:
      cpu:      250m
    Liveness:   exec [/bin/calico-node -felix-live -bird-live] delay=10s timeout=10s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/calico-node -felix-ready -bird-ready] delay=0s timeout=10s period=10s #success=1 #failure=3
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      DATASTORE_TYPE:                     kubernetes
      WAIT_FOR_DATASTORE:                 true
      NODENAME:                            (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:          <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
      CLUSTER_TYPE:                       k8s,bgp
      IP:                                 autodetect
      IP6:                                autodetect
      CALICO_IPV6POOL_NAT_OUTGOING:       true
      CALICO_IPV4POOL_IPIP:               Always
      CALICO_IPV4POOL_VXLAN:              Never
      CALICO_IPV6POOL_VXLAN:              Never
      FELIX_IPINIPMTU:                    <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      FELIX_VXLANMTU:                     <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      FELIX_WIREGUARDMTU:                 <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      CALICO_DISABLE_FILE_LOGGING:        true
      FELIX_DEFAULTENDPOINTTOHOSTACTION:  ACCEPT
      FELIX_IPV6SUPPORT:                  true
      FELIX_HEALTHENABLED:                true
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /sys/fs/bpf from bpffs (rw)
      /var/lib/calico from var-lib-calico (rw)
      /var/log/calico/cni from cni-log-dir (ro)
      /var/run/calico from var-run-calico (rw)
      /var/run/nodeagent from policysync (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tp82w (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:
  var-run-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/calico
    HostPathType:  DirectoryOrCreate
  var-lib-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/calico
    HostPathType:  DirectoryOrCreate
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  sys-fs:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/
    HostPathType:  DirectoryOrCreate
  bpffs:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/bpf
    HostPathType:  Directory
  nodeproc:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:
  cni-bin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  DirectoryOrCreate
  cni-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:
  cni-log-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/calico/cni
    HostPathType:
  host-local-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/cni/networks
    HostPathType:
  policysync:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/nodeagent
    HostPathType:  DirectoryOrCreate
  kube-api-access-tp82w:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
                             CriticalAddonsOnly op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:                      <none>

Attaching the calico-node log: calico_node.log

Current Behavior

All pods and services are up and running. Services are showing dual IPs as well, and the node has both IPv4 and IPv6 IP pools.

root@delltechnologies-networkappliance:~# kubectl get po -A
NAMESPACE     NAME                                                        READY   STATUS    RESTARTS       AGE
kube-system   calico-kube-controllers-77d59654f4-bwfhk                    1/1     Running   1 (91m ago)    3h23m
kube-system   calico-node-w668z                                           1/1     Running   23 (91m ago)   3h23m
kube-system   coredns-7db6d8ff4d-bhxts                                    1/1     Running   1 (91m ago)    3h24m
kube-system   coredns-7db6d8ff4d-q6r54                                    1/1     Running   1 (91m ago)    3h24m
kube-system   etcd-delltechnologies-networkappliance                      1/1     Running   1 (91m ago)    3h24m
kube-system   kube-apiserver-delltechnologies-networkappliance            1/1     Running   1 (91m ago)    3h24m
kube-system   kube-controller-manager-delltechnologies-networkappliance   1/1     Running   1 (91m ago)    3h24m
kube-system   kube-proxy-cxx9h                                            1/1     Running   1 (91m ago)    3h24m
kube-system   kube-scheduler-delltechnologies-networkappliance            1/1     Running   1 (91m ago)    3h24m
omni          ciam-0                                                      1/1     Running   0              84m
omni          omni-api-6d749d569f-b8nxz                                   1/1     Running   0              85m
omni          omni-api-app-celery-beat-d476b5db7-dqhjz                    1/1     Running   0              85m
omni          omni-api-celery-worker-6866b9f6cc-9ncxt                     1/1     Running   0              85m
omni          omni-automation-app-celery-beat-f95d6f74b-mbqxb             1/1     Running   1 (85m ago)    85m
omni          omni-automation-app-celery-worker-7c87d66d85-ftbzh          1/1     Running   1 (85m ago)    85m
omni          omni-db-0                                                   1/1     Running   0              85m
omni          omni-events-celery-beat-84d846f4f5-vvlj9                    1/1     Running   1 (85m ago)    85m
omni          omni-events-celery-worker-588999988f-5qrm2                  1/1     Running   1 (85m ago)    85m
omni          omni-events-receiver-777999f45b-zb895                       1/1     Running   1 (85m ago)    85m
omni          omni-nginx-6498b55f9b-7fvz6                                 1/1     Running   0              85m
omni          omni-queue-0                                                1/1     Running   0              85m
omni          omni-redis-7fdc69575b-w2g7h                                 1/1     Running   0              85m
omni          omni-services-59d9f4f758-pxlbh                              1/1     Running   1 (85m ago)    85m
omni          omni-services-celery-worker-69f7c65668-thskh                1/1     Running   0              85m

Describing the svc omni-api, which shows dual IPs:

root@delltechnologies-networkappliance:~# kubectl describe svc omni-api -n omni
Name:              omni-api
Namespace:         omni
Labels:            <none>
Annotations:       <none>
Selector:          com.dell.omni.alias=Delaware_Application_Server
Type:              ClusterIP
IP Family Policy:  PreferDualStack
IP Families:       IPv4,IPv6
IP:                172.16.199.93
IPs:               172.16.199.93,fde1::f66d
Port:              port-8080  8080/TCP
TargetPort:        8080/TCP
Endpoints:         172.16.0.180:8080
Session Affinity:  None
Events:            <none>
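
Note that the Endpoints line above lists only the IPv4 address. Per-family endpoints can also be checked through EndpointSlices (an extra diagnostic, not run in this report):

kubectl get endpointslices -n omni -l kubernetes.io/service-name=omni-api
# one slice per address family; the IPv6 slice stays empty while pods publish only IPv4 addresses
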
root@delltechnologies-networkappliance:~# calicoctl get ippools
NAME                  CIDR            SELECTOR
default-ipv4-ippool   172.16.0.0/24   all()
default-ipv6-ippool   fde1::/64       all()
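
The pool definitions can also be dumped in full to confirm the IPv6 pool is enabled for assignment (a standard check, not from the thread):

calicoctl get ippool default-ipv6-ippool -o yaml
# verify spec.disabled is not true and that the selector all() matches the node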

Using the cluster config below to set up the Kubernetes cluster with kubeadm:

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: "unix:///var/run/cri-dockerd.sock"

---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "v1.30.3"
networking:
  podSubnet: "172.16.0.0/24,fde1::/64"
  serviceSubnet: "172.16.1.0/16,fde1::/112"
apiServer:
  extraArgs:
    advertise-address: "172.16.2.1"
    tls-cipher-suites: "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"

---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
tlsCipherSuites:
  - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
  - "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
  - "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"

Environment


* CRI interface: cri-dockerd
* cgroup driver: systemd

Requesting you to kindly look into this, or suggest a possible cause for the failure of dual-IP assignment and a solution to correct it.
lwr20 commented 2 days ago

networking:
  podSubnet: "172.16.0.0/24,fde1::/64"
  serviceSubnet: "172.16.1.0/16,fde1::/112"

The IPv6 pod and service subnets overlap, I think? (they must not)
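
(A throwaway check of that claim with Python's ipaddress module, nothing cluster-specific:)

python3 -c "import ipaddress as i; print(i.ip_network('fde1::/64').overlaps(i.ip_network('fde1::/112')))"
# prints True -- fde1::/112 is contained within fde1::/64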

caseydavenport commented 2 days ago

Annotations:      cni.projectcalico.org/containerID: 67a2d6b03a05d10438b3b96f4ee60d2fd09e0341132f8040780c03c4981c7104
                  cni.projectcalico.org/podIP: 172.16.0.180/32
                  cni.projectcalico.org/podIPs: 172.16.0.180/32,fde1::5bb2:9224:62c3:c373/128
                  kubectl.kubernetes.io/restartedAt: 2024-10-08T00:21:27-07:00
Status:           Running
IP:               172.16.0.180
IPs:
  IP:           172.16.0.180

You can see from the annotation that Calico believes there are two IPs allocated here.

Calico is not responsible for populating the status.podIPs field - that comes from Kubernetes, so best to look into why k8s isn't setting that field. It might be due to the overlapping ranges issue @lwr20 mentioned?
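
One way to see the mismatch side by side is to print Calico's annotation next to what kubelet published (a suggested check; jsonpath needs the dots in the annotation key escaped):

kubectl get pod omni-api-6d749d569f-b8nxz -n omni \
  -o jsonpath='{.metadata.annotations.cni\.projectcalico\.org/podIPs}{"\n"}{.status.podIPs[*].ip}{"\n"}'
# here the annotation shows both families while status.podIPs yields only the IPv4 address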

ajaypraj commented 19 hours ago

I tried with a different IPv6 pod CIDR so that the IPv6 addresses do not overlap, but there is no change in the result.

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: "unix:///var/run/cri-dockerd.sock"

---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "v1.30.3"
networking:
  podSubnet: "172.16.0.0/24,fde1:0:0:1::/64"
  serviceSubnet: "172.16.1.0/24,fde1:0:0:2::/112"
  dualStack: true
apiServer:
  extraArgs:
    advertise-address: "172.16.2.1"
    tls-cipher-suites: "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"

---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
tlsCipherSuites:
  - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
  - "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
  - "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"

I can see in the calico-kube-controllers log that there are IP leaks for IPv6 addresses. Any suggestion or advice on this IP leak?

2024-10-10 08:21:43.200 [INFO][1] ipam_allocation.go 175: Candidate IP leak handle="k8s-pod-network.435c179953d504d7e8698a8bc57f90a633c81c46da388d6c63b9148594b50b84" ip="fde1::1:a1bc:8e13:85c:2c4d" node="delltechnologies-networkappliance1" pod="omni/omni-events-celery-worker-5c98765fc-c2wkk"
2024-10-10 08:21:47.252 [INFO][1] ipam_allocation.go 175: Candidate IP leak handle="k8s-pod-network.3680f8a3d028519bb0d2f4edb3eb816f18ca8169c5cf5b5e697f7145d8d75d9b" ip="fde1::1:a1bc:8e13:85c:2c4e" node="delltechnologies-networkappliance1" pod="omni/omni-events-celery-beat-5f66b4459-l97j7"
2024-10-10 08:24:42.195 [INFO][1] ipam_allocation.go 175: Candidate IP leak handle="k8s-pod-network.80705e75207c3817e99b22c731f519fce432d6c2582ac7be8db111245356358f" ip="fde1::1:a1bc:8e13:85c:2c50" node="delltechnologies-networkappliance1" pod="omni/omni-automation-app-celery-beat-5c68f48d8d-xzbc4"
2024-10-10 08:24:42.198 [INFO][1] ipam_allocation.go 175: Candidate IP leak handle="k8s-pod-network.cf26aa97154152bafbf2e15885f774949023d8f9f1606e40876a31da99bc7f03" ip="fde1::1:a1bc:8e13:85c:2c4f" node="delltechnologies-networkappliance1" pod="omni/omni-automation-app-celery-worker-8485b5cb46-t66w7"
2024-10-10 08:24:42.199 [INFO][1] ipam_allocation.go 175: Candidate IP leak handle="k8s-pod-network.1eac42710c7b0fdfd90c99e254d3b6d108761dd8daaaa890300673e6f40ac983" ip="fde1::1:a1bc:8e13:85c:2c51" node="delltechnologies-networkappliance1" pod="omni/ciam-0"
2024-10-10 08:35:33.572 [WARNING][1] ipam_allocation.go 196: Confirmed IP leak after 15m0.002899887s handle="k8s-pod-network.719a03bb8ee9e1a3049bb1368be1498d6d4696addd0007d3c9ff942738b19188" ip="fde1::1:a1bc:8e13:85c:2c40" node="delltechnologies-networkappliance1" pod="kube-system/coredns-7db6d8ff4d-vxx26"
2024-10-10 08:35:33.573 [WARNING][1] ipam_allocation.go 196: Confirmed IP leak after 15m0.003552967s handle="k8s-pod-network.f18d27fdd4823398eda0bbfbb3b069c2424ced1e0b6cd3071f76975f2a4b5b44" ip="fde1::1:a1bc:8e13:85c:2c41" node="delltechnologies-networkappliance1" pod="kube-system/coredns-7db6d8ff4d-fwscs"
2024-10-10 08:39:58.275 [WARNING][1] ipam_allocation.go 196: Confirmed IP leak after 18m32.633107536s handle="k8s-pod-network.a1276d83eeb172a93fd45bcc08e4a296e35bae3534aec8e69c0ea9dbf49d2012" ip="fde1::1:a1bc:8e13:85c:2c44" node="delltechnologies-networkappliance1" pod="omni/omni-queue-0"
2024-10-10 08:39:58.275 [WARNING][1] ipam_allocation.go 196: Confirmed IP leak after 18m32.633603016s handle="k8s-pod-network.a072363a0b57cdd1b51f1543e0edbaa0b992dd8bca81e71f5d219cc5a7ee610d" ip="fde1::1:a1bc:8e13:85c:2c43" node="delltechnologies-networkappliance1" pod="omni/omni-db-0"
2024-10-10 08:39:58.276 [WARNING][1] ipam_allocation.go 196: Confirmed IP leak after 18m27.393939706s handle="k8s-pod-network.eaeea463d07957630128dbf9b3ff50f0ec59c5cde22fb56cea0e7170fd8efb03" ip="fde1::1:a1bc:8e13:85c:2c45" node="delltechnologies-networkappliance1" pod="omni/omni-api-7d8f5bd47c-xn57h"
2024-10-10 08:39:58.277 [WARNING][1] ipam_allocation.go 196: Confirmed IP leak after 18m40.052962678s handle="k8s-pod-network.:
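
If useful, the IPAM state behind those messages can be inspected directly with calicoctl's built-in diagnostics (standard commands, not yet run in this thread):

calicoctl ipam show --show-blocks   # allocation blocks and usage per pool
calicoctl ipam check                # scans the cluster for leaked or orphaned allocations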