ENABLE_V6_EGRESS / ENABLE_V4_EGRESS flags not applied to init containers of aws-node pods when set via Terraform
Description
When the ENABLE_V6_EGRESS and ENABLE_V4_EGRESS flags are enabled via Terraform (ENABLE_V6_EGRESS on an IPv4 cluster, ENABLE_V4_EGRESS on an IPv6 cluster), the setting is applied only to the aws-node container of the aws-node pods and not to the aws-vpc-cni-init init container.
Versions
Module version [Required]:

.
├── provider[registry.terraform.io/hashicorp/aws] >= 4.47.0
├── provider[registry.terraform.io/hashicorp/helm] >= 2.9.0
├── provider[registry.terraform.io/hashicorp/kubernetes] >= 2.20.0
├── provider[terraform.io/builtin/terraform]
├── module.eks
│   ├── provider[registry.terraform.io/hashicorp/aws] >= 4.57.0
│   ├── provider[registry.terraform.io/hashicorp/tls] >= 3.0.0
│   ├── provider[registry.terraform.io/hashicorp/kubernetes] >= 2.10.0
│   ├── provider[registry.terraform.io/hashicorp/time] >= 0.9.0
│   ├── module.eks_managed_node_group
│   │   ├── provider[registry.terraform.io/hashicorp/aws] >= 4.57.0
│   │   └── module.user_data
│   │       └── provider[registry.terraform.io/hashicorp/cloudinit] >= 2.0.0
│   ├── module.fargate_profile
│   │   └── provider[registry.terraform.io/hashicorp/aws] >= 4.57.0
│   ├── module.kms
│   │   └── provider[registry.terraform.io/hashicorp/aws] >= 4.33.0
│   └── module.self_managed_node_group
│       ├── provider[registry.terraform.io/hashicorp/aws] >= 4.57.0
│       └── module.user_data
│           └── provider[registry.terraform.io/hashicorp/cloudinit] >= 2.0.0
Terraform version: 1.9.4
EKS: 1.30
Reproduction Code
Steps to reproduce the behavior:
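A minimal reproduction sketch, assuming the flag is passed through the terraform-aws-eks module's cluster_addons block for the vpc-cni addon (the module version pin, cluster name, and VPC/subnet inputs below are placeholders):

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0" # placeholder pin

  cluster_name    = "ipvfour"
  cluster_version = "1.30"

  vpc_id     = var.vpc_id     # placeholder
  subnet_ids = var.subnet_ids # placeholder

  cluster_addons = {
    vpc-cni = {
      most_recent = true
      configuration_values = jsonencode({
        env = {
          # Shows up on the aws-node container, but not on the
          # aws-vpc-cni-init init container (see describe output below)
          ENABLE_V6_EGRESS = "true"
        }
      })
    }
  }
}

Apply the configuration, then describe any aws-node pod and compare the environment of the aws-vpc-cni-init init container with that of the aws-node container.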
Expected behavior
The ENABLE_V6_EGRESS flag should be applied to both the aws-node container and its init container (aws-vpc-cni-init).
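For reference, a sketch of how the init container environment might need to be set explicitly, assuming the vpc-cni addon's advanced configuration schema exposes a separate init.env map (this can be checked with aws eks describe-addon-configuration --addon-name vpc-cni --addon-version <version>; the init.env keys below are an assumption, not verified against this module version):

  cluster_addons = {
    vpc-cni = {
      most_recent = true
      configuration_values = jsonencode({
        env = {
          ENABLE_V6_EGRESS = "true" # aws-node container
        }
        # Assumption: init.env targets the aws-vpc-cni-init container
        init = {
          env = {
            ENABLE_V6_EGRESS = "true"
          }
        }
      })
    }
  }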
kubectl describe pod on aws-node
kubectl describe pod -n kube-system aws-node-66zp7 --context="$CTX_CLUSTER_1"
Name:                 aws-node-66zp7
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      aws-node
Node:                 ip-10-5-12-183.us-west-2.compute.internal/10.5.12.183
Start Time:           Sun, 24 Nov 2024 13:27:32 -0800
Labels:               app.kubernetes.io/instance=aws-vpc-cni
                      app.kubernetes.io/name=aws-node
                      controller-revision-hash=c5c74b48c
                      k8s-app=aws-node
                      pod-template-generation=2
Annotations:          <none>
Status:               Running
IP:                   10.5.12.183
IPs:
  IP:           10.5.12.183
Controlled By:  DaemonSet/aws-node
Init Containers:
  aws-vpc-cni-init:
    Container ID:   containerd://351d66d6b20fb593635ddcd01c1d9a685387409b2f4808557395ea68344bdbec
    Image:          602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni-init:v1.19.0-eksbuild.1
    Image ID:       602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni-init@sha256:ce36e6fc8457a3c79eab29ad7ca86ebc9220056c443e15502eeab7ceeef8496f
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 24 Nov 2024 13:27:32 -0800
      Finished:     Sun, 24 Nov 2024 13:27:32 -0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  25m
    Environment:
      DISABLE_TCP_EARLY_DEMUX:  false
      ENABLE_IPv6:              false
    Mounts:
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c8cnf (ro)
Containers:
  aws-node:
    Container ID:   containerd://e6add8bd87bc629d349589811f4e5e02aecb32aa7be82b448d05039c51a986fc
    Image:          602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.19.0-eksbuild.1
    Image ID:       602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni@sha256:efada7e5222a3376dc170b43b569f4dea762fd58186467c233b512bd6ab5415b
    Port:           61678/TCP
    Host Port:      61678/TCP
    State:          Running
      Started:      Sun, 24 Nov 2024 13:27:33 -0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  25m
    Liveness:   exec [/app/grpc-health-probe -addr=:50051 -connect-timeout=5s -rpc-timeout=5s] delay=60s timeout=10s period=10s #success=1 #failure=3
    Readiness:  exec [/app/grpc-health-probe -addr=:50051 -connect-timeout=5s -rpc-timeout=5s] delay=1s timeout=10s period=10s #success=1 #failure=3
    Environment:
      ADDITIONAL_ENI_TAGS:                    {}
      ANNOTATE_POD_IP:                        false
      AWS_VPC_CNI_NODE_PORT_SUPPORT:          true
      AWS_VPC_ENI_MTU:                        9001
      AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG:     false
      AWS_VPC_K8S_CNI_EXTERNALSNAT:           false
      AWS_VPC_K8S_CNI_LOGLEVEL:               DEBUG
      AWS_VPC_K8S_CNI_LOG_FILE:               /host/var/log/aws-routed-eni/ipamd.log
      AWS_VPC_K8S_CNI_RANDOMIZESNAT:          prng
      AWS_VPC_K8S_CNI_VETHPREFIX:             eni
      AWS_VPC_K8S_PLUGIN_LOG_FILE:            /var/log/aws-routed-eni/plugin.log
      AWS_VPC_K8S_PLUGIN_LOG_LEVEL:           DEBUG
      CLUSTER_ENDPOINT:                       https://6C97BC22D3D9E438B5D00146C4CB22C0.gr7.us-west-2.eks.amazonaws.com
      CLUSTER_NAME:                           ipvfour
      DISABLE_INTROSPECTION:                  false
      DISABLE_METRICS:                        false
      DISABLE_NETWORK_RESOURCE_PROVISIONING:  false
      ENABLE_IPv4:                            true
      ENABLE_IPv6:                            false
      ENABLE_POD_ENI:                         false
      ENABLE_PREFIX_DELEGATION:               false
      ENABLE_SUBNET_DISCOVERY:                true
      ENABLE_V6_EGRESS:                       true
      NETWORK_POLICY_ENFORCING_MODE:          standard
      VPC_CNI_VERSION:                        v1.19.0
      VPC_ID:                                 vpc-0e51afbabf731f741
      WARM_ENI_TARGET:                        1
      WARM_PREFIX_TARGET:                     1
      MY_NODE_NAME:                           (v1:spec.nodeName)
      MY_POD_NAME:                            aws-node-66zp7 (v1:metadata.name)
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /host/var/log/aws-routed-eni from log-dir (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/aws-node from run-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c8cnf (ro)
  aws-eks-nodeagent:
    Container ID:  containerd://b3846669666be5c8cb87cd0523e44f17021eaff9562702fc9f14b432d5b4c66e
    Image:         602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-network-policy-agent:v1.1.5-eksbuild.1
    Image ID:      602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-network-policy-agent@sha256:f3280f090b6c5d3128357d8710db237931f5e1089e8017ab3d9cece429d77954
    Port:          <none>
    Host Port:     <none>
    Args:
      --enable-ipv6=false
      --enable-network-policy=false
      --enable-cloudwatch-logs=false
      --enable-policy-event-logs=false
      --log-file=/var/log/aws-routed-eni/network-policy-agent.log
      --metrics-bind-addr=:8162
      --health-probe-bind-addr=:8163
      --conntrack-cache-cleanup-period=300
    State:          Running
      Started:      Sun, 24 Nov 2024 13:27:33 -0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  25m
    Environment:
      MY_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /host/opt/cni/bin from cni-bin-dir (rw)
      /sys/fs/bpf from bpf-pin-path (rw)
      /var/log/aws-routed-eni from log-dir (rw)
      /var/run/aws-node from run-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c8cnf (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  bpf-pin-path:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/bpf
    HostPathType:
  cni-bin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:
  cni-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:
  log-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/aws-routed-eni
    HostPathType:  DirectoryOrCreate
  run-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/aws-node
    HostPathType:  DirectoryOrCreate
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  kube-api-access-c8cnf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:                      <none>
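For a quicker check than the full describe output, the init container environment can be read directly with a standard kubectl jsonpath query (same pod and context as above):

kubectl get pod aws-node-66zp7 -n kube-system --context="$CTX_CLUSTER_1" \
  -o jsonpath='{.spec.initContainers[?(@.name=="aws-vpc-cni-init")].env}'

This returns only DISABLE_TCP_EARLY_DEMUX and ENABLE_IPv6; ENABLE_V6_EGRESS is absent, while the same query against .spec.containers shows it on the aws-node container.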