Describe the bug
The port range backend security group rules are not generated when creating an Ingress with `ingressClassName: alb`.
The frontend security group, k8s-traffic--, was auto-generated and attached to the ALB, and the -node- security group is attached to the EC2 instances.
However, the port range backend security group rule (with description `elbv2.k8s.aws/targetGroupBinding=shared`) does not exist in the -node- security group.
Because these rules are missing, the LB target health becomes "Unhealthy" and I cannot access the service. When I manually add a security group rule allowing traffic from the frontend LB security group to the Kubernetes node security group, the target becomes healthy and the service becomes accessible.
Here are the generated backend security group rules in the -node- security group; note that the port range rule (the one with description `elbv2.k8s.aws/targetGroupBinding=shared`) is absent:
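The manual workaround can be sketched as follows (a hedged sketch, not the controller's own logic: the security group IDs and port range below are placeholders, and the real values should be the auto-generated frontend k8s-traffic security group, the node security group, and the target group's traffic port):

```shell
# Sketch of the manual workaround: allow the ALB frontend security group
# to reach the node security group. Both SG IDs and the port range are
# placeholders; substitute the actual IDs from your cluster.
aws ec2 authorize-security-group-ingress \
  --group-id sg-NODE_SECURITY_GROUP \
  --protocol tcp \
  --port 0-65535 \
  --source-group sg-FRONTEND_SECURITY_GROUP
```

This mirrors the rule the controller would normally create with description `elbv2.k8s.aws/targetGroupBinding=shared`.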
| Security group rule ID | Type | Protocol | Port range | Source | Description |
|---|---|---|---|---|---|
| sgr-07aeb2e56a71192ec | DNS (TCP) | TCP | 53 | sg-0292e3d0726f7e77a / cicd-eks-dev-node-20240603084514546900000002 | Node to node CoreDNS |
| sgr-07af79604c3736ae4 | Custom TCP | TCP | 9443 | sg-0a0566b426cf8006b / cicd-eks-dev-cluster-20240603084516488800000003 | Cluster API to node 9443/tcp webhook |
| sgr-0f1fea45e2682398f | Custom TCP | TCP | 10250 | sg-0a0566b426cf8006b / cicd-eks-dev-cluster-20240603084516488800000003 | Cluster API to node kubelets |
| sgr-03e08966c55d3b57f | Custom TCP | TCP | 1025 - 65535 | sg-0292e3d0726f7e77a / cicd-eks-dev-node-20240603084514546900000002 | Node to node ingress on ephemeral ports |
| sgr-087ea4cca9c431c6d | Custom TCP | TCP | 6443 | sg-0a0566b426cf8006b / cicd-eks-dev-cluster-20240603084516488800000003 | Cluster API to node 6443/tcp webhook |
| sgr-0b8221fe2ca8b83da | Custom TCP | TCP | 4443 | sg-0a0566b426cf8006b / cicd-eks-dev-cluster-20240603084516488800000003 | Cluster API to node 4443/tcp webhook |
| sgr-0b06cb8157f2c26a9 | Custom TCP | TCP | 8443 | sg-0a0566b426cf8006b / cicd-eks-dev-cluster-20240603084516488800000003 | Cluster API to node 8443/tcp webhook |
| sgr-0188ff71a51b337db | HTTPS | TCP | 443 | sg-0a0566b426cf8006b / cicd-eks-dev-cluster-20240603084516488800000003 | Cluster API to node groups |
| sgr-0d8375e3a42c3d671 | DNS (UDP) | UDP | 53 | sg-0292e3d0726f7e77a / cicd-eks-dev-node-20240603084514546900000002 | Node to node CoreDNS UDP |
```
$ kubectl get events -A --field-selector type!=Normal
NAMESPACE   LAST SEEN   TYPE      REASON                   OBJECT                                                MESSAGE
sonarqube   11m         Warning   FailedNetworkReconcile   targetgroupbinding/k8s-sonarqub-sonarqub-07de3f6bf7   expected exactly one securityGroup tagged with kubernetes.io/cluster/cicd-eks-dev for eni eni-066f0709ddc6e0811, got: [sg-0292e3d0726f7e77a sg-0609012206beefad0] (clusterName: cicd-eks-dev)
```
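Per the `FailedNetworkReconcile` event above, the controller expects exactly one security group on the node ENI tagged with `kubernetes.io/cluster/cicd-eks-dev`, but found two. A possible way to confirm which attached security groups carry that tag (a sketch assuming AWS CLI access; the ENI and SG IDs are copied from the event):

```shell
# List the security groups attached to the ENI from the error message.
aws ec2 describe-network-interfaces \
  --network-interface-ids eni-066f0709ddc6e0811 \
  --query 'NetworkInterfaces[0].Groups'

# Inspect the tags on the two conflicting security groups to see which
# ones carry the kubernetes.io/cluster/cicd-eks-dev tag.
aws ec2 describe-security-groups \
  --group-ids sg-0292e3d0726f7e77a sg-0609012206beefad0 \
  --query 'SecurityGroups[].{Id:GroupId,Tags:Tags}'
```

If both groups are tagged, removing the cluster tag from one of them may let the controller resolve a single backend security group and create the missing rule.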
Steps to reproduce
Expected outcome
Application deployed with an AWS ALB created, and the target health status is 'Healthy'.
Environment
Additional Context:
Ingress settings in Helm values:
Log