kubernetes-sigs / aws-load-balancer-controller

A Kubernetes controller for Elastic Load Balancers
https://kubernetes-sigs.github.io/aws-load-balancer-controller/
Apache License 2.0

external-managed-tags gets ignored, custom NLB Listener still gets deleted in reconciliation #3794

Open enobil opened 2 months ago

enobil commented 2 months ago

Describe the bug
My goal is to create an additional NLB listener on port 22 on an NLB created by the AWS Load Balancer Controller. I'm using the external-managed-tags configuration to prevent reconciliation from deleting this additional listener.

I'm passing the configuration as --external-managed-tags=custom-managed. I verified the arguments reach the pod successfully (screenshot attached).

I made sure my custom NLB listener has a tag named "custom-managed" with the value "true". However, the controller's reconciliation still deletes the custom listener. The controller pod log shows:

{"level":"info","ts":"2024-08-02T09:28:07Z","logger":"controllers.service","msg":"deleted listener","arn":"arn:aws:elasticloadbalancing:eu-central-1:.......:listener/net/k8s-myproject-lidarnlb-c389e979a2/8ca53803cfcbeea6/06fba5d2d9f376d1"}

Steps to reproduce
Install the AWS Load Balancer Controller via the aws-load-balancer-controller Helm chart:

helm upgrade "${HELM_RELEASE}" "artifactory/${HELM_AWSLBC_CHART_NAME}" \
  --install \
  --namespace "${EKS_AWSLBC_NAMESPACE}" \
  --version "${HELM_AWSLBC_CHART_VERSION}" \
  --set "clusterName=${EKS_CLUSTER}" \
  --set "serviceAccount.create=false" \
  --set "serviceAccount.name=${EKS_AWSLBC_SERVICE_ACCOUNT_NAME}" \
  --set "image.repository=${DOCKER_IMAGE_REGISTRY}/${DOCKER_AWSLBC_IMAGE_NAME}" \
  --set "imagePullSecrets[0].name=${DOCKER_IMAGE_PULL_SECRET_NAME}" \
  --set "externalManagedTags[0]=custom-managed" \
  --set "syncPeriod=0h10m0s" \
  --atomic \
  --wait \
  --timeout 5m0s

Please pay attention to the --set "externalManagedTags[0]=custom-managed" part. I also tried --set "externalManagedTags=custom-managed", which gives the same result. I also reduced the sync period to 10 minutes so I don't have to wait 10 hours every time I test.

Then create an NLB by creating a Service like the one below:

apiVersion: v1
kind: Service
metadata:
  namespace: {{ .Release.Namespace }}
  name: lidar-nlb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "80"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ .Values.tls.myCertArn }}
spec:
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      # TLS termination happens at the NLB
      targetPort: 80
    # SSH listener is added via CloudFormation in reverse-ssh.yaml
  selector:
    app: lidar-reverse-proxy

So this NLB by default will only have listeners for port 80 and port 443.
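The desired listener set follows directly from the Service's spec.ports. A minimal sketch of that mapping (illustrative only, not the controller's actual code):

```python
# Ports declared in the Service manifest above; the controller creates
# one NLB listener per Service port (sketch, not the real implementation).
service_ports = [
    {"name": "http", "protocol": "TCP", "port": 80, "targetPort": 80},
    {"name": "https", "protocol": "TCP", "port": 443, "targetPort": 80},
]

desired_listener_ports = {p["port"] for p in service_ports}
print(sorted(desired_listener_ports))  # [80, 443]
```

A manually created port 22 listener is therefore never part of this desired set.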

Then create the additional listener on port 22. I use CloudFormation as below, but it doesn't matter how it is created; the AWS CLI or the AWS console work too:

  NlbListener2:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref NlbArn
      Port: 22
      Protocol: TCP
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref TargetGroup
      # TODO Move tags to here from the shellscript
      # Whenever CFN supports it, see https://github.com/aws-cloudformation/cloudformation-coverage-roadmap/issues/1460

After the CloudFormation deployment I run a small AWS CLI script, essentially:

aws elbv2 add-tags \
  --resource-arns $LISTENER_ARN \
  --tags "Key=custom-managed,Value=true" "Key=Name,Value=ReverseSSH-TargetGroup-${STAGE}"

Then I verified the tag is properly set on the port 22 listener (screenshot attached).
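The tag can also be verified from the CLI. The snippet below parses a response in the shape returned by `aws elbv2 describe-tags --resource-arns $LISTENER_ARN` (the ARN and values are placeholders):

```python
import json

# Sample output shaped like `aws elbv2 describe-tags`;
# ARN and tag values are placeholders, not the issue's real resources.
sample_output = json.loads("""
{
  "TagDescriptions": [
    {
      "ResourceArn": "arn:aws:elasticloadbalancing:eu-central-1:123456789012:listener/net/example/1111/2222",
      "Tags": [
        {"Key": "custom-managed", "Value": "true"},
        {"Key": "Name", "Value": "ReverseSSH-TargetGroup-dev"}
      ]
    }
  ]
}
""")

tags = {t["Key"]: t["Value"] for t in sample_output["TagDescriptions"][0]["Tags"]}
print(tags.get("custom-managed"))  # true
```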

Wait for 10-20 minutes, until the next time reconciliation happens.

The port 22 listener will be gone.

Expected outcome
The port 22 listener in the reproduction steps shouldn't be deleted, per the documentation and the stated purpose of the external-managed-tags configuration feature.

Environment

Additional Context:

M00nF1sh commented 2 months ago

@enobil We currently don't support partially managing an LB resource. The --external-managed-tags=custom-managed command-line flag is for a different feature: the controller will preserve the custom-managed AWS tag on AWS resources it created, while otherwise reconciling all tags on those resources.
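The distinction can be sketched as two independent reconciliation steps: tag reconciliation, where tags listed in --external-managed-tags are left untouched, and resource reconciliation, where any listener outside the desired set is deleted regardless of its tags. This is an illustrative sketch, not the controller's implementation:

```python
def reconcile_tags(current_tags, desired_tags, external_managed):
    """Tags whose keys are in external_managed are preserved as-is;
    every other tag is forced to the desired state (stale ones dropped)."""
    result = {k: v for k, v in current_tags.items() if k in external_managed}
    result.update(desired_tags)
    return result

def reconcile_listeners(current_ports, desired_ports):
    """Resource reconciliation ignores tags entirely: any listener
    whose port is not in the desired set gets deleted."""
    return current_ports - desired_ports

preserved = reconcile_tags(
    {"custom-managed": "true", "stale": "x"},
    {"elbv2.k8s.aws/cluster": "my-cluster"},
    external_managed={"custom-managed"},
)
print(preserved)  # {'custom-managed': 'true', 'elbv2.k8s.aws/cluster': 'my-cluster'}

# The tag survives tag reconciliation, but the listener does not
# survive resource reconciliation:
print(reconcile_listeners({22, 80, 443}, {80, 443}))  # {22}
```

In other words, the flag only changes which tags the controller leaves alone; it has no effect on whether a listener it did not create is deleted.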

M00nF1sh commented 2 months ago

/kind feature
Support partially managing an LB resource.