kubernetes / cloud-provider-aws

Cloud provider for AWS
https://cloud-provider-aws.sigs.k8s.io/
Apache License 2.0

loadBalancerSourceRanges changes not propagated to the SG #580

Open · pasdam opened this issue 1 year ago

pasdam commented 1 year ago

What happened: We updated the `loadBalancerSourceRanges` property of an existing service, but the new IPs were not propagated to the SG; as a result, we can't connect from the new IPs.

What you expected to happen: The new IPs should be propagated to the node security group.

How to reproduce it (as minimally and precisely as possible):

  1. Create a `LoadBalancer` service with the `service.beta.kubernetes.io/aws-load-balancer-type: nlb` annotation and one element in `loadBalancerSourceRanges`.
  2. Check the SG in the cloud.
  3. Add a second element to `loadBalancerSourceRanges`.
  4. Check the SG in the cloud again; the new CIDR is missing. (A scripted version of these steps is sketched below.)
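
For reference, a minimal sketch of these steps. All names, CIDRs, and the SG ID are placeholders, not values from our setup:

```sh
# 1. Create a LoadBalancer service with the NLB annotation and one allowed CIDR.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: echo
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: echo
  ports:
  - port: 80
    targetPort: 8080
  loadBalancerSourceRanges:
  - 203.0.113.0/24
EOF

# 2. Inspect the node security group (placeholder SG ID).
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[].IpPermissions'

# 3. Add a second CIDR to loadBalancerSourceRanges.
kubectl patch service echo --type merge \
  -p '{"spec":{"loadBalancerSourceRanges":["203.0.113.0/24","198.51.100.0/24"]}}'

# 4. Re-check the SG: per this report, the second CIDR never shows up.
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[].IpPermissions'
```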

Anything else we need to know?:

Environment:

/kind bug

k8s-ci-robot commented 1 year ago

This issue is currently awaiting triage.

If cloud-provider-aws contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
pasdam commented 1 year ago

It's similar to #36, but I couldn't reopen that one.

kmala commented 1 year ago

I couldn't reproduce the issue. Can you share the service specification you are using?

pasdam commented 1 year ago

Do you mean this?

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: gloo
    meta.helm.sh/release-namespace: gloo-system
    service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "5"
    service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: <s3-name>
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: api
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=staging,Name=api,GitHubRepo=gloo
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /envoy-hc
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: HTTP
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "6"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "2"
    service.beta.kubernetes.io/aws-load-balancer-name: api
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <ACM certificate ARN>
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS-1-2-Ext-2018-06
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  creationTimestamp: "2023-03-09T06:51:29Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: gloo
    app.kubernetes.io/managed-by: Helm
    gateway-proxy-id: gateway-proxy-api
    gloo: gateway-proxy
  name: gateway-proxy-api
  namespace: gloo-system
  resourceVersion: "217407260"
  uid: f41fc3a1-fda2-4668-b90b-dc0866b5f5a6
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: <cluster-ip>
  clusterIPs:
  - <cluster-ip>
  externalTrafficPolicy: Local
  healthCheckNodePort: 32392
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerSourceRanges:
  - <list-of-whitelisted-ips>
  ports:
  - name: http
    nodePort: 32344
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 31099
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    gateway-proxy: live
    gateway-proxy-id: gateway-proxy-api
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: <LB-host-name>
```
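
In case it helps, this is how one could compare what the Service requests against what is actually on the node security group (the SG ID below is a placeholder):

```sh
# CIDRs the Service expects to be whitelisted.
kubectl -n gloo-system get service gateway-proxy-api \
  -o jsonpath='{.spec.loadBalancerSourceRanges}'

# CIDRs actually present on the node security group (placeholder SG ID).
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[].IpPermissions[].IpRanges[].CidrIp'
```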
kmala commented 1 year ago

Thanks for sharing, but unfortunately I couldn't reproduce it with a similar config. Is the issue consistent for you?
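
If it happens again, the controller logs should show whether the security-group reconciliation ran or errored. A rough way to check, assuming an external cloud-controller-manager deployed in `kube-system` (the pod label and namespace depend on your installation):

```sh
# Search the AWS cloud controller manager logs for SG reconciliation messages
# (label and namespace are assumptions; adjust for your deployment).
kubectl -n kube-system logs -l k8s-app=aws-cloud-controller-manager --tail=1000 \
  | grep -iE 'security group|source range'
```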

pasdam commented 1 year ago

I can't easily test it now, but yes, it was consistent when we were looking into it. We ended up recreating the service, which was annoying and in some cases might even be impossible.
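
For anyone hitting this in the meantime, a possible stopgap instead of recreating the service is to authorize the missing CIDR on the node SG by hand. A sketch, where the SG ID and CIDR are placeholders and the port is the HTTPS node port from the spec above; note the cloud provider may revert manually added rules on a later sync:

```sh
# Manually whitelist the new CIDR on the node security group (stopgap only).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 31099 \
  --cidr 198.51.100.0/24
```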

k8s-triage-robot commented 7 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 4 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

pasdam commented 4 months ago

/remove-lifecycle stale

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 3 weeks ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten