kubernetes / cloud-provider-aws

Cloud provider for AWS
https://cloud-provider-aws.sigs.k8s.io/
Apache License 2.0

AWS node/instance security group - misuse of cluster tag #27

Closed Benjamin-Dobell closed 4 years ago

Benjamin-Dobell commented 5 years ago

Cross post of k8s issue. Aside: Is there an official policy on where cloud provider issues should be opened?


The AWS cloud provider cluster tag, which defines ownership semantics, is presently being used for an unrelated purpose: identifying which instance security groups should be updated to allow inbound load balancer traffic.

This means we (or our controllers) are unable to attach additional security groups to our instances (without "leaking" security group resources).

Please refer to the upstream issue for further details kubernetes/kubernetes#73906

Pratima commented 5 years ago

Completely agree. Would like to bump this post. We can't use Istio NLB ingress with our worker pools that have multiple security groups. We use the Terraform AWS EKS module to set up our cluster.

Benjamin-Dobell commented 5 years ago

Looks like the cluster-api-provider-aws has new tags:

Labels for Cluster API managed infrastructure and cloud-provider managed infrastructure overlapped. The breaking change introduces a new label for Cluster API to use as well as a tool to convert labels on existing clusters to the new format.

I'm still using Kops, but keen to migrate.

However, it looks like the cloud-provider still documents the owned cluster tag as being tied to the lifecycle of the cluster:

    // ResourceLifecycleOwned is the value we use when tagging resources to indicate
    // that the resource is considered owned and managed by the cluster,
    // and in particular that the lifecycle is tied to the lifecycle of the cluster.
    ResourceLifecycleOwned = "owned"

Perhaps the intention is that this tag now signifies ownership by the cloud-provider (rather than the cluster)?

Would appreciate it if a maintainer could chime in clarifying the situation.

manvendra-singh0x7cd commented 5 years ago

@Benjamin-Dobell Any updates on this?

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 4 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 4 years ago

@fejta-bot: Closing this issue.
