kubernetes-sigs / aws-load-balancer-controller

A Kubernetes controller for Elastic Load Balancers
https://kubernetes-sigs.github.io/aws-load-balancer-controller/
Apache License 2.0

[v2] Overly permissive/redundant IAM permission #1647

Open mike-stewart opened 4 years ago

mike-stewart commented 4 years ago

Related to #1302. Reviewing the required IAM policy for the v2 controller, it appears that there is some duplication.

There are two statements granting "ec2:AuthorizeSecurityGroupIngress" and "ec2:RevokeSecurityGroupIngress": one more restricted statement that checks for resource tags, and one more permissive statement that does not.

Can the more permissive one be deleted in favour of the more restricted one? Since AuthorizeSecurityGroupIngress is a fairly sensitive permission, it would be great to be able to lock that down further.

M00nF1sh commented 4 years ago

@mike-stewart we lay the permissions out this way because they serve different purposes and can be scoped down differently (we are planning a guide on how to further scope down these permissions). The first statement (without resource tags) lets the controller modify worker node security groups; it can be further scoped down by vpc-id and the kubernetes.io/cluster/cluster-name: owned/shared tag. The second statement (with resource tags) lets the controller modify the security group created for the ALB; it can be further scoped down by vpc-id and the elbv2.k8s.aws/cluster: cluster-name tag.
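
For illustration, further scoping down the first (worker node) statement could look something like this (just a sketch, not a tested policy; REGION, ACCOUNT-ID, VPC-ID, and my-cluster are placeholders):

        {
            "Effect": "Allow",
            "Action": [
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:RevokeSecurityGroupIngress"
            ],
            "Resource": "arn:aws:ec2:*:*:security-group/*",
            "Condition": {
                "ArnEquals": {
                    "ec2:Vpc": "arn:aws:ec2:REGION:ACCOUNT-ID:vpc/VPC-ID"
                },
                "StringEquals": {
                    "aws:ResourceTag/kubernetes.io/cluster/my-cluster": ["owned", "shared"]
                }
            }
        }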

mike-stewart commented 4 years ago

@M00nF1sh Makes sense! Is there an existing issue for the guide on scoping down the permissions? If so, we could close this issue in favour of that one.

M00nF1sh commented 4 years ago

@mike-stewart we don't have a guide on how to further scope down the permissions yet, but we'll work on one soon. Will update this issue once we have it :D

zhujik commented 3 years ago

I'll just hijack this issue to tell you my findings, because we are migrating to v2 and the IAM permissions are a bit different.

There are some IAM permissions that were used in the previous v1 version but are now missing, namely:

For the migration process, for the time being, we have re-included them in the IAM policy to avoid any unforeseen consequences. So I would just kindly ask whether you left them out intentionally or whether they might have been omitted accidentally, because the git history of iam-policy.json does not reflect the file's history with regard to the v1 version.

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

mike-stewart commented 3 years ago

/remove-lifecycle stale

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

mike-stewart commented 3 years ago

/remove-lifecycle stale

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

mike-stewart commented 3 years ago

/remove-lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

mike-stewart commented 2 years ago

/remove-lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

mike-stewart commented 2 years ago

/remove-lifecycle stale

kishorj commented 2 years ago

@mike-stewart, please refer to the v2.4 live docs https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/deploy/installation/#setup-iam-role-for-service-accounts for scoping the IAM permissions.

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

mike-stewart commented 2 years ago

@kishorj @M00nF1sh Thanks for updating the documentation for the AuthorizeSecurityGroupIngress policy I originally asked about.

I have a few more questions about how to scope down this policy, in particular:

I think it would be great if the default policy packaged with this app could follow the principle of least privilege to the extent possible without affecting the out-of-the-box install experience of users. If you could broadly apply the "aws:ResourceTag" Null check conditions in the default policy, that would give users an easy path to change those tag conditions from Null check to StringEquals to really lock this down. As it stands, it's not clear if those can be safely scoped down without thorough testing.
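
For example, a statement that currently uses the Null check on the cluster tag could then be tightened to something like this (a sketch only; the action list is abbreviated and "my-cluster" is a placeholder, so this isn't the exact shipped policy):

        {
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:ModifyLoadBalancerAttributes",
                "elasticloadbalancing:DeleteLoadBalancer"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/elbv2.k8s.aws/cluster": "my-cluster"
                }
            }
        }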

/remove-lifecycle stale

kpersohn commented 2 years ago

I'm also curious whether it's safe to scope these to specific clusters, and if so, why not just make that the default? I'm more than happy to submit a PR if it would be a welcome contribution.

I saw the Null condition check on "aws:ResourceTag/kubernetes.io/cluster/CLUSTER-NAME": "false" recommended in the docs for adding/removing security group rules, and just applied it everywhere there were conditions on aws:ResourceTag/elbv2.k8s.aws/cluster. In basic testing this seems to work fine, but I'm not sure if there's a gotcha I haven't encountered yet.

I agree with @mike-stewart that it would be great if the out-of-the-box policy were scoped to ownership by a specific cluster name, either via StringLike on the value of the generic cluster tag or via Null conditions on the standard tag where the cluster name is in the key (I don't think it really matters which).

For example:

        {
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:AddTags",
                "elasticloadbalancing:RemoveTags"
            ],
            "Resource": [
                "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*",
                "arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*",
                "arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*"
            ],
            "Condition": {
                "Null": {
                    "aws:RequestTag/elbv2.k8s.aws/cluster": "true",
                    "aws:ResourceTag/elbv2.k8s.aws/cluster": "false",
                    "aws:RequestTag/kubernetes.io/cluster/CLUSTER-NAME": "true",
                    "aws:ResourceTag/kubernetes.io/cluster/CLUSTER-NAME": "false"
                }
            }
        },
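
The StringLike alternative on the value of the generic cluster tag would look something like this (again just a sketch; "my-cluster" is a placeholder and I haven't tested this variant):

        {
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:AddTags",
                "elasticloadbalancing:RemoveTags"
            ],
            "Resource": [
                "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*",
                "arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*",
                "arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*"
            ],
            "Condition": {
                "StringLike": {
                    "aws:ResourceTag/elbv2.k8s.aws/cluster": "my-cluster"
                }
            }
        },
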
k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

mike-stewart commented 2 years ago

/remove-lifecycle stale

johngmyers commented 2 years ago

Instead of documentation, perhaps we could provide a tool which takes a cluster name and generates a policy scoped to that cluster?

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

mike-stewart commented 1 year ago

/remove-lifecycle stale

grounded042 commented 1 year ago

👋 I agree with the sentiment here. It would be great if better direction were given on the IAM conditions. Not owning the code, it's hard to add conditions and know whether they'll eventually break something.

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 year ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1647#issuecomment-1635025798):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

mb-shorai commented 1 year ago

I guess the issue is still relevant and should be reopened; there is no information regarding restricting these overly permissive IAM permissions except for the single ec2:RevokeSecurityGroupIngress/ec2:AuthorizeSecurityGroupIngress statement.

cc @M00nF1sh @kishorj

mike-stewart commented 1 year ago

/reopen

k8s-ci-robot commented 1 year ago

@mike-stewart: Reopened this issue.

In response to [this](https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1647#issuecomment-1764430010):

> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

mike-stewart commented 1 year ago

/remove-lifecycle rotten

k8s-triage-robot commented 10 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 9 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 8 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 8 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1647#issuecomment-2028027178):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

mike-stewart commented 8 months ago

/reopen

k8s-ci-robot commented 8 months ago

@mike-stewart: Reopened this issue.

In response to [this](https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1647#issuecomment-2028028217):

> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

mike-stewart commented 8 months ago

/remove-lifecycle rotten

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

mike-stewart commented 5 months ago

/remove-lifecycle stale

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

mike-stewart commented 2 months ago

/remove-lifecycle stale