Open · mike-stewart opened this issue 4 years ago

Related to #1302. Reviewing the required IAM policy for the v2 controller, it appears that there is some duplication: there are two statements granting "ec2:AuthorizeSecurityGroupIngress" and "ec2:RevokeSecurityGroupIngress", one more restricted statement that checks for resource tags, and one more permissive statement that does not.
Can the more permissive one be deleted in favour of the more restricted one? Since AuthorizeSecurityGroupIngress is a fairly sensitive permission, it would be great to be able to lock it down further.
@mike-stewart
We lay out the permissions like this because they serve different purposes and can be scoped down differently (we are planning a guide on how to further scope down these permissions).
The first statement (without resourceTags) lets the controller modify the worker node security groups; it can be further scoped down by vpc-id and the kubernetes.io/cluster/cluster-name: owned/shared tag.
The second statement (with resourceTags) lets the controller modify the security group created for the ALB; it can be further scoped down by vpc-id and the elbv2.k8s.aws/cluster: cluster-name tag.
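To make this concrete, here is a minimal sketch of what those two statements might look like once scoped down this way. This is an illustration, not the policy we ship; REGION, ACCOUNT-ID, VPC-ID, and CLUSTER-NAME are placeholders to substitute for your environment:

    {
        "Effect": "Allow",
        "Action": [
            "ec2:AuthorizeSecurityGroupIngress",
            "ec2:RevokeSecurityGroupIngress"
        ],
        "Resource": "arn:aws:ec2:*:*:security-group/*",
        "Condition": {
            "ArnEquals": {
                "ec2:Vpc": "arn:aws:ec2:REGION:ACCOUNT-ID:vpc/VPC-ID"
            },
            "StringEquals": {
                "aws:ResourceTag/kubernetes.io/cluster/CLUSTER-NAME": ["owned", "shared"]
            }
        }
    },
    {
        "Effect": "Allow",
        "Action": [
            "ec2:AuthorizeSecurityGroupIngress",
            "ec2:RevokeSecurityGroupIngress"
        ],
        "Resource": "arn:aws:ec2:*:*:security-group/*",
        "Condition": {
            "ArnEquals": {
                "ec2:Vpc": "arn:aws:ec2:REGION:ACCOUNT-ID:vpc/VPC-ID"
            },
            "StringEquals": {
                "aws:ResourceTag/elbv2.k8s.aws/cluster": "CLUSTER-NAME"
            }
        }
    }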
@M00nF1sh Makes sense! Is there an existing issue for the guide on scoping down the permissions? If so, we could close this issue in favour of that one.
@mike-stewart We don't have a guide on how to further scope down the permissions yet, but we'll work on one soon and will update this issue once we have it :D
I'll just hijack this issue to share my findings, because we are migrating to v2 and the IAM permissions are a bit different.
There are some IAM permissions that were used in the previous v1 version but are now missing, namely:
For the migration process, we have re-included them in the IAM policy for the time being to avoid any unforeseen consequences. So I would just kindly ask whether you left them out intentionally or whether they might have been omitted accidentally, because the git history of iam_policy.json does not reflect the file's history relative to the v1 version.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@mike-stewart, please refer to the v2.4 live docs https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/deploy/installation/#setup-iam-role-for-service-accounts for scoping the IAM permissions.
@kishorj @M00nF1sh Thanks for updating the documentation for the AuthorizeSecurityGroupIngress policy I originally asked about.
I have a few more questions about how to scope down this policy. In particular, could the default policy include the
"Null": {"aws:ResourceTag/elbv2.k8s.aws/cluster": "false"}
condition to make it more secure by default? https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/9c79e45dd4b8cf94b97a1c6737bf1d47306460d0/docs/install/iam_policy.json#L68-L75
I think it would be great if the default policy packaged with this app could follow the principle of least privilege to the extent possible without affecting the out-of-the-box install experience for users. If you could broadly apply the "aws:ResourceTag" Null-check conditions in the default policy, that would give users an easy path to change those tag conditions from a Null check to StringEquals to really lock this down. As it stands, it's not clear whether those can be safely scoped down without thorough testing.
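To illustrate the path I mean (a hypothetical edit, not anything from the shipped policy; CLUSTER-NAME is a placeholder): a default Null-check condition such as

    "Condition": {
        "Null": {
            "aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
        }
    }

could later be tightened by an operator to

    "Condition": {
        "StringEquals": {
            "aws:ResourceTag/elbv2.k8s.aws/cluster": "CLUSTER-NAME"
        }
    }

so the statement only matches security groups already tagged as belonging to that specific cluster.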
/remove-lifecycle stale
I'm also curious whether it's safe to scope these to specific clusters, and if so, why not just make that the default? I'm more than happy to submit a PR if it would be a welcome contribution.
I saw the null condition check for "aws:ResourceTag/kubernetes.io/cluster/CLUSTER-NAME": "false" recommended in the docs for adding/removing security group rules, and just applied it everywhere there were conditions on aws:ResourceTag/elbv2.k8s.aws/cluster. In basic testing this seems to work fine, but I'm not sure whether there's a gotcha I haven't encountered yet.
I agree with @mike-stewart: it would be great if the out-of-the-box policy were scoped to ownership by a specific cluster name, either via StringLike on the value of the generic cluster tag or via null conditions on the standard tag where the cluster name is in the key (I don't think it really matters which).
For example:
    {
        "Effect": "Allow",
        "Action": [
            "elasticloadbalancing:AddTags",
            "elasticloadbalancing:RemoveTags"
        ],
        "Resource": [
            "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*",
            "arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*",
            "arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*"
        ],
        "Condition": {
            "Null": {
                "aws:RequestTag/elbv2.k8s.aws/cluster": "true",
                "aws:ResourceTag/elbv2.k8s.aws/cluster": "false",
                "aws:RequestTag/kubernetes.io/cluster/CLUSTER-NAME": "true",
                "aws:ResourceTag/kubernetes.io/cluster/CLUSTER-NAME": "false"
            }
        }
    },
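For the other option mentioned above, scoping by the value of the generic cluster tag, the condition on that same statement might instead look like this (again just a sketch; CLUSTER-NAME is a placeholder, and StringLike would also accept a wildcard pattern covering several similarly named clusters):

    "Condition": {
        "Null": {
            "aws:RequestTag/elbv2.k8s.aws/cluster": "true"
        },
        "StringLike": {
            "aws:ResourceTag/elbv2.k8s.aws/cluster": "CLUSTER-NAME"
        }
    }

i.e. the request may not modify the cluster tag itself, and the resource must already carry a matching elbv2.k8s.aws/cluster tag.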
Instead of documentation, perhaps we could provide a tool that takes a cluster name and generates a policy scoped to that cluster?
👋 I agree with the sentiment here. It would be great if better direction were given on the IAM conditions. Not owning the code, it's hard to add conditions and know whether they'll eventually break something.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
I think this issue is still relevant and should be reopened: there is no information about restricting these overly permissive IAM permissions except for the single ec2:RevokeSecurityGroupIngress/ec2:AuthorizeSecurityGroupIngress statement.
cc @M00nF1sh @kishorj
/reopen
@mike-stewart: Reopened this issue.
/remove-lifecycle rotten
/lifecycle stale
/lifecycle rotten
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
@mike-stewart: Reopened this issue.
/remove-lifecycle rotten
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale