mrichman opened this issue 5 years ago
Didn't read, you can only add roles and users.
I think this is a duplicate of #157 (which is probably a duplicate of another, honestly).
Can we re-open this as a feature request? Managing permissions would be significantly improved if we could add groups.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Is there any progress on this one? This is significant functionality for managing access to EKS for larger engineering groups. Right now it requires me to list out a bunch of users and add to the list every time someone new needs access.
With Terraform 0.12 this can easily be worked around, thanks to the for expression. Some very improvable code:
locals {
  k8s_admins = [
    for user in data.terraform_remote_state.iam.outputs.admin_members[0] :
    {
      user_arn = join("", ["arn:aws:iam::whatever:user/", user])
      username = user
      group    = "system:masters"
    }
  ]
  k8s_developers = [
    for user in data.terraform_remote_state.iam.outputs.developers_members[0] :
    {
      user_arn = join("", ["arn:aws:iam::whatever:user/", user])
      username = user
      group    = "system:developers-write"
    }
  ]
  k8s_map_users = concat(local.k8s_admins, local.k8s_developers)
}
It's even more fun when you don't have IAM users and everybody accesses an assumed role session via Okta.
I'm struggling to put the k8s_admins generated here into the configmap to apply later in automation, how did you manage to do that?
/remove-lifecycle stale
I'm struggling to put the k8s_admins generated here into the configmap to apply later in automation, how did you manage to do that?
Probably using jsonencode(local.k8s_admins).
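For what it's worth, jsonencode should work here because YAML is a superset of JSON, so aws-auth accepts a JSON-encoded list in mapUsers. A minimal sketch, assuming the locals from the earlier comment (note the ConfigMap expects the keys userarn, username, and groups, so the user_arn/group attributes may need renaming):

```hcl
# Sketch only: feed the generated user list into the aws-auth ConfigMap.
# jsonencode() output is valid YAML, so this is equivalent to yamlencode() here.
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapUsers = jsonencode(local.k8s_map_users)
  }
}
```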
I would also like to see adeakrvbd's question answered above. +1
Me too. It's really weird to not support IAM groups.
Please consider adding IAM group support for EKS. This would be the easiest way to manage user access control by far.
Until we have this, I have come up with a strategy using AssumeRole, which I describe in my blog post.
@prestonvanloon I know we discussed in a different thread, but I basically do the same thing that @amitsaha describes in his blog post, although mine seems a bit simpler:
I'll obviously automate the last step so people don't have to run the commands and set the keys, but yeah, that's pretty much it. It's not pretty, but it allows me to abstract user management into a group which is really the goal here since IAM Groups are still not supported.
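For anyone skimming, the AssumeRole workaround amounts to mapping one shared IAM role in aws-auth and letting the IAM group grant sts:AssumeRole on that role. A rough sketch of the mapRoles side (account ID, role name, and cluster group are placeholders):

```yaml
# Hypothetical aws-auth mapRoles entry: members of the IAM group assume
# this single role, so only the role itself needs to be mapped here.
- rolearn: arn:aws:iam::111122223333:role/eks-developers
  username: eks-developer:{{SessionName}}
  groups:
    - company:developers
```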
@adeakrvbd @jclynny I used yamlencode to get this working. You can see more code here: https://github.com/dockup/terraform-aws/commit/fd8c679a8bdea533a903d5cb12b8aa7d41c5b632#diff-a338da04c3bdfe4c0e6b5db98bc233bdR93
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
@adeakrvbd @jclynny here's a complete example of how I build the userMap in Terraform 0.12 based on IAM users bound to IAM groups.
data "aws_iam_group" "developer-members" {
  group_name = "developer"
}

data "aws_iam_group" "admin-members" {
  group_name = "admin"
}

locals {
  k8s_admins = [
    for user in data.aws_iam_group.admin-members.users :
    {
      user_arn = user.arn
      username = user.user_name
      groups   = ["system:masters"]
    }
  ]
  k8s_analytics_users = [
    for user in data.aws_iam_group.developer-members.users :
    {
      user_arn = user.arn
      username = user.user_name
      groups   = ["company:developer-users"]
    }
  ]
  k8s_map_users = concat(local.k8s_admins, local.k8s_analytics_users)
}

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = <<YAML
- rolearn: ${module.eks.eks_worker_node_role_arn}
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
YAML
    mapUsers    = yamlencode(local.k8s_map_users)
    mapAccounts = <<YAML
- "${data.aws_caller_identity.current.account_id}"
YAML
  }
}
The downside is that you will need to run "terraform apply" each time you add or remove users from the IAM groups, and that a user shouldn't be in more than one group at a time.
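For reference, with the locals above, yamlencode(local.k8s_map_users) renders something roughly like the following (Terraform quotes keys and sorts object attributes, so the exact output may differ slightly; also note that the ConfigMap actually expects the key userarn, as a later comment points out):

```yaml
- "groups":
  - "system:masters"
  "user_arn": "arn:aws:iam::123456789012:user/some-admin"
  "username": "some-admin"
```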
I came across this: https://github.com/ygrene/iam-eks-user-mapper. Maybe this is a viable workaround for you?
/kind feature
/lifecycle frozen
/remove-lifecycle stale
please reconsider support for IAM group!
I can get it working with either an IAM role or an IAM user with tf. However, for our use case, where we're trying to use HashiCorp Vault to grant dynamic time-bound access longer than 12 hours (the max session duration for the IAM-role-based approach), the capability to map a k8s group to an IAM group is key.
If your company is using SSO (via Okta, for example), there are no IAM users and everyone is using assumed roles with temporary credentials. This makes it impossible for our developers to use EKS in a sane way and hits enterprise customers the hardest.
I've created a cluster using @aws-cdk/aws-eks (I believe it would be same for quickstart). It creates a cluster with a dedicated role since EKS has a weird rule:
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html
Yesterday, a new console was deployed for EKS. It queries the cluster directly to get nodes and workloads: https://aws.amazon.com/blogs/containers/introducing-the-new-amazon-eks-console/
But I only see the following error with an IAM user:
Error loading Namespaces
Unauthorized: Verify you have access to the Kubernetes cluster
I've tried some ways, but I have some nitpicks for every one of them:
- mapRole -> but I have to apply eks:* policies to it, and switching to a role in the console is a little bit cumbersome.
- mapUser every IAM user that has 'eks:DescribeCluster' to some group, and bind them to the view ClusterRole -> every …
- mapAccount -> then I see:
  Error loading Namespaces
  namespaces is forbidden: User "arn:aws:iam::<accountId>:user/<userName>" cannot list resource "namespaces" in API group "" at the cluster scope
- mapAccount and bind every mapped account that has 'eks:DescribeCluster' to the view ClusterRole -> every …

I wish some other ways:
- mapAccount (would be too permissive?)

+1 for this feature!
+1 for this feature!
+1 for this feature!
+1
+1
Hey! After long research on this topic, we've decided to write a blog post covering this issue. In it, we cover most of the possible solutions and share the one that we found works best: [Enabling AWS IAM Group Access to an EKS Cluster Using RBAC](https://eng.grip.security/enabling-aws-iam-group-access-to-an-eks-cluster-using-rbac)
+1
+1
Hey hey, community. Five years have passed and we are still asking: when will this be ready? And when will AWS allow us to manage groups as normal teams?
@adeakrvbd @jclynny here's a complete example how I build the userMap in terraform 0.12 based on IAM users bound to IAM groups.
The downside is that you will need to run "terraform apply" each time you add or remove users from IAM groups and that one user shouldn't be in more than one group at a time.
It worked! I just had to change "user_arn" to "userarn". Thanks bro!
props to @guygrip for the inspo. tf for creating aws iam / k8s resources for group/role access. notes:
- uses the gavinbunney/kubectl provider, ver "1.14.0" (could probably use kubernetes' k8s_manifest)
- the config/aws-auth ConfigMap is managed via:
manage_aws_auth_configmap = true
aws_auth_roles = [
{
rolearn = aws_iam_role.cluster-admin-access.arn
username = local.eks-cluster-admin-role-name
groups = [
"system:masters",
"system:bootstrappers",
"system:nodes",
]
},
...
]
below creates:
data "aws_iam_policy_document" "cluster-admin-access" {
statement {
sid = "1"
effect = "Allow"
resources = ["*"]
actions = [
"eks:ListClusters",
"eks:DescribeAddonVersions",
"eks:CreateCluster"
]
}
statement {
sid = "2"
effect = "Allow"
resources = ["arn:aws:eks:${local.region}:${data.aws_caller_identity.current.account_id}:cluster/${module.eks.cluster_name}"]
actions = [
"eks:*"
]
}
}
resource "aws_iam_policy" "cluster-admin-access" {
name = "${local.application}-${local.environment}-eks-cluster-admin-access"
path = "/"
policy = data.aws_iam_policy_document.cluster-admin-access.json
tags = local.tags
}
resource "aws_iam_role" "cluster-admin-access" {
name = "${local.application}-${local.environment}-eks-cluster-admin-access"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
}
}
]
})
tags = local.tags
}
resource "aws_iam_role_policy_attachment" "cluster-admin-access" {
role = aws_iam_role.cluster-admin-access.name
policy_arn = aws_iam_policy.cluster-admin-access.arn
}
data "aws_iam_policy_document" "assume-eks-admin-role" {
statement {
sid = "1"
effect = "Allow"
resources = [aws_iam_role.cluster-admin-access.arn]
actions = ["sts:AssumeRole"]
}
}
resource "aws_iam_policy" "assume-eks-admin-role" {
name = "${local.application}-${local.environment}-eks-admin-assume-role"
path = "/"
policy = data.aws_iam_policy_document.assume-eks-admin-role.json
tags = local.tags
}
resource "aws_iam_group" "cluster-admin-access" {
name = "eks-${local.environment}-admin-access"
path = "/"
}
resource "aws_iam_group_policy_attachment" "cluster-admin-access" {
group = aws_iam_group.cluster-admin-access.name
policy_arn = aws_iam_policy.assume-eks-admin-role.arn
}
resource "kubectl_manifest" "iam-user-group-admin-cluster-role" {
yaml_body = <<-YAML
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ${local.eks-cluster-admin-role-name}
rules:
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["list"]
YAML
}
resource "kubectl_manifest" "iam-user-group-admin-cluster-role-binding" {
yaml_body = <<-YAML
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ${local.eks-cluster-admin-role-name}
subjects:
- kind: User
name: ${local.eks-cluster-admin-role-name}
namespace: kube-system
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: ${local.eks-cluster-admin-role-name}
apiGroup: rbac.authorization.k8s.io
YAML
}
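To actually reach the cluster after this setup, group members can use a kubeconfig that assumes the shared role when fetching a token. A sketch, where the cluster name and role ARN are placeholders:

```yaml
# Hypothetical kubeconfig user entry: fetch an EKS token while assuming
# the cluster-admin role that aws-auth maps to system:masters above.
users:
  - name: eks-cluster-admin
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws
        args:
          - eks
          - get-token
          - --cluster-name
          - my-cluster
          - --role-arn
          - arn:aws:iam::111122223333:role/myapp-dev-eks-cluster-admin-access
```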
There are ways to use Terraform or SSO, but providing group-level RBAC will have a huge effect in terms of EKS management.
Hello everyone, based on previous comments, I found a way to add the users in an IAM group to the EKS API. Terraform code:
locals {
  k8s_developers = toset([for user in data.aws_iam_group.developer_members.users : user.arn])
}

resource "aws_iam_group" "developers_group" {
  name = "developers_group_dev"
}

data "aws_iam_group" "developer_members" {
  group_name = aws_iam_group.developers_group.name
}

resource "aws_eks_access_entry" "developer" {
  for_each          = local.k8s_developers
  cluster_name      = var.cluster_name
  principal_arn     = each.value
  kubernetes_groups = ["read-only"]
}
Of course you have to create a policy for the IAM group; I just wanted to show a way to implement it. This doesn't use the ConfigMap, it goes through the API, so it will appear in the AWS EKS Console as an access entry. Be aware that the ClusterRole and ClusterRoleBinding need to be created inside the cluster, and they need to be mapped to the access entry's kubernetes_groups inside Terraform.
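If you'd rather not manage the ClusterRole/ClusterRoleBinding yourself, access entries can instead be paired with an AWS-managed access policy. A sketch using the same locals, where the choice of the view policy is an assumption:

```hcl
# Hypothetical alternative: associate the AWS-managed view policy with each
# access entry instead of binding a custom "read-only" group in-cluster.
resource "aws_eks_access_policy_association" "developer_view" {
  for_each      = local.k8s_developers
  cluster_name  = var.cluster_name
  principal_arn = each.value
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"

  access_scope {
    type = "cluster"
  }
}
```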
To apply the YAML manifests through Terraform:

data "kubectl_path_documents" "test" {
  pattern = "./manifests/rbac/*.yaml"
}

resource "kubectl_manifest" "config" {
  for_each  = toset(data.kubectl_path_documents.test.documents)
  yaml_body = each.value
}
Providing also the ClusterRole and ClusterRoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: developers-cluster-role
rules:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: developers-cl-role-binding
roleRef:
  kind: ClusterRole
  name: developers-cluster-role
  apiGroup: rbac.authorization.k8s.io
subjects:
I have an IAM user named Alice, and she's a member of the IAM group eks-admin. The following configuration works, but when I remove Alice from mapUsers, kubectl commands give me the error "error: You must be logged in to the server (Unauthorized)". Can't I add an IAM group to this ConfigMap, just like I can add a user or role?