Closed: genseb13011 closed this issue 1 year ago
IAM path prefixes are not supported in the aws-auth ConfigMap - you will need to strip the path prefix from the role ARN when adding it to the ConfigMap.
Path: https://github.com/clowdhaus/eks-reference-architecture/blob/f37390db1b38d154979cc1aeb4d72ab53929e847/inferentia/eks.tf#L2
How to strip: https://github.com/clowdhaus/eks-reference-architecture/blob/f37390db1b38d154979cc1aeb4d72ab53929e847/inferentia/eks.tf#L35
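A minimal sketch of that stripping step, assuming the SSO role ARN is supplied via var.eks_admin_sso_role_arn and carries the eu-west-1 path seen in this issue (the names here are illustrative, not the exact code from the linked file):

```hcl
locals {
  # The aws-auth ConfigMap only matches arn:aws:iam::<account-id>:role/<role-name>,
  # so the IAM path segment must be removed from the SSO role ARN before mapping it.
  eks_admin_sso_role_arn_stripped = replace(
    var.eks_admin_sso_role_arn,
    "aws-reserved/sso.amazonaws.com/eu-west-1/",
    ""
  )
}
```

The stripped value is what should end up in aws_auth_roles, e.g. rolearn = local.eks_admin_sso_role_arn_stripped.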
Thanks for your answer, it's working well now!
After removing "aws-reserved/sso.amazonaws.com/eu-west-1" from the ARN path, everything was OK.
Thanks again
Seb.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Hi,
I'm facing an issue trying to access my freshly deployed EKS cluster.
Running a kubectl command leads to the following error message: error: You must be logged in to the server (Unauthorized)
Context:
SSO is deployed in my organization, so I'm trying to access the cluster using the SSO role AWSReservedSSO_AWSAdministratorAccess.
When applying my Terragrunt configuration, I first assume the AWSReservedSSO_AWSAdministratorAccess role, then resources are created by assuming another role, "terragrunt-role". To sum up, the EKS cluster is created by the "terragrunt-role" role, which can be assumed by the AWSReservedSSO_AWSAdministratorAccess role. Since "terragrunt-role" is the only role authorized to access the cluster (because it is the creator), I've added the AWSReservedSSO_AWSAdministratorAccess ARN to the aws-auth ConfigMap using the following declaration:
```hcl
manage_aws_auth_configmap = true

aws_auth_roles = [
  {
    rolearn  = var.eks_admin_sso_role_arn
    username = "admin_sso_rolearn"
    groups   = ["system:masters"]
  },
]
```
and the provider declaration below:
_data "aws_eks_cluster_auth" "default" { name = var.eks_cluster_name }
provider "kubernetes" { host = module.eks.cluster_endpoint cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data) token = data.aws_eks_clusterauth.default.token }
The apply completes without returning any error, but I still can't access my cluster when assuming the AWSReservedSSO_AWSAdministratorAccess role.
As mentioned in this post (https://repost.aws/knowledge-center/eks-api-server-unauthorized-error), I see the symptom described below, but I don't understand why the issue persists after adding the role ARN to aws_auth_roles:
_"If the issue because your IAM entity isn't mapped in aws-auth ConfigMap, or is incorrectly mapped, then review the aws-auth ConfigMap. Make sure that the IAM entity is mapped correctly and meets the requirements that are listed in the You're not cluster creator section. In this case, the EKS authenticator logs look similar to the following
time="2022-12-28T15:37:19Z" level=warning msg="access denied" arn="arn:aws:iam::XXXXXXXXXX:role/admin-test-role" client="127.0.0.1:33384" error="ARN is not mapped" method=POST path=/authenticate"_
I'd like to understand why my issue persists while the Terraform deployment reports no error. Am I missing something? Is it a bug?
Versions:
EKS module:
```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.13.1"
}
```
Terraform: 1.3.6
Terragrunt: 0.38.9
EKS cluster: 1.24
kubectl: 1.27.1
Thanks
Seb.