the AWS provider you have shown is for creating AWS resources - however, your errors are occurring at the Kubernetes/cluster level. I don't see any Kubernetes or Helm providers (nor a reproduction), so it will be hard to say what is misconfigured
in general though - this seems to be an issue with your provider authentication and not with the module
yeah, so I'm guessing the Kubernetes provider isn't using the assume role from the main calling project
data "aws_eks_cluster" "eks" { name = module.eks.cluster_name depends_on = [ module.eks.eks_managed_node_groups, ] }
data "aws_eks_cluster_auth" "eks" { name = module.eks.cluster_name depends_on = [ module.eks.eks_managed_node_groups, ] }
provider "kubernetes" { host = data.aws_eks_cluster.eks.endpoint cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority.0.data) token = data.aws_eks_cluster_auth.eks.token }
I have configured it like so, but I'm not sure the data sources are using the assume role correctly from the provider in the main calling project.
I believe this is the equivalent of exec-ing out to the AWS CLI.
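For comparison, here is a rough sketch of that explicit exec-based setup (reusing the role ARN from the provider config quoted later in this thread), where `aws eks get-token` is told which role to assume via `--role-arn`:

```hcl
provider "kubernetes" {
  host                   = data.aws_eks_cluster.eks.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # --role-arn makes the CLI assume the role before minting the cluster token,
    # instead of falling back to whatever ambient credentials the runner has
    args = [
      "eks", "get-token",
      "--cluster-name", module.eks.cluster_name,
      "--role-arn", "arn:aws:iam::${var.account_id}:role/myrole",
    ]
  }
}
```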
> yeah, so I'm guessing the Kubernetes provider isn't using the assume role from the main calling project
I don't know what you mean by this. users have to tell the providers how to authenticate - the module does not do anything in terms of providers or authentication
I figured these data sources:

```hcl
data "aws_eks_cluster" "eks" {
  name = module.eks.cluster_name

  depends_on = [
    module.eks.eks_managed_node_groups,
  ]
}

data "aws_eks_cluster_auth" "eks" {
  name = module.eks.cluster_name

  depends_on = [
    module.eks.eks_managed_node_groups,
  ]
}
```

would be run using the AWS provider config, which includes the assume-role config:

```hcl
provider "aws" {
  region = "us-east-1"

  assume_role_with_web_identity {
    role_arn                = "arn:aws:iam::${var.account_id}:role/myrole"
    session_name            = "session name"
    web_identity_token_file = "token.txt"
  }
}
```
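One way to sanity-check which principal those data sources actually run as (a hypothetical diagnostic, not part of the configuration above) is to output the caller identity resolved by the AWS provider:

```hcl
# Hypothetical diagnostic: shows which principal the AWS provider resolved,
# i.e. whether the web-identity role was actually assumed
data "aws_caller_identity" "current" {}

output "aws_provider_identity" {
  value = data.aws_caller_identity.current.arn
}
```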
This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove stale label or comment or this issue will be closed in 10 days
This issue was automatically closed because it remained stale for 10 days
Description
I have built an EKS cluster with the latest version of the module, plus a Helm chart, on the first apply (I'm using Jenkins in GKE to deploy AWS resources). It uses access entries and builds fine. If I then attempt to deploy further Helm charts, or further resources such as a Kubernetes resource, I get permission errors: it defaults to the service account of the Jenkins pod running on GKE rather than using the web identity that is in the provider config, like so:
provider "aws" { region = "us-east-1" assume_role_with_web_identity { role_arn = "arn:aws:iam::${var.account_id}:role/myrole" session_name = "session name" web_identity_token_file = "token.txt" }
I've removed the kubernetes provider as I believed this wasn't needed in v20(?). Is there a similar setup for Helm so that the helm provider isn't needed either?
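In case it helps, a rough sketch of a Helm provider configured the same exec-based way (assuming the v2-style nested `kubernetes` block syntax of the Helm provider and the cluster data sources shown elsewhere in this thread); the `--role-arn` flag is what forces the token to be generated under the assumed role:

```hcl
provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.eks.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority[0].data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      # Same idea as the kubernetes provider: assume the role before
      # generating the cluster token
      args = [
        "eks", "get-token",
        "--cluster-name", module.eks.cluster_name,
        "--role-arn", "arn:aws:iam::${var.account_id}:role/myrole",
      ]
    }
  }
}
```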
Versions
Reproduction Code [Required]
resource "kubernetes_namespace_v1" "this" { metadata { name = "argocd" } }
Steps to reproduce the behavior:
Expected behavior
A Kubernetes namespace is created.
Actual behavior
```
Error: namespaces is forbidden: User "system:serviceaccount:REDACTED" cannot create resource "namespaces" in API group "" at the cluster scope
```
Terminal Output Screenshot(s)
```
Error: namespaces is forbidden: User "system:serviceaccount:REDACTED" cannot create resource "namespaces" in API group "" at the cluster scope
```
Additional context