hashicorp / terraform-provider-helm

Terraform Helm provider
https://www.terraform.io/docs/providers/helm/

Assuming a role for helm provider doesn't work as expected #1447

Open andrey-odeeo opened 3 months ago

andrey-odeeo commented 3 months ago

I'm using a multi-account strategy in AWS and creating AWS resources with an assumed role. I would also like the helm provider to assume this role via the exec plugin, but for some reason it doesn't work.

Terraform, Provider, Kubernetes and Helm Versions

Terraform version: v1.9.3
Provider version: v2.14.0
Kubernetes version: v1.30.3

Affected Resource(s)

Terraform Configuration Files

data "aws_eks_cluster" "adserver" {
  name = "adserver"
}

provider "helm" {
  kubernetes {
    host = data.aws_eks_cluster.adserver.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.adserver.certificate_authority[0].data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command = "aws"
      args    = [
        "eks",
        "get-token",
        "--cluster-name",
        "adserver",
        "--role-arn",
        "arn:aws:iam::000000000000:role/TerraformToAdmin",
      ]
    }
  }
}

resource "helm_release" "nginx" {
  name       = "nginx"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "nginx"
}

Debug Output

Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials

Expected Behavior

Use the assumed role to authenticate to the EKS cluster.

Actual Behavior

Authentication fails; the cluster is reported as unreachable with the error above.

Important Factoids

If I take the command and run it in the same terminal where I run terraform plan, I receive a token:

aws eks get-token --cluster-name adserver --role-arn arn:aws:iam::000000000000:role/TerraformToAdmin

{
    "kind": "ExecCredential",
    "apiVersion": "client.authentication.k8s.io/v1beta1",
    "spec": {},
    "status": {
        "expirationTimestamp": "2024-07-30T18:02:27Z",
        "token": "k8s-aws-v1.xxxxxxxx"
    }
}
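A token being returned only shows that the AWS side succeeded; it doesn't prove the cluster will accept it. One way to check that directly (a sketch; EKS_ENDPOINT and ca.crt are placeholders for the cluster endpoint and CA certificate file) is:

TOKEN=$(aws eks get-token --cluster-name adserver \
  --role-arn arn:aws:iam::000000000000:role/TerraformToAdmin \
  --query 'status.token' --output text)
kubectl --server "$EKS_ENDPOINT" --certificate-authority ca.crt --token "$TOKEN" get nodes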

If I create a profile in ~/.aws/credentials and use --profile instead of --role-arn, it works. For example:

provider "helm" {
  kubernetes {
    host = data.aws_eks_cluster.adserver.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.adserver.certificate_authority[0].data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command = "aws"
      args    = [
        "eks",
        "get-token",
        "--cluster-name",
        "adserver",
        "--profile",
        "xyzprofile",
      ]
    }
  }
}
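For completeness, a profile-based setup like the one above typically points at the role from ~/.aws/config, so the AWS CLI performs the assume-role itself. A minimal sketch (profile names are placeholders):

[profile xyzprofile]
role_arn = arn:aws:iam::000000000000:role/TerraformToAdmin
source_profile = default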

I also tried passing environment variables directly using the env block inside exec; it didn't help either.
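That attempt looked roughly like the following (a sketch reconstructing it; the variable names are placeholders):

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", "adserver",
                     "--role-arn", "arn:aws:iam::000000000000:role/TerraformToAdmin"]
      # Credentials of the main account passed straight to the exec plugin
      env = {
        AWS_ACCESS_KEY_ID     = var.access_key_id
        AWS_SECRET_ACCESS_KEY = var.secret_access_key
        AWS_SESSION_TOKEN     = var.session_token
      }
    }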

sheneska commented 3 months ago

Hi @andrey-odeeo, could you please share how the AWS credentials are being supplied?

andrey-odeeo commented 3 months ago

> Hi @andrey-odeeo, could you please share how the AWS credentials are being supplied?

The credentials of the main account, from which I assume the role that helm should use, are supplied by exporting the AWS_* variables in the terminal. So basically it looks like the following:

export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxx"
export AWS_SESSION_TOKEN="xxxxx"
terraform apply

rohitelite commented 3 months ago

I am facing the same issue in all regions except us-east-1.
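If region resolution is a factor, one thing worth trying (not confirmed in this thread) is passing the region explicitly in the exec args, e.g.:

      args = [
        "eks",
        "get-token",
        "--cluster-name",
        "adserver",
        "--region",
        "eu-west-1",
        "--role-arn",
        "arn:aws:iam::000000000000:role/TerraformToAdmin",
      ]

The region value here is a placeholder.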

AvihaiSam commented 1 month ago

I had the same issue. It turned out the aws-auth ConfigMap was not updated with the role I was trying to use. Note that the aws eks get-token command will always return a token, even if the cluster doesn't exist.
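For reference, a mapRoles entry granting the assumed role access in the aws-auth ConfigMap would look roughly like this (a sketch; the username and groups depend on the access the role actually needs):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::000000000000:role/TerraformToAdmin
      username: terraform-admin
      groups:
        - system:masters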