Open andrey-odeeo opened 3 months ago
Hi @andrey-odeeo, could you please share how the AWS credentials are being supplied?
The credentials of the main account, from which I assume the role that helm should use, are supplied by exporting the AWS_* variables in the terminal. So basically it looks like the following:
export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxx"
export AWS_SESSION_TOKEN="xxxxx"
terraform apply
I am facing the same issue in all regions except us-east-1.
I had the same issue; it turned out aws-auth was not updated with the role I tried to use. Note that the aws eks get-token command will always return a token, even if the cluster doesn't exist.
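For reference, a minimal sketch of what mapping the role into aws-auth can look like, assuming the hashicorp/kubernetes provider is already configured against the cluster; the account ID, role name, and group are placeholders:

```hcl
# Sketch only: make the role passed to --role-arn known to the cluster by
# patching the aws-auth ConfigMap. Account ID, role name, and group are
# placeholders; adjust to your own mappings.
resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([
      {
        rolearn  = "arn:aws:iam::111122223333:role/terraform-deploy"
        username = "terraform-deploy"
        groups   = ["system:masters"]
      }
    ])
  }

  force = true
}
```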
I'm using a multi-account strategy in AWS and creating AWS resources with an assumed role. I would also like the helm provider to assume this role using the exec plugin, but for some reason it doesn't work.
Terraform, Provider, Kubernetes and Helm Versions
Affected Resource(s)
Terraform Configuration Files
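A minimal sketch of the kind of setup described in this report, assuming the helm provider 2.x `kubernetes` block syntax; the cluster name, account ID, and role name are placeholders:

```hcl
# Sketch only: helm provider authenticating to EKS via the exec plugin,
# with `aws eks get-token` asked to assume a role. Names and ARNs are
# placeholders.
data "aws_eks_cluster" "this" {
  name = "my-cluster"
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.this.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args = [
        "eks", "get-token",
        "--cluster-name", "my-cluster",
        "--role-arn", "arn:aws:iam::111122223333:role/terraform-deploy",
      ]
    }
  }
}
```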
Debug Output
Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials
Expected Behavior
Use assumed role and authenticate in EKS
Actual Behavior
Can't authenticate
Important Factoids
If I take the exec command and run it in the same terminal where I run terraform plan, I receive the token.
If I create a profile in ~/.aws/credentials and use --profile instead of --role-arn, it works (see the sketch below).
I also tried to pass the environment variables directly using the "env" block inside exec; it didn't help either.
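For illustration, a sketch of the working --profile variant; it drops into the `kubernetes` block of the provider sketch above, and the profile and role names are placeholders:

```hcl
# Sketch only: the same exec block, switched from --role-arn to --profile.
# Assumes a named profile along these lines (shown in ~/.aws/config syntax):
#
#   [profile terraform-deploy]
#   role_arn       = arn:aws:iam::111122223333:role/terraform-deploy
#   source_profile = default
#
exec {
  api_version = "client.authentication.k8s.io/v1beta1"
  command     = "aws"
  args        = ["eks", "get-token", "--cluster-name", "my-cluster", "--profile", "terraform-deploy"]
}
```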