ahawkins opened this issue 5 years ago
I have the same issue. In my case, the fixed version does not work, but the better fixed version does.

```
$ aws --version
aws-cli/1.16.200 Python/3.7.4 Darwin/17.7.0 botocore/1.12.190
```
@ahawkins - Thank you for your post. When I run the command `aws eks update-kubeconfig --name test`, I get this output in my `~/.kube/config` file:

```yaml
users:
- name: arn:aws:eks:us-west-2:102809180856:cluster/test
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-west-2
      - eks
      - get-token
      - --cluster-name
      - test
      command: aws
```

I am wondering how you got the previous two outputs. Could you please share the debug logs for those two outputs? You can enable debug logging by adding `--debug` to your command.
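For example (assuming the same `test` cluster as above):

```
aws eks update-kubeconfig --name test --debug
```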
@swetashre include `--profile` or try with the `AWS_DEFAULT_PROFILE` environment variable.
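For example (the profile name `dev` here is just a placeholder):

```
aws eks update-kubeconfig --name test --profile dev
# or
AWS_DEFAULT_PROFILE=dev aws eks update-kubeconfig --name test
```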
Would be great to see this fixed. I have the same use case: accessing a cluster using two different roles (dev and admin). Currently I have to update the kubeconfig manually and rename the users to avoid the second `update-kubeconfig` call overriding the user entry.
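Roughly what that manual edit ends up looking like (the renamed entries and the profile names here are placeholders, not taken from this thread); note that the contexts also have to point at the renamed users:

```yaml
contexts:
- name: test-dev
  context:
    cluster: arn:aws:eks:us-west-2:102809180856:cluster/test
    user: test-dev
- name: test-admin
  context:
    cluster: arn:aws:eks:us-west-2:102809180856:cluster/test
    user: test-admin
users:
- name: test-dev
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args: [eks, get-token, --cluster-name, test, --profile, dev]
- name: test-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args: [eks, get-token, --cluster-name, test, --profile, admin]
```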
@swetashre the `AWS_PROFILE` env is set here: https://github.com/aws/aws-cli/blob/develop/awscli/customizations/eks/update_kubeconfig.py#L320

As @ahawkins mentioned, if you invoke the `update-kubeconfig` command with a profile set, it will be written into the kubeconfig file under the `env` key.
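Concretely, the generated user entry ends up looking something like this (cluster, region, and profile values are placeholders):

```yaml
users:
- name: arn:aws:eks:us-west-2:102809180856:cluster/test
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args: [--region, us-west-2, eks, get-token, --cluster-name, test]
      env:
      - name: AWS_PROFILE
        value: dev
```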
Another problem with this is that if your local environment has `AWS_ACCESS_KEY_ID` (or similar) set, it will override the `AWS_PROFILE` environment variable. So it would be better to pass the profile using the `--profile` argument in this case as well.
`aws eks update-kubeconfig` is generating a command with the `AWS_PROFILE` environment variable. This is the incorrect variable for setting profiles via an environment variable; it should be `AWS_DEFAULT_PROFILE`. Editing `~/.kube/config` manually fixed my cluster access issue.

Here's the broken sample:

Fixed version:

Better fixed version:
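(The original snippets did not survive here. As a hypothetical sketch of the three variants described above, not the author's actual samples, and with cluster, region, and profile values as placeholders:)

```yaml
# Broken sample: profile injected via the AWS_PROFILE env var
exec:
  apiVersion: client.authentication.k8s.io/v1alpha1
  command: aws
  args: [--region, us-west-2, eks, get-token, --cluster-name, test]
  env:
  - name: AWS_PROFILE
    value: dev
---
# Fixed version: AWS_DEFAULT_PROFILE instead of AWS_PROFILE
exec:
  apiVersion: client.authentication.k8s.io/v1alpha1
  command: aws
  args: [--region, us-west-2, eks, get-token, --cluster-name, test]
  env:
  - name: AWS_DEFAULT_PROFILE
    value: dev
---
# Better fixed version: pass --profile explicitly in args, no env block needed
exec:
  apiVersion: client.authentication.k8s.io/v1alpha1
  command: aws
  args: [--region, us-west-2, eks, get-token, --cluster-name, test, --profile, dev]
```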