zedtux opened this issue 4 months ago
AWS SSO gives you temporary credentials, so you need to re-authenticate every time your session expires; that's expected. You get the same response when using kubectl, that's how it works.
You can start Lens from wherever you want, but you first need to obtain those AWS credentials through the authentication process.
Thanks for the feature request, we are working on a feature to make this smoother.
Using aws-sso-util and a default profile set up like:

```ini
[default]
region = us-east-1
sso_start_url = https://our-login.awsapps.com/start
sso_region = us-east-1
sso_account_name = Pulse Platform
sso_account_id = 11111111111
sso_role_name = DevOps-Admin
```

has worked fine for our engineers for a long time with Lens, accessing hundreds of clusters across many accounts.
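For anyone new to this setup, a minimal sketch of the first-run flow with an SSO profile like the one above (the profile name `default` and the cluster/region values are just examples taken from this thread, not mandated names):

```shell
# Hypothetical first-run flow; adjust profile, region, and cluster name.
connect_cluster() {
  # Opens the browser-based SSO flow and caches short-lived credentials.
  aws sso login --profile default

  # Adds an exec-based entry for the cluster to ~/.kube/config,
  # which Lens then picks up automatically.
  aws eks update-kubeconfig --profile default \
    --region eu-central-1 --name eks-stage-01
}
```

Once the kubeconfig entry exists, Lens only needs the cached SSO session to be valid; when it expires, re-running `aws sso login` is enough.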
As a temporary solution, you can use this workaround in your kubeconfig:

```yaml
- name: arn:aws:eks:eu-central-1:**************:cluster/eks-stage-01
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
        - -c
        - "aws --profile $AWS_PROFILE eks list-clusters > /dev/null 2>&1 || aws --profile $AWS_PROFILE sso login > /dev/null 2>&1; aws --profile $AWS_PROFILE --region eu-central-1 eks get-token --cluster-name eks-stage-01 --output json"
      command: sh
      env:
        - name: AWS_PROFILE
          value: sso
```
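For readability, the exec one-liner in that workaround can be unrolled into a small function. This is just a sketch of the same logic; the profile name `sso`, the region, and the cluster name are the example values from this thread:

```shell
# Sketch of the kubeconfig exec command above, unrolled. Adjust
# PROFILE/REGION/CLUSTER for your own setup.
eks_token() {
  PROFILE="${AWS_PROFILE:-sso}"
  REGION="eu-central-1"
  CLUSTER="eks-stage-01"

  # Any cheap authenticated call fails once the SSO session expires;
  # use that to decide whether a fresh "aws sso login" is needed.
  aws --profile "$PROFILE" eks list-clusters >/dev/null 2>&1 \
    || aws --profile "$PROFILE" sso login >/dev/null 2>&1

  # Print the ExecCredential JSON that kubectl and Lens consume on stdout.
  aws --profile "$PROFILE" --region "$REGION" eks get-token \
    --cluster-name "$CLUSTER" --output json
}
```

The key detail is that only the `eks get-token` JSON reaches stdout; everything else is silenced, because kubectl parses stdout as an ExecCredential object.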
**Describe the bug**
As shown in issue #5605, Lens can access an EKS cluster only when it is started from the terminal, which is not ideal.

**To Reproduce**
Steps to reproduce the behavior:

**Expected behavior**
Lens should work fine whether it is started from the terminal or in another way.

**Environment (please complete the following information):**
Kubeconfig: See issue #5605