vipan06 opened 7 months ago
By the looks of it you don't have the token injected by the webhook. The amazon-eks-pod-identity-webhook adds the token at /var/run/secrets/eks.amazonaws.com/serviceaccount/token. Please check that the env vars AWS_WEB_IDENTITY_TOKEN_FILE and AWS_ROLE_ARN are populated. If you don't see these env vars, the webhook is not doing what it is supposed to do, and the next place to look is the Service Account itself: make sure it has the right annotation.
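A minimal way to verify both points (a sketch; the pod, namespace, and service account names here are placeholders for your own):

```sh
# Check whether the webhook injected the token env vars into the pod
kubectl exec my-pod -n my-namespace -- env | grep -E 'AWS_WEB_IDENTITY_TOKEN_FILE|AWS_ROLE_ARN'

# Inspect the service account annotations; the webhook keys off
# eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<role-name>
kubectl describe sa my-service-account -n my-namespace
```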
Hi, I am having the same issue on v1.3.0-eksbuild.1 and Kubernetes 1.30. These are the env variables that the webhook injects into the pod:

AWS_CONTAINER_CREDENTIALS_FULL_URI=http://169.254.170.23/v1/credentials
AWS_DEFAULT_REGION=eu-central-1
AWS_REGION=eu-central-1
AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE=/var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token
AWS_STS_REGIONAL_ENDPOINTS=regional
AWS CLI version inside the pod: aws-cli/2.17.35 Python/3.11.9 Linux/6.1.102-108.177.amzn2023.x86_64 docker/x86_64.amzn.2
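Given those variables are present, one direct test (a sketch, assuming curl exists in the container image) is to call the agent's credentials endpoint the same way the SDK would, with the contents of the token file in the Authorization header:

```sh
# A JSON credential document back from the agent means the
# agent -> EKS Auth path works; an error narrows it to the agent or network
curl -sS \
  -H "Authorization: $(cat "$AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE")" \
  "$AWS_CONTAINER_CREDENTIALS_FULL_URI"
```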
Same setup as vipan, just trying to get a pod to list S3 buckets within an account. If I remove the pod identity association and instead append the permissions to the node's role, I can successfully list all S3 buckets, so this is definitely an Amazon EKS Pod Identity issue.
Additionally, here are the error logs from the agent:

{"client-addr":"10.19.0.11:43368","cluster-name":"eks-ilz","level":"info","msg":"Calling EKS Auth to fetch credentials","time":"2024-08-22T06:11:47Z"}
{"client-addr":"10.19.0.11:43368","cluster-name":"eks-ilz","level":"error","msg":"Error fetching credentials: error getting credentials to cache: unable to fetch credentials from EKS Auth: operation error EKS Auth: AssumeRoleForPodIdentity, request canceled, context canceled","operation":"AssumeRoleForPodIdentity","service":"EKS Auth","time":"2024-08-22T06:11:49Z"}
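"request canceled, context canceled" usually means the agent's call to EKS Auth timed out rather than being rejected. A quick reachability probe from a node (a sketch; I'm assuming the regional EKS Auth endpoint format eks-auth.&lt;region&gt;.api.aws, so adjust for your region):

```sh
# Expect an HTTP status code back within 5 seconds; a timeout points at
# routing/proxy/VPC-endpoint problems between the node and EKS Auth
curl -sS -m 5 -o /dev/null -w '%{http_code}\n' https://eks-auth.eu-central-1.api.aws
```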
AWS Support just fixed this issue for me, in case you have a similar setup. One thing I did not mention in this thread is that our EKS cluster is only exposed to a corporate network and has no direct access to the internet. To get around this, our IT team deployed a proxy inside the corporate network which we have to go through every time we want to make requests to the internet. We set our nodes to use this proxy via the AWS launch template user data, where we append it to /etc/environment. However, the EKS Pod Identity Agent DaemonSet was not aware of the proxy.
So, in order to solve the issue, set the proxy environment variable on the DaemonSet:
kubectl set env ds/eks-pod-identity-agent https_proxy="your://proxy.url" -n kube-system
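If your nodes also carry a no_proxy list, it may be worth mirroring it on the DaemonSet so the agent's link-local traffic stays off the proxy (a sketch; the exact exclusion list is an assumption and depends on your environment):

```sh
# Route EKS Auth calls through the proxy, but keep the agent's own
# link-local credentials endpoint and IMDS direct
kubectl set env ds/eks-pod-identity-agent -n kube-system \
  https_proxy="your://proxy.url" \
  no_proxy="169.254.170.23,169.254.169.254"
```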
What happened: We are using the EKS Pod Identity Agent to grant RDS access for a pod. For testing purposes, we attached S3 full access to the IAM role, and when we run the aws s3 ls command from the pod it says:
On further investigation, we found the below error:
The pod is deployed in the namespace vipan and the service account is pod-identity. When we describe the SA, it is empty.
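For reference, this is how to inspect it (a minimal check; note that unlike IRSA, EKS Pod Identity stores the association in the EKS control plane rather than as an annotation on the service account, so an empty describe output is not by itself an error):

```sh
# No eks.amazonaws.com/role-arn annotation is expected here for
# Pod Identity; associations are looked up server-side by EKS
kubectl describe sa pod-identity -n vipan
```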
We followed this doc https://docs.aws.amazon.com/eks/latest/userguide/pod-id-agent-setup.html to set up EKS Pod Identity, and ideally AWS or EKS should add the details to the Service Account.

What you expected to happen: The pod should be able to access S3.

How to reproduce it (as minimally and precisely as possible):
- Set up the EKS Pod Identity Agent following the doc https://docs.aws.amazon.com/eks/latest/userguide/pod-id-agent-setup.html
- Installed the add-on
- Created a VPC endpoint for eks-auth
- Created a role, namespace, and service account
- Attached that SA to a pod (a sanity check for the association is sketched below, after the environment details)

Anything else we need to know?: NA

Environment:
- Platform version (aws eks describe-cluster --name <name> --query cluster.platformVersion): eks.5
- Kubernetes version (aws eks describe-cluster --name <name> --query cluster.version): 1.29
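As referenced above, a quick sanity check that the association was actually created for this namespace and service account (a sketch; the cluster name is a placeholder):

```sh
# An empty association list means the pod will fall back to the
# node instance role instead of the intended Pod Identity role
aws eks list-pod-identity-associations \
  --cluster-name <name> \
  --namespace vipan \
  --service-account pod-identity
```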