Closed — czhczh0123 closed this issue 1 year ago
I'm seeing the same thing. Everything seems to be pulling fine despite the errors.
The kubelet had an in-tree ECR credential provider that was removed in 1.27. On previous Kubernetes versions, issues with the external credential provider do not result in failures, because the in-tree provider is used as a fallback (but you'll see these warnings in the logs).
We've fixed a configuration issue with the external credential provider; the fix will be included in the next AMI release (#1269). This will make the warnings go away.
However, before 1.27 launches, we do need to ensure that the external credential provider properly matches ECR patterns for China regions. Thanks for pointing out that issue, @czhczh0123.
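For reference, a kubelet `CredentialProviderConfig` that matches both standard-partition and China-partition ECR registries could look like the sketch below. This is an illustration only — the `*.amazonaws.com.cn` pattern and the cache duration are assumptions here, and the exact patterns shipped in the AMI may differ:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: CredentialProviderConfig
providers:
  - name: ecr-credential-provider
    matchImages:
      # Standard AWS partition registries
      - "*.dkr.ecr.*.amazonaws.com"
      # China partition registries (e.g. cn-northwest-1) -- assumed
      # pattern for illustration; verify against your AMI's config
      - "*.dkr.ecr.*.amazonaws.com.cn"
    defaultCacheDuration: "12h"
    apiVersion: credentialprovider.kubelet.k8s.io/v1beta1
```

The `matchImages` globs decide which image pulls are routed to the external provider; if an image reference doesn't match any pattern, the provider is never invoked for it.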
Hi, @cartermckinnon ,
Thank you for your response. Sorry I didn't reply in time.
But I am still confused by this message:
`Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider`
With the in-tree ECR credential provider, kubelet will not auto-refresh the ECR token, right? In CloudTrail I can only see one GetAuthorizationToken API call, from when I launched a pod with my ECR image. Even after 12 hours, I did not see another GetAuthorizationToken call unless I pulled another image from my repo.
Besides, I would like to ask whether kubelet will refresh the token for us when the external credential provider is enabled in 1.27.
Thanks a lot.
Kubelet will only refresh credentials if it needs to -- i.e. you actually pull an image that matches the credential provider's patterns. If you're not pulling images, it's not going to refresh the tokens.
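The lazy-refresh behavior described above can be sketched as a simple TTL cache: a token is fetched only when a pull actually needs credentials and the cached entry has expired. This is an illustrative model only, not kubelet's real implementation; the 12-hour TTL matches the ECR token validity mentioned in this thread.

```python
import time

class CredentialCache:
    """Sketch of kubelet-style lazy credential caching: tokens are
    refreshed only when a pull needs them and the cached entry has
    expired. Illustrative only -- not kubelet's actual code."""

    def __init__(self, ttl_seconds, fetch_token):
        self.ttl = ttl_seconds
        self.fetch_token = fetch_token  # stands in for ECR GetAuthorizationToken
        self.token = None
        self.expires_at = 0.0

    def get(self, now=None):
        """Return a valid token, fetching a new one only on demand."""
        now = time.time() if now is None else now
        if self.token is None or now >= self.expires_at:
            self.token = self.fetch_token()  # refresh happens here, lazily
            self.expires_at = now + self.ttl
        return self.token

# Simulate pulls at different times; count backing API calls.
calls = []
cache = CredentialCache(
    ttl_seconds=12 * 3600,
    fetch_token=lambda: calls.append(1) or f"token-{len(calls)}",
)

t0 = 0.0
cache.get(now=t0)               # first pull: token fetched
cache.get(now=t0 + 3600)        # pull within 12h: cached token reused
cache.get(now=t0 + 13 * 3600)   # pull after expiry: token refreshed
print(len(calls))  # → 2
```

With no pulls after the first fetch, the count would stay at 1 indefinitely — which matches the CloudTrail observation above: no new GetAuthorizationToken calls appear unless an image pull actually happens after the token expires.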
Environment:
- Platform version (`aws eks describe-cluster --name <name> --query cluster.platformVersion`): eks.6
- Cluster version (`aws eks describe-cluster --name <name> --query cluster.version`): 1.23
- OS (`uname -a`): Linux ip-10-0-0-166.cn-northwest-1.compute.internal 5.4.231-137.341.amzn2.x86_64 #1 SMP Tue Feb 14 21:50:55 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
- Release information (`cat /etc/eks/release` on a node):

What happened: I tried to use a CredentialProviderConfig in an EKS 1.23 cluster in a CN region. I can pull images from my ECR repo, but I found the following message in the kubelet logs.
What you expected to happen: We expected kubelet to run with the `ecr-credential-provider` credential provider.
Environment: EKS cluster version 1.23; node image: ami-030d7615436dd9131
This is the default configuration file. Because my EKS cluster was launched in a China region, I added the following provider.
When I ran a new pod with a container image from my ECR repo, the pod launched without any issue. But checking the kubelet log, I found the messages above.
Actually, if I do not change the default config file, I do not see the second message related to my ECR repo credentials.
Anything else we need to know?: I thought it might be related to the API version, so I switched to an EKS 1.26 cluster. The `apiVersion` fields in the default config file are kubelet.config.k8s.io/v1beta1 and credentialprovider.kubelet.k8s.io/v1beta1, and when I tested with a pod using an ECR image, the result was identical. Node image: ami-028547407941241ec