Zhenye-Na opened this issue 1 year ago
Trying to bump this issue again, since there have been no replies for a month.
Also seeing this. We see a window where a generated token sometimes gets Unauthorized for 3-5 minutes before it expires. The structure of the Go client auth plugins keeps us from detecting the problem and regenerating the token, so we have short-lived outages. It seems to happen after about an hour.
Our client is running in an EKS cluster using IRSA and communicating with a different EKS cluster.
I think I understand what is going on in our case. We are using the token.Generator without passing in a Session, in a k8s Pod using IRSA. The IAM Role's MaxSessionDuration is 1 hour. So what happens is:

1. The pod's IRSA credentials are valid for at most 1 hour.
2. Tokens are presigned with those credentials and advertised as valid for the full 15 minutes.
3. Near the end of that hour, a token is presigned with credentials that have less than 15 minutes left; once the credentials expire, the presigned URL is rejected and the API server returns Unauthorized, even though the token's advertised expiry has not passed.

So the problem is that if you use a session that is about to expire to presign, you get less than the 15 minutes assumed in the code. The correct expiration would be `min(session.Expiry, time.Now() + 15m)`.
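A minimal sketch of that clamping (my own hypothetical helper, not something the library does today), assuming aws-sdk-go v1, where `Credentials.ExpiresAt()` exposes the underlying provider's expiry:

```go
import (
	"time"

	"github.com/aws/aws-sdk-go/aws/session"
)

// clampTokenExpiry is a hypothetical helper: it returns the earlier of the
// usual 15-minute presign window and the session credentials' own expiry,
// so the advertised token lifetime never outlives the credentials that
// signed the request.
func clampTokenExpiry(sess *session.Session) time.Time {
	expiry := time.Now().Add(15 * time.Minute)
	credExpiry, err := sess.Config.Credentials.ExpiresAt()
	if err != nil {
		// Providers without an expiry (e.g. static keys) return an error;
		// keep the full 15-minute window in that case.
		return expiry
	}
	if credExpiry.Before(expiry) {
		expiry = credExpiry
	}
	return expiry
}
```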
One workaround is to expire the session early:
```go
import (
	"time"

	"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
	"github.com/aws/aws-sdk-go/aws/session"
	"sigs.k8s.io/aws-iam-authenticator/pkg/token"
)

s, err := session.NewSessionWithOptions(session.Options{
	SharedConfigState: session.SharedConfigEnable,
	CredentialsProviderOptions: &session.CredentialsProviderOptions{
		WebIdentityRoleProviderOptions: func(provider *stscreds.WebIdentityRoleProvider) {
			// When the session expires, pre-signed tokens seem to become invalid within
			// 3 minutes, even if they were created <15 minutes ago. Expiring the session
			// 12.5 minutes early should keep the token from falling into this window.
			provider.ExpiryWindow = 12*time.Minute + 30*time.Second
		},
	},
})
if err != nil {
	// handle error
}

// gen is a token.Generator (from token.NewGenerator).
tok, err := gen.GetWithOptions(&token.GetTokenOptions{
	Session: s,
	// set ClusterID, etc.
})
if err != nil {
	// handle error
}
```
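For anyone wiring this up end to end, a rough sketch of handing the generated token to client-go (the `clusterEndpoint` and `clusterCA` values are placeholders you would get from EKS DescribeCluster; `tok` is the token generated above):

```go
import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Sketch only: clusterEndpoint and clusterCA are assumed to come from
// eks.DescribeCluster; tok is the token.Token generated above.
cfg := &rest.Config{
	Host:        clusterEndpoint,
	BearerToken: tok.Token,
	TLSClientConfig: rest.TLSClientConfig{
		CAData: clusterCA, // decoded cluster CA bundle
	},
}
clientset, err := kubernetes.NewForConfig(cfg)
if err != nil {
	// handle error
}
_ = clientset // e.g. clientset.CoreV1().Nodes().List(...)
```

Note that a static BearerToken in rest.Config will itself go stale once the token expires, which is exactly the regeneration problem described above.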
(last comment, sorry for hijacking this ticket)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
We're also experiencing this issue using `kubectl --kubeconfig` with these settings:
```json
{
  "name": "example-user",
  "user": {
    "exec": {
      "command": "aws",
      "args": [
        "--profile",
        "my-profile",
        "--region",
        "us-west-2",
        "eks",
        "get-token",
        "--cluster-name",
        "my-cluster"
      ],
      "env": [],
      "apiVersion": "client.authentication.k8s.io/v1beta1",
      "provideClusterInfo": false
    }
  }
}
```
Should we enable any additional config options?
Hey folks, I'm running into this issue as well. Wondering if there's an update?
@iamnoah I also tried your patch, but I'm still seeing Unauthorized a few minutes after the pod starts.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
We have tried to implement methods similar to those described in https://github.com/kubernetes-sigs/aws-iam-authenticator#api-authorization-from-outside-a-cluster in Golang, and it works fine.

However, we are hitting an intermittent issue: the Kubernetes client created with this token throws an `Unauthorized` error when performing Kubernetes operations, for example the equivalent of `kubectl get nodes`.

We are using https://kubernetes.io/docs/reference/config-api/client-authentication.v1beta1/#client-authentication-k8s-io-v1beta1-ExecCredential instead of setting headers like `headers = {'Authorization': 'Bearer ' + get_bearer_token('my_cluster', 'us-east-1')}` as in the tutorial.

I am wondering what the potential reason for this `Unauthorized` error could be. The API ran successfully for almost 20 minutes, and then this error was suddenly thrown. I am thinking: does the `bearerToken` expire exactly at the timestamp defined in `ExpirationTimestamp`, or after some magical time delta? We currently set the `ExpirationTimestamp` to 1 hour after the token is generated. Does this conflict with the 60-second STS presign expiry?
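For reference, the ExecCredential we return looks roughly like this (values illustrative; field names from the client.authentication.k8s.io/v1beta1 API):

```json
{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": {
    "expirationTimestamp": "2022-01-01T01:00:00Z",
    "token": "k8s-aws-v1.aHR0cHM6Ly9zdHMuYW1hem9uYXdzLmNvbS8..."
  }
}
```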
Something to note, though: the README mentions that the IAM Authenticator "explicitly omits base64 padding to avoid any `=` characters, thus guaranteeing a string safe to use in URLs", and the Python code example explicitly replaces `=` with an empty string. That step is absent from our Golang methods, yet so far everything related to Kubernetes operations is working fine.

Also trying to get some feedback on whether there is anything else that I am missing.
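In Go, the padding difference is just a choice of encoder; a minimal illustration (the presigned URL here is a made-up placeholder):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Placeholder presigned STS URL; a real one comes from the presign step.
	presignedURL := "https://sts.us-east-1.amazonaws.com/?Action=GetCallerIdentity&X-Amz-Expires=60"

	// URLEncoding keeps '=' padding; RawURLEncoding omits it, matching the
	// README's note about producing a URL-safe token string.
	padded := base64.URLEncoding.EncodeToString([]byte(presignedURL))
	unpadded := base64.RawURLEncoding.EncodeToString([]byte(presignedURL))

	fmt.Println("k8s-aws-v1." + padded)   // may end in '=' characters
	fmt.Println("k8s-aws-v1." + unpadded) // padding-free
}
```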