Open nixon89 opened 2 years ago
Hello, Nikolay!
Some thoughts:
This is a problem on the side of oidc.example.com. The provided response {"error":"invalid_grant"} is what oidc.example.com returned to Dex. Could you check its logs to find the actual reason for the problem? The invalid_grant error can mean many different things in the OIDC world.
Dex rotates the refresh token on every refresh request for security reasons, so that no one can steal a refresh token and use it forever. It seems that kubectl sometimes has problems persisting the new credentials back to the kubeconfig file.
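Rotation makes every refresh token single-use: a successful refresh invalidates the old token and returns a new one, so a client that fails to persist the new token replays the stale one on the next refresh and gets invalid_grant. A toy sketch of that behavior (hypothetical names, not Dex's actual implementation):

```python
import secrets

class RotatingTokenStore:
    """Toy refresh-token store that rotates the token on every use."""
    def __init__(self):
        self._valid = set()

    def issue(self):
        token = secrets.token_hex(8)
        self._valid.add(token)
        return token

    def refresh(self, token):
        # A rotated (already-used) token is no longer in the store.
        if token not in self._valid:
            raise ValueError("invalid_grant")
        self._valid.remove(token)  # the old token is spent
        return self.issue()        # the client must persist this new token

store = RotatingTokenStore()
old = store.issue()
new = store.refresh(old)  # works, and rotates the token
# If the client failed to save `new`, its next refresh replays `old`:
try:
    store.refresh(old)
except ValueError as e:
    print(e)  # invalid_grant
```

This is exactly why a kubeconfig that was not updated after a refresh keeps producing invalid_grant until the user re-authenticates.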
You can try to disable refresh token rotation and see if it helps:

```yaml
expiry:
  idTokens: 1440m
  refreshTokens:
    disableRotation: true
    absoluteLifetime: 876000h
    validIfNotUsedFor: 4392h
```
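With rotation disabled, the token's validity is instead bounded by the two lifetime values above: it must have been used within validIfNotUsedFor and must be younger than absoluteLifetime. A rough sketch of that check (hypothetical helper, not Dex's code):

```python
from datetime import datetime, timedelta

# Values from the config above.
ABSOLUTE_LIFETIME = timedelta(hours=876000)      # ~100 years
VALID_IF_NOT_USED_FOR = timedelta(hours=4392)    # ~183 days

def refresh_token_valid(created_at, last_used_at, now):
    """True if the token is within both lifetime bounds."""
    return (now - created_at <= ABSOLUTE_LIFETIME
            and now - last_used_at <= VALID_IF_NOT_USED_FOR)

now = datetime(2024, 1, 1)
print(refresh_token_valid(now - timedelta(days=30), now - timedelta(days=10), now))   # True
print(refresh_token_valid(now - timedelta(days=30), now - timedelta(days=200), now))  # False: unused too long
```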
Hello, @nabokihms
We have been running for 2 months with the disableRotation: true setting enabled in Dex.
Subjectively, there are fewer problems with non-working kubeconfigs, but they have not completely disappeared.
I did the following experiment:

1. I checked the refreshtokens.dex.coreos.com object for a test user in the cluster. The connectorData field was filled in correctly: it contained the test user's refresh token from the OIDC provider, base64-encoded.
2. Then kubectl started failing with Internal Server Error Response: {"error":"invalid_request"}, and Dex logged:
level=error msg="failed to refresh identity: oidc: failed to get refresh token: oauth2: cannot fetch token: 400 Bad Request\nResponse: {\"error\":\"invalid_grant\"}"
3. I checked refreshtokens.dex.coreos.com again, and for some reason the connectorData field was missing entirely for the test user.

Is this correct behavior for Dex? I mean, even with disableRotation: true in Dex, the refresh token from the OIDC provider was removed from the connectorData field.
I expected that Dex itself would obtain a new refresh token from the external OIDC provider and replace the old token with the new one in the connectorData field of the refreshtokens.dex.coreos.com CRD for the test user.
It seems that there is a problem with concurrent calls to the connector's Refresh method. I will provide a detailed description of the problem and possible solutions.
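The race can be reproduced in miniature: with single-use tokens, two concurrent refresh calls race on the same token, the first one wins and rotates it, and the second sees a token that no longer exists and fails with invalid_grant, matching the "already been claimed by another client" symptom. A toy illustration (not Dex's code):

```python
import threading

class Store:
    """Single-use token store; refreshes are serialized by a lock."""
    def __init__(self, token):
        self._valid = {token}
        self._lock = threading.Lock()

    def refresh(self, token, new_token):
        with self._lock:
            if token not in self._valid:
                return "invalid_grant"  # token already claimed by the other caller
            self._valid.remove(token)
            self._valid.add(new_token)
            return new_token

store = Store("tok-1")
results = []
threads = [
    threading.Thread(target=lambda i=i: results.append(store.refresh("tok-1", f"tok-2-{i}")))
    for i in range(2)
]
for t in threads: t.start()
for t in threads: t.join()
# Exactly one caller gets a new token; the other gets invalid_grant.
print(results.count("invalid_grant"))  # 1
```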
Hello everyone, I am also facing a similar kind of issue, but with a different error. I am generating an id-token and refresh-token using Dex for kube-oidc-proxy in front of a Kubernetes GKE cluster. With freshly generated tokens everything works fine: kubectl commands are first authenticated at the kube-oidc-proxy server and then forwarded to the Kubernetes API server on GKE.
But once the id-token expires, the refresh-token should be used to generate a new id-token. Instead, running kubectl commands with the expired token gives:
Unable to connect to the server: Get "https://dex.xyz:32000/.well-known/openid-configuration": x509: certificate signed by unknown authority
This is strange to me, because new tokens always work without any error about an unknown authority, while expired tokens hit this error exactly where the refresh-token should have worked. I have already added the root CA to the kube-oidc-proxy configuration and to the dex-authenticator (in place of gangway) configuration.
I am using google connector and kubernetes storage.
This is my configuration file:

```yaml
issuer: https://dex.xyz:32000
storage:
  type: kubernetes
  config:
    inCluster: true
web:
  https: 0.0.0.0:5556
  tlsCert: /etc/dex/tls.crt
  tlsKey: /etc/dex/tls.key
connectors:
staticClients:
```
> It seems that there is a problem with the concurrent call of the Refresh method of the connector. I will provide a detailed description of the problem and possible solutions.
@nabokihms can you share your solution please?
Hello, @ansh-lehri. I think your problem differs from the one @nixon89 has. Kubectl fails to request Dex because of the TLS certificate.
The first step is to check the auth flags in your kubeconfig:

```yaml
users:
- name: username
  user:
    auth-provider:
      config:
        ...
        idp-issuer-url: https://dex.example.com/
        idp-certificate-authority-data: ...
```

Which certificate do you get if you try to access the Dex URL? Does it return a valid TLS certificate? Is idp-certificate-authority / idp-certificate-authority-data provided?
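One quick sanity check is whether idp-certificate-authority-data actually decodes to a PEM certificate: a missing or corrupted value produces exactly the "certificate signed by unknown authority" error. A small hypothetical helper:

```python
import base64

def check_ca_data(b64_value):
    """Decode kubeconfig's idp-certificate-authority-data and verify it looks like PEM."""
    pem = base64.b64decode(b64_value).decode("ascii")
    if "-----BEGIN CERTIFICATE-----" not in pem:
        raise ValueError("decoded value is not a PEM certificate")
    return pem

# Example with a dummy (truncated) PEM body:
dummy_pem = "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
encoded = base64.b64encode(dummy_pem.encode()).decode()
print(check_ca_data(encoded).startswith("-----BEGIN CERTIFICATE-----"))  # True
```

The decoded bundle should contain the CA that signed the certificate Dex actually serves on its HTTPS port.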
@nabokihms it looks like we are facing the same problem as described by @nixon89, it is reproduced from time to time. Do you have any news about possible solutions?
@sgremyachikh I tried to describe the problem in detail in the linked issue #2547. I hope when we come up with a solution for calling refresh multiple times, your situation will also be fixed.
Preflight Checklist
Version
2.30.2
Storage Type
Kubernetes
Installation Type
Official Helm chart
Expected Behavior
- When the id-token expires: automatically refresh via kubectl
- When the refresh-token expires: automatically refresh via kubectl
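For context, when the id-token expires, kubectl's OIDC auth provider performs a standard OAuth2 refresh-token grant against Dex's token endpoint; if the refresh-token itself has expired, there is nothing left to refresh with and the user must re-authenticate. The request body of that grant looks roughly like this (standard OAuth2 parameters per RFC 6749; the client values are placeholders):

```python
from urllib.parse import urlencode

def refresh_request_body(client_id, client_secret, refresh_token):
    """Form-encoded body of an OAuth2 refresh_token grant (RFC 6749, section 6)."""
    return urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    })

body = refresh_request_body("kubernetes", "client-secret", "refresh-token-value")
print("grant_type=refresh_token" in body)  # True
```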
Actual Behavior
Case 1:
When some users run:
kubectl get pods
kubectl returns an error (500 Internal Server Error), and the Dex logs show a 400 Bad Request error:
level=error msg="failed to refresh identity: oidc: failed to get refresh token: oauth2: cannot fetch token: 400 Bad Request\nResponse: {\"error\":\"invalid_grant\"}"
Case 2:
When some users run:
kubectl get pods -v 7
kubectl returns an error (400), and the Dex logs show:
Response: {"error":"invalid_request","error_description":"Refresh token is invalid or has already been claimed by another client."}
Steps To Reproduce
I don't know how to reproduce these cases. Users only use kubectl/k9s. kubectl/k9s via OIDC may break after 1, 2, or 4 days, but may also break only after 45 days.
k8s APIServer options:
Our setup: k8s-apiserver + Dex (in OIDC mode) + oidc.example.com (our custom OIDC server) + gangway (to present kubeconfigs to users) + oauth2-proxy/k8s-dashboard. We have no problems with the dex + oauth2-proxy/k8s-dashboard tandem; problems occur only with the personal kubeconfigs used with kubectl/k9s.
Additional Information
Kubernetes v1.19.7, Dex 2.30.2, heptio/gangway 3.3.0, kubectl 1.19 through 1.22
Configuration
Logs
No response