kubernetes / dashboard

General-purpose web UI for Kubernetes clusters
Apache License 2.0

Bearer Token Authentication not responding #8794

Open KevinDW-Fluxys opened 6 months ago

KevinDW-Fluxys commented 6 months ago

What happened?

When trying to log in using a bearer token, the page does not respond. We can find this in the logs of the auth pod:

[GIN] 2024/03/14 - 08:58:40 | 200 |       39.46µs |     172.18.1.25 | GET      "/api/v1/csrftoken/login"
[GIN] 2024/03/14 - 08:58:40 | 200 |    1.978088ms |     172.18.1.25 | POST     "/api/v1/login" 
E0314 08:58:40.077452       1 handler.go:33] "Could not get user" err="MSG_LOGIN_UNAUTHORIZED_ERROR" 
[GIN] 2024/03/14 - 08:58:40 | 500 |      94.718µs |     172.18.1.25 | GET      "/api/v1/me"

In the kong-proxy we find this:

172.18.2.5 - - [14/Mar/2024:08:58:40 +0000] "GET /api/v1/csrftoken/login HTTP/1.1" 200 53 "https://kubernetes.qua.***.***.net/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 Edg/122.0.0.0" kong_request_id: "6360637dbab53d54d98c240fe426f163"
172.18.2.5 - - [14/Mar/2024:08:58:40 +0000] "POST /api/v1/login HTTP/1.1" 200 4247 "https://kubernetes.qua.***.***.net/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 Edg/122.0.0.0" kong_request_id: "d82fd54bde203131d1bbe31660b8c454"
172.18.2.5 - - [14/Mar/2024:08:58:40 +0000] "GET /api/v1/me HTTP/1.1" 500 124 "https://kubernetes.qua.***.***.net/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 Edg/122.0.0.0" kong_request_id: "e5431028bc7bf35ccc2573031b444e01"

In the devtools I can see that the 500 response from /api/v1/me is:

{
    "ErrStatus": {
        "metadata": {},
        "status": "Failure",
        "message": "MSG_LOGIN_UNAUTHORIZED_ERROR",
        "reason": "Unauthorized",
        "code": 401
    }
}

The token itself is correct, because it works when authenticating directly. Also, when I type some random characters instead, the UI returns a clear error, and in devtools I can see that it is returned from /api/v1/login instead.

What did you expect to happen?

The page responds and you are logged in (or you get a clear error message about invalid credentials).

How can we reproduce it (as minimally and precisely as possible)?

It is unclear: we have two environments where it works and two where it doesn't. The environments are programmatically deployed, and we can see no difference in configuration between the clusters. The only difference we can find is that the bearer token is much longer on the environments where it doesn't work, so our best guess is that the token length is the cause.

Anything else we need to know?

We are now running behind an Istio VirtualService that redirects to the Kong proxy, but that should not be related, as we also tried running Istio directly without Kong. We get the same result when using a port-forward. (Port-forwarding to the Kong proxy does not seem to work since the pods have been split up.)

What browsers are you seeing the problem on?

Chrome, Microsoft Edge, Firefox

Kubernetes Dashboard version

7.1.1 (Helm)

Kubernetes version

1.28.3

Dev environment

No response

floreks commented 6 months ago

Unfortunately, that sounds like an issue with token size. Most web servers support total request header sizes of only 4-8 kB. We do not have any logic to detect token length; we could add some, but it would still not solve your issue.

Does kubectl --token ... work with such a big token?

ToonTijtgat2 commented 6 months ago

Dear @floreks, I'm a colleague of Kevin.

I'm able to use the token with kubectl --token, so that does not seem to be the problem. If I check the token of the "not working" environment in https://www.javainuse.com/bytesize, it says it is 4.1 kB; the token of the "working" environment is 2.08 kB.

Could 4 kB maybe be the limit or something?

Thanks for checking

Are there test commands we can run in the pod to see if the header is added correctly in the response? Can we enable extra logging or something?

Thanks Toon Tijtgat

floreks commented 6 months ago

I think that Kong by default supports total header sizes up to 8 kB; it uses nginx underneath. Our UI -> API path most probably has a 4 kB limit currently. I'd have to debug it on our side to be sure where it gets terminated. If you can configure the token content and get rid of unused information, that should make it work for now. I know that some providers include lots of unnecessary information that is not required by the Kubernetes API server.

KevinDW-Fluxys commented 6 months ago

Hi @floreks

We are using Azure kubelogin, which does not allow configuring the token content as far as I know.

I have taken a quick glance at the code with my limited Go knowledge. If it is indeed the UI -> API path, could it be that we need to specify MaxHeaderBytes in this function? https://github.com/kubernetes/dashboard/blob/1d4897cd8d1c4af8747906c87f11acbb598814b9/modules/api/main.go#L99

floreks commented 6 months ago

AFAIR Azure allows configuring the JWT token content: groups, audience, etc. With Azure the issue is usually that too many groups are configured and all of them are embedded into the token, not only the ones actually used.
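For reference, and assuming you control the server-side app registration (which may not be the case for AKS-managed setups), the Azure AD app registration manifest has a groupMembershipClaims field that controls whether group claims are emitted at all:

```json
{
  "groupMembershipClaims": "None"
}
```

Setting it to "None", or narrowing it from "All" to "SecurityGroup", can shrink tokens considerably.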

floreks commented 6 months ago

Regarding code changes, the max header size would need to be checked and increased for both the API and Auth modules, if that's the only issue.

KevinDW-Fluxys commented 6 months ago

> AFAIR Azure allows configuring the JWT token content: groups, audience, etc. With Azure the issue is usually that too many groups are configured and all of them are embedded into the token, not only the ones actually used.

I can indeed see that there are many groups included in the token, but unfortunately I can't find a way to configure the response. We are using kubelogin, which does not have an option to do so, but if you know of another way that leverages Azure authentication to generate the token, it might help us (temporarily) overcome this issue.

> Regarding code changes, the max header size would need to be checked and increased for both the API and Auth modules, if that's the only issue.

Given the behavior, it does look like that would be the issue, but the only way to be sure is to test it, of course. What would be the best course of action to get this tested?

ToonTijtgat2 commented 6 months ago

@floreks Thanks for finding the potential issue. Would it be possible to fix it with a patch?

Thanks for checking

KevinDW-Fluxys commented 5 months ago

@floreks Did you get a chance to look at this? What can we do to move this forward?

floreks commented 5 months ago

It's a bit problematic to test locally, unfortunately. From what I have checked, the header is not trimmed on our side (auth container); it was able to receive headers bigger than 4 kB. Configuring the API server with a custom OIDC exchange to allow testing custom tokens is time-consuming, so I haven't had a chance to do a full end-to-end test to figure out the root cause yet. On our side, header size does not seem to be the problem.


thunko commented 5 months ago

I was facing this issue and managed to log in to kong-proxy with a regular admin-user token after I recreated it.

KevinDW-Fluxys commented 5 months ago

> I was facing this issue and managed to log in to kong-proxy with a regular admin-user token after I recreated it.

This would reduce the length of the token and thus avoid the issue. Unfortunately, when you have no control over the token length (such as with Azure-generated tokens), this does not help.

dverzolla commented 5 months ago

My use case is nginx proxy_pass to Kong from the default Helm installation. When I tried passing the token using the web input, I was getting "Invalid Token". Adding proxy_set_header Authorization "Bearer xxx" to the nginx config made it work.
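For anyone wanting to replicate that workaround, a sketch of the nginx side (server name, upstream name, and token are placeholders; large_client_header_buffers is nginx's own limit on long request headers, defaulting to 4 buffers of 8 kB):

```nginx
server {
    listen 443 ssl;
    server_name dashboard.example.com;

    # Raise nginx's own request-header buffers in case the token is long.
    large_client_header_buffers 4 16k;

    location / {
        # Forward everything to the Kong proxy in front of the dashboard.
        proxy_pass http://kong-proxy;

        # Workaround from above: inject the token at the proxy,
        # so the browser never has to send it.
        proxy_set_header Authorization "Bearer xxx";
    }
}
```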

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

jrabbit commented 1 month ago

/remove-lifecycle rotten

jrabbit commented 1 month ago

This is still a show stopper for many install targets.

ToonTijtgat2 commented 1 month ago

Agreed, we are still not able to update because of this issue.