KevinDW-Fluxys opened this issue 6 months ago
Unfortunately, that sounds like an issue with token size. Most web servers support request header sizes of up to 4-8 kB. We do not have any logic to detect token length. We could add that, but it would still not solve your issue.
Does kubectl --token ... work with such a big token?
Dear @floreks, I'm a colleague of Kevin.
I'm able to use the token with kubectl --token, so that does not seem to be the problem. If I check the token of the "not working" environment in https://www.javainuse.com/bytesize, it says it's 4.1 kB; if I check the token of the "working" environment, it says 2.08 kB.
Could 4 kB maybe be the limit or something?
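As a quick sanity check, the byte size that the linked site reports can also be computed locally. A minimal sketch, using a placeholder string in place of the real JWT:

```shell
# Placeholder value; substitute the real JWT to get its actual size in bytes
TOKEN="header.payload.signature"
printf '%s' "$TOKEN" | wc -c
```

The Authorization header itself adds a few more bytes for the "Bearer " prefix on top of the raw token size.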
Thanks for checking
Are there test commands we can try to run in the pod to see if the header is added correctly in the response? Can we enable extra logging or something?
Thanks Toon Tijtgat
I think that Kong by default supports header sizes up to 8 kB; they are using nginx underneath. Our UI -> API most probably has a 4 kB limit currently. I'd have to debug it on our side to make sure where it gets terminated. If you can configure the token content and get rid of unused information, that should make it work for now. I know that some providers include lots of unnecessary information that is not required by the Kubernetes API server.
Hi @floreks
We are using Azure kubelogin, which does not allow configuring the token content as far as I know.
I have taken a quick glance at the code with my limited go knowledge.
If it is indeed the UI -> API, could it be that we need to specify a MaxHeaderBytes
in this function?
https://github.com/kubernetes/dashboard/blob/1d4897cd8d1c4af8747906c87f11acbb598814b9/modules/api/main.go#L99
AFAIR Azure allows configuring JWT token content: groups, audience, etc. With Azure it is usually an issue of configuring too many groups, so that all of them are embedded into the token, not only the ones actually used.
Regarding code changes, the max header size would need to be checked and increased for both the API and Auth modules, if that's the only issue.
> AFAIR Azure allows configuring JWT token content: groups, audience, etc. With Azure it is usually an issue of configuring too many groups, so that all of them are embedded into the token, not only the ones actually used.
I can indeed see that there are many groups included in the token, but unfortunately I don't find a way to configure the response. We are using kubelogin, which does not have the option to do so, but if you know of another way that leverages Azure authentication to generate the token, it might help us to (temporarily) overcome this issue.
> Regarding code changes, the max header size would need to be checked and increased for both the API and Auth modules, if that's the only issue.
Given the behavior it does look like that would be the issue, but the only way to be sure is to test it of course. What would be the best course of action to get this tested?
@floreks Thanks for finding the potential issue. Would it be possible to fix the issue with a patch?
Thanks for checking
@floreks Did you get the chance to look at this? Or what can we do to make this move forward?
It's a bit problematic to test locally, unfortunately. From what I have checked, the header is not trimmed on our side (auth container); it was able to receive headers bigger than 4 kB. Configuring the API server with a custom OIDC exchange to allow testing custom tokens is time-consuming. I didn't get a chance to do a full end-to-end test to figure out the root cause yet. On our side, header size does not seem to be the problem.
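One way to probe where an oversized header gets rejected, without configuring a real OIDC exchange, is to send a synthetic token of a known size at each hop. A rough sketch; the host, port, and path below are placeholders for whatever endpoint is being tested:

```shell
# Build a ~6 kB dummy token, larger than the suspected 4 kB limit
BIG=$(head -c 6000 /dev/zero | tr '\0' 'a')
echo "${#BIG}"

# Then probe each hop (UI, Kong, auth container) with it, e.g.:
# curl -sk -o /dev/null -w '%{http_code}\n' \
#   -H "Authorization: Bearer ${BIG}" https://localhost:8443/api/v1/me
```

Comparing the status codes returned by each hop for a small versus an oversized dummy token should show which component terminates the request.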
I was facing this issue and managed to log in to kong-proxy with a regular admin-user token after I recreated it.
> I was facing this issue and managed to log in to kong-proxy with a regular admin-user token after I recreated it.
This would reduce the length of the token and as such avoid the issue. Unfortunately, when you have no control over the token length (such as with Azure-generated tokens), this does not help.
My use case is: nginx with proxy_pass to Kong, from the default Helm installation.
When I tried passing the token using the web input, I was getting "Invalid Token".
I added proxy_set_header Authorization "Bearer xxx" to the nginx config and it worked.
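For context, the workaround above amounts to injecting the header at the proxy instead of the web UI. A rough sketch of the nginx side, assuming a kong-proxy upstream; the upstream name and token value are placeholders, and large_client_header_buffers only matters if the token also exceeds nginx's own limit:

```nginx
server {
    listen 443 ssl;

    # Raise nginx's own request header limit (defaults allow up to 8k per header)
    large_client_header_buffers 4 16k;

    location / {
        proxy_pass http://kong-proxy;
        # Workaround: hardcode the token at the proxy instead of the web UI
        proxy_set_header Authorization "Bearer xxx";
    }
}
```

Note this bypasses the login form entirely, so it is only a stopgap for a single fixed identity, not a fix for the header-size limit itself.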
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After a further period of inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After a further period of inactivity once lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After a further period of inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After a further period of inactivity once lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
This is still a show stopper for many install targets.
Agreed, we are still not able to update because of this issue.
What happened?
When trying to log in using a bearer token, the page does not respond. We can find this in the logs of the auth pod:
In the kong-proxy we find this:
And in the devtools I can see that the 500 response from /api/v1/me is this:
The token is correct, because it works for authenticating directly. Also, when I just type some random characters, the UI returns a clear error, and in devtools I can see it is returned from /api/v1/login instead.
What did you expect to happen?
The page responds and you are logged in (or you get an error message about invalid credentials)
How can we reproduce it (as minimally and precisely as possible)?
It is unclear; we have 2 environments where it works and 2 others where it doesn't. The environments are programmatically deployed, and we can see no difference in configuration between the clusters. The only difference we find is that the bearer token is much longer in the environment where it doesn't work, so our best guess is that it has to do with this.
Anything else we need to know?
We are now running behind an Istio VirtualService that redirects to Kong Proxy, but that should not be related, as we tried running Istio directly without Kong. We also get the same result when using a port-forward (on the Kong proxy; port-forward does not seem to work otherwise since the pods have been split up).
What browsers are you seeing the problem on?
Chrome, Microsoft Edge, Firefox
Kubernetes Dashboard version
7.1.1 (Helm)
Kubernetes version
1.28.3
Dev environment
No response