Jalmeida1994 opened this issue 2 years ago
One more thing: removing the user from the context and manually getting the token by running:
$ kubectl oidc-login get-token \
--oidc-issuer-url=<<https://cluster_url/oidc>> \
--oidc-client-id=<<kubelogin.url>> \
--oidc-client-secret=<<secret>> \
--oidc-extra-scope=email
{"kind":"ExecCredential","apiVersion":"client.authentication.k8s.io/v1beta1","spec":{"interactive":false},"status":{"expirationTimestamp":"2022-01-22T17:04:38Z","token":"<<TOKEN>>"}}
And using the returned token in the kubectl command works as intended:
$ kubectl --token=<<TOKEN>> get ns
I0122 10:36:29.203061 63833 versioner.go:58] the server has asked for the client to provide credentials
NAME STATUS AGE
<<--outputs as intended-->>
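For reference, the exec-based user entry that kubelogin's setup normally wires into the kubeconfig looks roughly like the following sketch (the entry name is illustrative and the placeholders reuse the values from the command above; this is not the poster's exact config). The `command: kubectl` lookup via PATH is exactly where a shim can get picked up instead of the expected binary:

```yaml
users:
- name: oidc-user            # name is illustrative
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      # Resolved via PATH -- with Rancher Desktop's ~/.rd/bin first,
      # this finds the kuberlr shim instead of a plain kubectl.
      command: kubectl
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=<<https://cluster_url/oidc>>
      - --oidc-client-id=<<kubelogin.url>>
      - --oidc-client-secret=<<secret>>
      - --oidc-extra-scope=email
```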
Hey everyone, so the issue keeps getting weirder.
I had installed RD via Homebrew, which as you know forces us to delete binaries (e.g. kubectl) in order for the cask to manage their lifecycle, including their symbolic links.
So I uninstalled RD, installed the binaries (kubectl being the one in question), installed the kubelogin plugin, and then installed RD via the .dmg from the Releases page. And it worked again.
But now I'm even more stumped: is the kubectl binary that is run by the kubeconfig file not the same as the one installed on our machines? Because I installed the plugin, I could manually get the token, but whenever I tried to use it via the kubeconfig it failed. And now that I'm using my own kubectl, it works.
I'm sorry if this is a stupid issue, but I don't really know what is happening, haha.
Well, it's fixed for now, by installing my own kubectl and installing RD from the Releases page.
I'll keep this open if anyone wants to try to help me understand this.
I'm running into exactly the same issue. What I've found is that if you're patient and wait for long enough (in my case it's around 10 minutes) you eventually get a response.
I had the same issue when I was trying to install kubelogin using brew, and I fixed it by installing the plugin using krew.
How do you edit the kube api? I've been looking but the answers I come across seem to suggest that I need to edit a config before the server starts.
> I had the same issue when I was trying to install kubelogin using brew, and I fixed it by installing the plugin using krew.

Unfortunately that didn't fix it for me.
Just spent a lot of time debugging this issue on the kubelogin side until I finally understood that kubectl is being used from RD rather than from my Homebrew installation. Removing
export PATH="/Users/yafanasiev/.rd/bin:$PATH"
from my .zshrc and instead loading the kubectl binary from Homebrew fixes the issue. Is there anything specific about the kubectl binary RD provides? I would be happy to assist in any way.
We are hitting this as well; it seems something about the kubectl shipped with rancher-desktop breaks when using kubelogin to auto-open a browser for an auth flow. After about 3 minutes it finishes, however.
I1216 17:38:38.482258 56133 versioner.go:58] Get "https://${internal-cluster-url}/version?timeout=5s": getting credentials: exec: fork/exec /Users/joshuabranham/.rd/bin/kubectl: resource temporarily unavailable
Then eventually we see, repeatedly:
I1216 17:40:08.836792 54793 versioner.go:56] Remote kubernetes server unreachable
Also just had this issue. It is specific to using kuberlr. Rancher aliases kubectl to kuberlr and then sets itself as the first entry in the PATH. This breaks any OIDC-based clusters. If you do a which kubectl and remove the single alias that rancher-desktop puts in place, then everything will work as expected, aside from switching kubectl versions. The root issue is in the kuberlr they are aliasing, but I also don't think they should step on any predefined kubectl configs like that by adding a new entry to the beginning of the PATH.
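The PATH shadowing described above is easy to demonstrate with two stub scripts (the /tmp directories and stub contents below are illustrative only, not real Rancher Desktop or Homebrew paths):

```shell
#!/bin/sh
# PATH precedence demo: whichever directory appears first in PATH wins.
# Rancher Desktop prepends ~/.rd/bin, so its kuberlr shim shadows a
# Homebrew-installed kubectl; prepending another directory undoes that.

mkdir -p /tmp/rd-bin /tmp/brew-bin
printf '#!/bin/sh\necho kuberlr-shim\n' > /tmp/rd-bin/kubectl
printf '#!/bin/sh\necho homebrew-kubectl\n' > /tmp/brew-bin/kubectl
chmod +x /tmp/rd-bin/kubectl /tmp/brew-bin/kubectl

PATH="/tmp/rd-bin:$PATH"      # what Rancher Desktop's PATH entry does
command -v kubectl            # -> /tmp/rd-bin/kubectl (the shim wins)

PATH="/tmp/brew-bin:$PATH"    # put the "Homebrew" copy in front instead
command -v kubectl            # -> /tmp/brew-bin/kubectl
kubectl                       # prints: homebrew-kubectl
```

Deleting ~/.rd/bin/kubectl works too, but reordering PATH (or removing the export from .zshrc, as mentioned above) is reversible.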
Until this problem is fixed, I have changed the entrypoint for the exec login: instead of the kubectl binary, it uses the krew plugin directly. Posting it here in case anyone else wants to use it as well.
user:
  exec:
    apiVersion: client.authentication.k8s.io/v1beta1
    command: ../.krew/bin/kubectl-oidc_login
    args:
    - get-token
    - --oidc-issuer-url=...
Running
rm ~/.rd/bin/kubectl
makes it possible to use kubectl with Rancher Desktop. It would be wonderful if someone could fix this issue.
Still have this problem. Could I help the Rancher team by providing some extra info? What do you need?
@gvlekke It's not a Rancher issue but a kuberlr one, https://github.com/flavio/kuberlr/issues/37, as mentioned above.
Ah thanks, I didn't see that part. So Rancher uses kuberlr; that explains why, if I remove kubectl (rm ~/.rd/bin/kubectl) and download kubectl through brew, I can access the remote k3s.
Rancher Desktop Version
0.7.1
Rancher Desktop K8s Version
1.23.1, 1.20.14
What operating system are you using?
macOS
Operating System / Build Version
macOS Big Sur v11.6.2
What CPU architecture are you using?
arm64 (Apple Silicon)
Linux only: what package format did you use to install Rancher Desktop?
No response
Windows User Only
No response
Actual Behavior
I'm using a context in a kube config file with the user as follows:
Every time I try to use the kubectl command it prints out the error, over and over again:
Until it prints out:
Finally it prints out endlessly:
Unfortunately the Rancher Desktop logs don't output anything, even in debug mode.
Steps to Reproduce
1. Edit one of the contexts in the Kubernetes config file to use the oidc-login command, like this:
2. Run kubectl, e.g. kubectl get ns;
Result
Expected Behavior
Connecting to the cluster.
Additional Information
The cluster I'm trying to connect to is also behind a VPN. I don't know if this is in line with #722, but every issue about company VPNs is on the Windows platform, so I decided to submit my own here. I also tried it on a Mac with an Intel processor and the result is the same. Thanks for the support.