You would look at the $KUBECONFIG env variable. It has to be set to your Kubernetes config (/home/user/.kube/config), so if you get an error right away, the issue is probably there.
If you use it for a while and then get an error, that would probably be a bug.
You would debug it the same way: print out the $KUBECONFIG env variable, it should look something like this:
/home/ramilito/.kube/kubesess/cache/CLUSTERNAME:/home/ramilito/.kube/config
Take a look at that added cache file; the error would be in it.
> echo $KUBECONFIG
/Users/kostyay/.kube/kubesess/cache/XXXX:/Users/kostyay/.kube/config
> cat -p /Users/kostyay/.kube/kubesess/cache/XXX
kind: Config
apiVersion: v1
current-context: XXXX
kind: Config
apiVersion: v1
current-context: XXXX
contexts:
- context:
    namespace: dkron
    cluster: XXXX
    user: XXXX
  name: XXXX
Does the config look fine? It seems like it contains duplicate lines.
I tried removing the duplicate lines and it didn't solve the issue.
I might add that the cluster names are aliased to longer names... perhaps that is related?
That's a good addition... yeah, I think that could be it. I'll try fixing it tonight after work.
This one was a pain; I should have seen it coming though. Just as you said, the cause was the alias. I was using only the name to build the kubeconfig to be merged, because it was easier:
.args(["config", "get-contexts", "-o", "name"])
but that only returns the context name (which is the alias, not the underlying cluster), so I tried a different variant using
kubectl config view -o jsonpath='{$.contexts[*]}'
but it was super slow (only about 3 times as fast as kubectx instead of 80 times), so I went around kubectl completely.
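Roughly the idea, as a sketch rather than the actual kubesess code (the struct names, serde_yaml usage, and paths here are my own assumptions): parse the kubeconfig directly and look the context up by its (possibly aliased) name, so the real cluster behind the alias can still be recovered.

// Sketch only, not the real implementation: resolve an aliased context to its cluster
// by reading the kubeconfig directly instead of shelling out to kubectl.
use serde::Deserialize;
use std::fs;

#[derive(Deserialize)]
struct KubeConfig {
    contexts: Vec<NamedContext>,
}

#[derive(Deserialize)]
struct NamedContext {
    name: String,
    context: ContextDetails,
}

#[derive(Deserialize)]
struct ContextDetails {
    cluster: String,
}

fn cluster_for_context(config_path: &str, context_name: &str) -> Option<String> {
    let raw = fs::read_to_string(config_path).ok()?;
    let cfg: KubeConfig = serde_yaml::from_str(&raw).ok()?;
    cfg.contexts
        .into_iter()
        .find(|c| c.name == context_name) // match on the (possibly aliased) context name
        .map(|c| c.context.cluster)       // return the real cluster it points at
}

fn main() {
    // Example usage with a hypothetical path and aliased context name.
    if let Some(cluster) = cluster_for_context("/home/user/.kube/config", "my-alias") {
        println!("context resolves to cluster: {cluster}");
    }
}

Skipping the kubectl subprocess entirely is also what keeps the context switch fast.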
This should work, but it's 2 am here so I might have missed something :( I'll be bold and close this issue as fixed, but if you find anything wrong just reopen it!
Thanks for the quick fix, I'll report back if I hit any issues.
Describe the bug
I seem to be getting this error trying to switch namespaces/contexts once in a while. The first time, I deleted the cache and that fixed it; now I'm getting it on one of the environments I work on. How can I debug the issue?
I don't even have any clusters on localhost:8080, so I'm not sure what it wants.