Closed by awm-achi 2 years ago
Hmm I suspect you want to use the method that changes the global state of your kubeconfig and not just the session?
Could you try with these aliases instead?
```shell
alias kubectx='kubesess default-context'
alias kubens='kubesess default-namespace'
```
Those should be drop-in replacements for kubectx and kubens.
Well, I'd ideally like to retain separation between terminal sessions; I'm not looking to replicate kubectx and kubens, I just use those names for familiarity.
Oh my bad, the aliases threw me off.
So you want to have multiple sessions for the same context but different namespaces?
At the moment that isn't working :(. I have been thinking about implementing it, and I might do it if I come up with a good solution for it.
What exists today is that you can have one session using the base config and one session using the kubesess session file, and those can point at the same context but different namespaces.
But every subsequent kubesess change to the same context will overwrite the original one.
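A rough sketch of that split, assuming one terminal reads only the base config while another layers a per-session file on top (the session-file path here is made up for illustration, not kubesess's real layout):

```shell
# Sketch only: the session-file name below is hypothetical.
# Terminal 1 reads the base config alone:
unset KUBECONFIG                  # clients fall back to ~/.kube/config

# Terminal 2 layers a per-session file over the base config; merging
# clients take the first file that sets a value, so the session file
# can pin a different namespace for the same context:
export KUBECONFIG="$HOME/.kube/kubesess-session.yaml:$HOME/.kube/config"
echo "$KUBECONFIG"
```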
Just to double-check, the context per session is working as expected?
The context per session works, but the namespace set for kubesess sessions doesn't work for tools like helm and skupper; it does seem to work for kubectl.
So if I run `kubectl get pods`, it displays the pods from the correct namespace, but if I run `helm upgrade`, it says "release not found" because it's trying to upgrade in a different namespace.
I see, I'm making the wrong assumptions all the time here.
If that's the case, then helm/skupper might be reading the base file directly and not using the KUBECONFIG env? Or, if they do use the env, they might not support multiple paths there (kubectl-style merging)?
Looked up helm, and it's supposed to use the KUBECONFIG env; it also seems to support merging, at least from Helm 3. What helm version are you on?
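For reference, kubectl (and Helm 3+) treat KUBECONFIG as a colon-separated list of files and merge them, with earlier files winning on conflicts. A quick way to sketch what a merging client would load (the session-file path is illustrative, not kubesess's actual path):

```shell
# KUBECONFIG may hold several colon-separated paths; merging clients
# combine them, and the first file to set a value wins.
export KUBECONFIG="$HOME/.kube/kubesess-session.yaml:$HOME/.kube/config"

# List each file a merging client would consider (bash):
IFS=':' read -ra cfg_paths <<< "$KUBECONFIG"
for p in "${cfg_paths[@]}"; do
  echo "would merge: $p"
done
```

A tool that only honors a single path in KUBECONFIG, or that ignores the variable entirely, would explain the behaviour you're seeing.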
```
version.BuildInfo{Version:"v3.9.4", GitCommit:"dbc6d8e20fe1d58d50e6ed30f09a04a77e4c68db", GitTreeState:"clean", GoVersion:"go1.19"}
```
I'll have to confirm how skupper is getting the kubeconfig address
Tried replicating your issue with helm but I'm not managing to do so.
Running `helm install jaeger --generate-name` installs to the current namespace; I tried it on three different namespaces.
Also tried the `helm upgrade --install jaeger jaegertracing/jaeger` command, using:

```
version.BuildInfo{Version:"v3.9.2", GitCommit:"1addefbfe665c350f4daf868a9adc5600cc064fd", GitTreeState:"clean", GoVersion:"go1.17.12"}
```
hmmm that's strange, I just tried again:

```
❯ kubectl create ns test
namespace/test created
❯ kubens
· test
❯ helm install jaeger jaegertracing/jaeger
NAME: jaeger
LAST DEPLOYED: Sat Sep 17 12:59:18 2022
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
###################################################################
###################################################################
You can log into the Jaeger Query UI here:

export POD_NAME=$(kubectl get pods --namespace monitoring -l "app.kubernetes.io/instance=jaeger,app.kubernetes.io/component=query" -o jsonpath="{.items[0].metadata.name}")
echo http://127.0.0.1:8080/
kubectl port-forward --namespace monitoring $POD_NAME 8080:16686
```
@awm-achi Not sure if that did what you wanted it to do, tbh: it looks like you picked namespace `test`, but it deployed to `monitoring`, which I assume is set in your base kubeconfig file.
I think this compromise would work best for you:
```shell
alias kubectx='export KUBECONFIG=$(kubesess context)'
alias kubens='export KUBECONFIG=$(kubesess default-namespace)'
```
That will keep the context per session and make namespace choice more reliable.
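The reason `export KUBECONFIG=...` stays session-scoped is plain process semantics: an environment variable exported in one shell never leaks into a sibling shell. A tiny demo, with made-up file names and a subshell standing in for a second terminal:

```shell
# Each terminal is its own process, so exporting KUBECONFIG in one
# shell cannot affect another. The paths below are hypothetical.
export KUBECONFIG="/tmp/session-a.yaml"    # this terminal's session
( export KUBECONFIG="/tmp/session-b.yaml"  # a "second terminal"
  echo "inner: $KUBECONFIG" )              # prints the subshell's value
echo "outer: $KUBECONFIG"                  # still session-a, unaffected
```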
I'll close this for now but don't hesitate to reopen if you think of something else!
Oh yes, sorry for the confusion, that is not what I wanted to happen; I just wanted to show what was happening on my end. I'll use default-namespace for now, but I'd like to be able to use session-specific namespaces if possible. Are there any other debugging steps I can try?
Quick update: I tried using default-namespace, but now when I switch namespaces, it also switches the context to whatever was last selected with kubectx.
Yeah, you're correct: there is some unintentional behaviour with default-context. Let me see if I can fix it right away.
@awm-achi I think I fixed that; it's building at the moment, but it should make default-namespace work better for you! I'll add a to-do for making namespace-per-session within the same context work in the future; I just need to come up with a good way of handling that.
@awm-achi Did the latest release work better?
I upgraded through brew to 1.2.8 and still have the same problem. If I have two clusters A and B and namespaces 1 and 2, with two sessions open on A1 and B1, then if I switch B1 to B2, that session becomes A1 instead (after kubectx has been run in both sessions first).
Oh right, forgot, we don't change the env var on default-namespace!
Could you change your aliases to these?

```shell
alias kubectx='export KUBECONFIG=$(kubesess context)'
alias kubens='kubesess default-namespace'
```
That worked! Helm and skupper now work as expected too, and the namespace now seems to be session-specific.
Oh, I see: the namespace isn't session-specific if two sessions are using the same context. This is good for my use case, though. Thanks!
Sweet! Yeah, it's a bit confusing; the idea is that you might want to change namespace just for a while, but all new sessions should use the default one.
**Describe the bug**
skupper and helm use the namespace set in .kube/config when switching namespace with kubens.

**To Reproduce**
Using these aliases:

```shell
alias kubectx='export KUBECONFIG=$(kubesess context)'
alias kubens='export KUBECONFIG=$(kubesess namespace)'
```

run:

```shell
kubens
skupper cluster init
helm upgrade
```

**Expected behavior**
Inits skupper in the selected namespace; upgrades a helm chart in the selected namespace.

**Desktop (please complete the following information):**
macOS, M1 MacBook Pro, iTerm2, zsh