Open zWaR opened 3 years ago
@zWaR Thank you for your report! I like the idea, but I think doing this would prevent you from switching contexts while staying in k9s. It might be worth looking at direnv and setting up your kubeconfig that way, based on which clusters you need to target for a specific task?
Following this issue with great interest. So allow me to suggest merging kubeconfig files, e.g. see tip 2. Or yes, direnv is a great suggestion. However, it would require me to exit k9s > cd to another folder > start k9s again. It would be nice if k9s integrated something like the Kubeswitch tool for changing cluster context.
Thank you, and a great day to you all.
@LarsBingBong I really like your idea of merging the Kubeconfig files and then adding a feature to switch between them from within k9s. This would be a step further from the existing PR https://github.com/derailed/k9s/pull/1003, which still requires a restart in order to choose between kubeconfigs. Let me look into this and see how bad it'd be to implement it.
@zWaR thank you, that sounds great. Looking forward to seeing your implementation do wonders.
👍🏿
I have solved this problem locally using the following option:

```shell
export KUBECONFIG=`ls -p /Users/my-name/.kube/config* | tr '\n' ':'`  # note the * after "config"
# The kubeconfig files need not be in the .kube folder; they can be anywhere.
# This export lives in my ~/.zshrc, so every terminal picks it up from the start.
```
I named all the kubeconfig YAML files that I download from multiple clusters as `config-abc.yaml`. Now in k9s, when I type `:contexts`, it shows the list of all the clusters I want to connect to. I was even too lazy to type `:contexts`, so I used the k9s alias concept to create `:qq`, which makes it easy to switch between contexts. This whole concept is based on how `kubectl` merges the files listed in the `$KUBECONFIG` variable:
```shell
export KUBECONFIG=`ls -p /Users/my-name/.kube/config* | tr '\n' ':'`
```
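For anyone wanting to check what the colon-joined list looks like before pointing kubectl at it, here is a minimal, self-contained sketch (the `/tmp/kube-demo` paths are made up for the demo, not my real setup):

```shell
# Build a colon-separated KUBECONFIG from every config* file in a
# directory, the same way as above (demo directory, not a real ~/.kube).
mkdir -p /tmp/kube-demo
touch /tmp/kube-demo/config-abc.yaml /tmp/kube-demo/config-def.yaml

KUBECONFIG=$(ls -p /tmp/kube-demo/config* | tr '\n' ':')
echo "$KUBECONFIG"
# -> /tmp/kube-demo/config-abc.yaml:/tmp/kube-demo/config-def.yaml:
```

The trailing colon produces an empty entry in the list, which kubectl simply skips, so it is harmless in practice.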
Hey, that's a really cool approach! I modified it a little bit, because my kubeconfig files don't all start with the `config` prefix:
```shell
export KUBECONFIG=$(find ~/.kube -maxdepth 1 -type f | tr '\n' ':')
```
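A further tweak, in case the directory also holds unrelated files: filter on content rather than filename. This is just a sketch against a demo directory; in practice you would point it at `~/.kube` and pick a marker your configs actually contain (kubeconfigs normally carry `kind: Config`):

```shell
# Keep only files whose content looks like a kubeconfig (demo directory).
mkdir -p /tmp/kube-filter
printf 'apiVersion: v1\nkind: Config\n' > /tmp/kube-filter/cluster-a.yaml
printf 'just some notes\n' > /tmp/kube-filter/README.txt

# grep -l prints only the names of the matching files.
export KUBECONFIG=$(grep -l 'kind: Config' /tmp/kube-filter/* | tr '\n' ':')
echo "$KUBECONFIG"
# -> /tmp/kube-filter/cluster-a.yaml:
```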
By the way, this leads to interesting results if the clusters don't all have different names in their configs :)
Yup, I don't like that either. So when I hop onto a new k8s cluster, the first thing I do is download and clean up the config file. Almost all admins are lazy and keep `default` as the cluster name. Maybe another utility is in order.
```shell
export KUBECONFIG=$(find ~/.kube -maxdepth 1 -type f | tr '\n' ':')
```
So now that we have this, can we get it implemented in an OS-independent way inside k9s? That's all that's being asked, as far as I can tell. Instead of providing a path to a single file, the user would provide a path to a folder with a glob ending; k9s would then scan all the files and, if they are valid kubeconfigs, concatenate them just the way kubectl allows with the method showcased above.

The mentioned "interesting" results when more than one cluster has the same name could be handled with simple "last one wins" or "first one wins" logic, ideally still showing the conflicting clusters/contexts in the list but disabling them ("grayed out", with a warning explaining the issue when highlighted) so they cannot be selected by accident. That would help avoid working on the wrong cluster; at least that would be my preference from a UX standpoint.
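To see where such name clashes come from before any first/last-one-wins logic kicks in, a naive line-based scan can flag context names that occur in more than one file. A rough sketch with made-up demo files (a real implementation would parse the YAML properly):

```shell
# Flag context names appearing in more than one kubeconfig (demo files).
mkdir -p /tmp/kube-dup
printf 'contexts:\n- name: default\n' > /tmp/kube-dup/a.yaml
printf 'contexts:\n- name: default\n' > /tmp/kube-dup/b.yaml

# -h drops the filenames so identical lines from different files compare
# equal; uniq -d then prints each duplicated line once.
grep -h '^- name:' /tmp/kube-dup/*.yaml | sort | uniq -d
# -> - name: default
```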
> I have solved this problem locally using the following option:
> export KUBECONFIG=`ls -p /Users/my-name/.kube/config* | tr '\n' ':'`
My twist on that is directory-driven configuration using https://direnv.net. My Git repo with AKS-based clusters has this structure:

```
clusters/dev
clusters/stg
clusters/prd
```

and there is an `.envrc` file in each of the directories. The content of the `.envrc` looks like this:
```shell
# Kubernetes
export KUBECONFIG=${PWD}/.kubeconfig
test -f "${KUBECONFIG}" && rm "${KUBECONFIG}"
az aks get-credentials --admin --resource-group ${AKS_CLUSTER_GROUP} --name ${AKS_CLUSTER} --file ${KUBECONFIG}
kubectl config set current-context "aks-${AKS_OWNER}-${AKS_LOCATION}-${AKS_ENVIRONMENT}-aks-admin"
```
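One note for anyone copying this: direnv only kicks in once it is hooked into the shell and the `.envrc` is approved. This is standard direnv setup, not specific to this repo:

```shell
# One-time direnv setup, assuming zsh (bash is analogous):
eval "$(direnv hook zsh)"   # usually placed at the end of ~/.zshrc
direnv allow clusters/prd   # approve that directory's .envrc once
```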
Then I do e.g.

```shell
cd clusters/prd
k9s
```

and I get the desired kubeconfig generated and used by k9s. Additionally, my command-line prompt (using Starship or powerline-go) displays the current Kubernetes context. This may seem like a primitive solution, but I like the `cd`-driven switching of clusters :)
**Is your feature request related to a problem? Please describe.** I am using k9s regularly for work and typically work with different clusters. At the moment I'm dealing with 11 clusters, and the count will probably only increase in the future. k9s does not provide any feature that makes it easier to work with multiple config files. What I do now is use the provided `--kubeconfig` flag, with tab completion for the filenames passed to it, but that's tedious.

**Describe the solution you'd like** It would be awesome if k9s would list all my k8s configs and let me pick which one I want to use for the current execution. The selection does not need to be fancy; something simple would suffice and already improve the user experience.

**Describe alternatives you've considered** I've been using `--kubeconfig` + tab completion on the filenames, but that's tedious and cumbersome.

**Additional context** This section is intentionally left blank, since I do not have any additional context to add to this request. It sounds simple enough to me, but please let me know if you'd like additional information or have questions about it. I'd be happy to participate in a further discussion.