kubernetes / kubectl

Issue tracker and mirror of kubectl code
Apache License 2.0

Feature Request: Support $HOME/.kube/config.d/* #569

Open omeid opened 5 years ago

omeid commented 5 years ago

Greetings!

kubectl already allows defining multiple clusters and users in $HOME/.kube/config, but editing this file by hand, or even with tools, is a bit cumbersome.

If kubectl supported loading multiple config files from $HOME/.kube/config.d/ it would make dealing with different cluster configurations much easier.

For example,

  1. kubespray already generates a config file, but it is still not easy to use if you already have a config file set up (kubernetes-sigs/kubespray/pull/1647).

  2. aws eks cli already mutates the config file, but dealing with multiple clusters or updating this information requires way more mental overhead than it should.

I would like to hear some feedback on this, and whether a pull request would be considered.

Many Thanks!

weibeld commented 5 years ago

Can't you do the same by having multiple config files in arbitrary locations and listing them in the KUBECONFIG environment variable?
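
For example (a sketch; cluster-a.yaml and cluster-b.yaml stand in for whatever files your tools generate):

export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/cluster-a.yaml:$HOME/.kube/cluster-b.yaml"
kubectl config get-contexts    # lists the contexts merged from all files in the list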

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

omeid commented 5 years ago

/remove-lifecycle stale

dwmkerr commented 5 years ago

I'd be keen on this too. If there are others who would like it, then I'd be happy to put some time in for a PR.

omeid commented 5 years ago

@weibeld Sure, but that wouldn't be the standard way, and it would be hard to convince tool vendors to follow it; it is also not exactly a simple or easy approach.

weibeld commented 5 years ago

Sure, a $HOME/.kube/config.d/ would probably be more convenient. I just think the Kubernetes maintainers might not want to introduce it, because they created the KUBECONFIG variable for the purpose of having multiple separate kubeconfig files.

What do you mean by "not the standard way"?

omeid commented 5 years ago

I think KUBECONFIG has its own valid use, as a means of overriding the default or active config/context.

However, if kubectl contexts worked with $HOME/.kube/config.d/ files, tools (terraform, eks cli, kubespray, whatnot!) would emit their config files there with an appropriate name instead of trying to mutate $HOME/.kube/config (eks cli, atm), or worse, overwrite it, or possibly leave it up to the user, which makes for a cumbersome UX (what kubespray does atm).

What I mean by the standard way is that asking a tool maintainer to support kubectl contexts is much more convincing than asking them to support some ad-hoc workflow.

Hope that makes it clear :)

weibeld commented 5 years ago

Is kubectl-switch a specific kubectl plugin or command?

In general, I wouldn't edit a kubeconfig file by hand, but use the kubectl config sub-commands. And for common tasks like changing the current context, or changing the namespace of the current context, I would use a tool like kubectx or something like these kubectl-ctx and kubectl-ns plugins.
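
For example (the context and namespace names here are placeholders):

kubectl config get-contexts                                     # list contexts from the merged config
kubectl config use-context my-cluster                           # switch the current context
kubectl config set-context --current --namespace=my-namespace   # change the namespace of the current context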

omeid commented 5 years ago

Is kubectl-switch a specific kubectl plugin or command?

By kubectl-switch, I meant kubectl config set-context.

dwmkerr commented 5 years ago

@weibeld I would tend to automate using kubectl config too, but for the sake of organisation I can still imagine users who would like to arrange their own config files in a more structured manner, a bit like sources.list.d/ for apt. Even with kubectl config, being able to specify that the config is written to a file inside the config.d directory might be nice, meaning that you can automate the addition of configuration but still ensure it ends up in a well-organised location.

I have a few use cases for this, one is something like:

https://github.com/dwmkerr/terraform-aws-openshift

Where I'd like to run make infra to create the infrastructure, then make openshift to create the OpenShift setup, and then ideally run make kube-config or something to create the config file. But I'd love to keep this config separate from my main 'working' config.

weibeld commented 5 years ago

@dwmkerr It could indeed be more user-friendly to replace the KUBECONFIG variable with a .kube/config.d directory, so that the effective configuration would always be the merge of the default .kube/config file and all files in .kube/config.d (if present). The KUBECONFIG variable wouldn't be needed anymore. Or does someone see a use case where this approach is inferior to having the KUBECONFIG variable?

In the end, both approaches let you do the same thing, but the directory approach frees you from having to deal with environment variables (which can easily be set and unset by accident).

Currently, in your case, you would need to have your make kube-config create a .kube/aws-openshift config file for your cluster, and then set KUBECONFIG=~/.kube/config:~/.kube/aws-openshift. To make it persistent, you would have to add this to your ~/.bashrc, and then you have to hope that KUBECONFIG never gets changed by accident. So, yeah, just dropping the file into ~/.kube/config.d would definitely be easier. If you want to do a PR, it would certainly be a useful feature.
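
Roughly, that workaround looks like this (a sketch, reusing the aws-openshift file name from the example above):

# in ~/.bashrc, so the extra file is always part of the merged config
export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/aws-openshift"

# verify that the cluster from the extra file shows up
kubectl config get-contexts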

dwmkerr commented 5 years ago

Cool, I've got time this week so will aim to do it then 😄

dwmkerr commented 5 years ago

FYI I've created the design document and raised the feature request in the appropriate location (I think!) and mentioned this issue there, so I believe this issue can be closed now, as it is linked from the new feature request.

Design Document PR

https://github.com/kubernetes/kubernetes/issues/80120

omeid commented 5 years ago

I closed this issue under the impression that you had opened a pull request; now, after looking into this again, I am not sure that is the case. Is this issue not the right place to discuss whether something is up for consideration or not?

I don't understand why an issue related to kubectl needs to be opened against kubernetes or community. Can you please help me understand?

dwmkerr commented 5 years ago

Hi @omeid, to be honest I found the contributing guide quite confusing; I was following the instructions here:

https://github.com/kubernetes/community/blob/master/sig-cli/CONTRIBUTING.md

I followed the instructions, which involved creating the design doc, opening an issue on staging, etc. It's quite a complex process so far, and I'm not sure the issues are even being seen...

cblecker commented 5 years ago

@dwmkerr if you're not getting responses from sig-cli, they list ways you can escalate for attention: https://github.com/kubernetes/community/blob/master/sig-cli/CONTRIBUTING.md#general-escalation-instructions

dwmkerr commented 5 years ago

Hi @cblecker, thanks for the tips! Just pinged the group; I've tried Slack with no joy, but let's see what the group says. Really appreciate the help!

bgrant0607 commented 5 years ago

There's a lot of history here:

https://github.com/kubernetes/kubernetes/issues/9298
https://github.com/kubernetes/kubernetes/issues/10693
https://github.com/kubernetes/kubernetes/issues/20605
https://github.com/kubernetes/kubernetes/issues/30395
https://github.com/kubernetes/kubernetes/pull/38861
https://github.com/kubernetes/kubernetes/issues/46381

And it would probably need to be respected by API client libraries, not just kubectl.

seans3 commented 5 years ago

/sig cli
/sig apimachinery
/area kubectl
/kind feature

seans3 commented 5 years ago

/sig api-machinery

dwmkerr commented 5 years ago

Hi @seans3 - thanks I saw your message for the SIG meeting, I'll join in the discussion! Yes I hadn't realised just quite how much history there is, wow!

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

seans3 commented 4 years ago

/remove-lifecycle stale

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

omeid commented 4 years ago

/remove-lifecycle stale

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

omeid commented 4 years ago

/remove-lifecycle stale

Still want this feature.

BenTheElder commented 4 years ago

/lifecycle frozen

eddiezane commented 4 years ago

/priority backlog

pwittrock commented 4 years ago

Is this answer sufficient: https://stackoverflow.com/questions/46184125/how-to-merge-kubectl-config-file-with-kube-config/46184649#46184649 ?
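
If it helps, the usual one-off merge along those lines is something like this (a sketch; paths are placeholders, and you'd want to back up the original config first):

KUBECONFIG="$HOME/.kube/config:/path/to/new-cluster.yaml" kubectl config view --flatten > /tmp/merged-kubeconfig
mv /tmp/merged-kubeconfig "$HOME/.kube/config"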

BenTheElder commented 4 years ago

@pwittrock the problem with actively mutating a single file is that we have to do the locking to avoid race conditions. The locking is known to be problematic, though...

If we could support a config.d and then a separate file for just the current-context, most cluster management tools would probably manage their own kubeconfig files without fear of racing with other tools.
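
Something like this hypothetical layout (illustrative names only):

~/.kube/config                    # would keep just current-context / preferences
~/.kube/config.d/eks-prod.yaml    # clusters/users/contexts written by one tool
~/.kube/config.d/kind-dev.yaml    # clusters/users/contexts written by another tool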

see also: https://github.com/kubernetes/kubernetes/pull/92513

EDIT: racing on current-context is much less problematic than racing on the credentials etc. and possibly wiping out another cluster.

EDIT2: I know the KUBECONFIG env var is allowed to contain a list, but this also has bad UX, because then users need to ensure they set it across all their shells / environments. This is why most tools write to the default file.

pwittrock commented 4 years ago

What if we were to add a flatten command which performed the locking?

The challenge with supporting config.d is that it wouldn't be supported by plugins, controllers, etc. that didn't have the code compiled in. @ahmetb was able to bring up some cases where users could end up using different contexts or clusters between plugins, for instance because the plugin would use just the kubeconfig while kubectl would use the config.d -- e.g. the user could run kubectl get deploy to see the deployments, then run some plugin such as kubectl plugin delete-all-deploy expecting those Deployments to be deleted, but the plugin would actually end up using a different resolved kubeconfig and talk to another cluster.

A possible solution to this would be to enable the feature with a flag --use-kubeconfig-d and rely on plugins to fail parsing the flag.

Another solution could be to have the KUBECONFIG env var support globs, so you could specify KUBECONFIG=$HOME/.kube/config:$HOME/.kube/config.d/*. Would need to check and make sure the existing parsing logic would fail on this.
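
A quick way to check how current releases treat such a value, as a sketch:

KUBECONFIG="$HOME/.kube/config:$HOME/.kube/config.d/*" kubectl config view
# if the '*' entry is taken literally, it should be skipped as a non-existent file rather than expanded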

EDIT: I see the issue about KUBECONFIG env. Users may be running different versions of kubectl across different environments as well, leading to the config.d feature being inconsistently supported. If we want consistency across tools and environments, I don't think we can add new capabilities to how things are resolved.

EDIT 2: The support issue also impacts using different versions of kubectl. I know gcloud provides a dispatcher that matches the version of kubectl to a cluster. There would likely be issues if it read the config.d, then chose to dispatch to a kubectl version that didn't support config.d.

WDYT?

BenTheElder commented 4 years ago

If we kept current-context specifically in the kubeconfig file specified by the existing rules, then those tools would at worst be pointed to a nonexistent context and do nothing (since the context name would refer to one stored in some file they don't yet read)?

Most tools get the (possibly broken) locking by importing client-go.

A glob in the KUBECONFIG env var would be no less breaking than supporting a config.d? Tools would still need to be updated to read it for the problem scenario you outlined.

BenTheElder commented 4 years ago

For plugins, kubectl could pass a fully resolved KUBECONFIG list value / env based on the files discovered when calling them.

Ditto for the dispatcher.

pwittrock commented 4 years ago

This might be easier on VC

BenTheElder commented 4 years ago

Forgot to post back here: We spoke over VC and hashed it out, I intend to have a short doc ready to discuss at the next SIG CLI meeting.

omeid commented 4 years ago

@BenTheElder Can you link me to the CLI meeting please?

eddiezane commented 4 years ago

@omeid https://github.com/kubernetes/community/tree/master/sig-cli#meetings

brianpursley commented 4 years ago

One challenge with multiple configs is that the config file currently stores two types of information:

  1. The definition of clusters, users, and contexts
  2. Which context is your "current context" (what gets set when you run kubectl config use-context)

If there are multiple configs, how will kubectl interpret the current-context that is stored within multiple configs? Or if you run kubectl config use-context, where will kubectl store the current context that you switch to?

For example, consider a config file that looks like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ...
    server: ...
  name: cluster1
- cluster:
    certificate-authority-data: ...
    server: ...
  name: cluster2
contexts:
- context:
    cluster: cluster1
    user: user1
  name: context1
current-context: context1
kind: Config
preferences: {}
users:
- name: user1
  user:
    client-certificate-data: ...
    client-key-data: ...
    token: ...
- name: user2
  user:
    client-certificate-data: ...
    client-key-data: ...
    token: ...

To me, it seems like current-context and preferences should not be included in this config. They should live in a separate config file meant just for state, and there can only be one of those. Then you can have any number of configs that just contain the rest: clusters, users, and contexts, because those just define things that are available to use.

Anyway, this is something I've thought about in the past and thought it made sense to mention it here. I think kubectl will need to handle how current-context will work in a multi-config scenario.

BenTheElder commented 4 years ago

Sorry this fell by the wayside, I have not gotten to any real development lately ... swamped in reviews etc. 😞

@brianpursley this was part of our thinking as well ... only the existing location should have current-context / preferences, potentially with some validation warning / erroring if present in config.d 👍

brianpursley commented 4 years ago

@BenTheElder sorry I should have read back before I commented. It looks like the current context issue has already been brought up. This issue was mentioned today in the sig cli meeting and I wanted to add this concern, but it sounds like you guys already covered it. 👍

omeid commented 4 years ago

To me, it seems like current-context and preferences should not be included in this config. They should be in a separate config file meant just for state, and there can only be one of those.

Ideally, the other, non-state config files should also be restricted to one cluster per file, to avoid any confusion and conflicts in the future.

morlay commented 3 years ago

I use this script to load all kubeconfig files 🤣

export KUBECONFIG=$(echo $(ls ~/.kube/config.d/*) | sed 's/ /:/g')
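
A slightly more defensive variant along the same lines (a sketch; it also keeps the default ~/.kube/config and copes with an empty config.d):

KUBECONFIG="$HOME/.kube/config"
for f in "$HOME"/.kube/config.d/*; do
  [ -e "$f" ] && KUBECONFIG="$KUBECONFIG:$f"
done
export KUBECONFIG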

tedtramonte commented 3 years ago

Any news from the various SIG meetings to share? This would be a killer improvement I've been hoping to see since 2019.

eddiezane commented 2 years ago

We have discussed this in a few different shapes. This comment sums up the issue with a change like this.

https://github.com/kubernetes/kubectl/issues/1154#issuecomment-1020422589

Thanks for the continued thoughts! I've spoken with a few others about this since.

For some context, client-go is where the logic for loading kubeconfigs and other env vars lives. Most Kubernetes tooling (built in Go) uses client-go in some way.

https://github.com/kubernetes/kubernetes/blob/5426da8f69c1d5fa99814526c1878aeb99b2456e/staging/src/k8s.io/client-go/tools/clientcmd/loader.go#L42

https://github.com/kubernetes/kubernetes/blob/5426da8f69c1d5fa99814526c1878aeb99b2456e/staging/src/k8s.io/client-go/tools/clientcmd/client_config.go#L584

Implementing this change solely in kubectl would introduce inconsistency into the ecosystem. A change like this would need to be added to client-go so that other Kubernetes tooling could pick it up as well. And this is where we've run into issues in the past when trying to implement things like https://github.com/kubernetes/kubectl/issues/1123#issuecomment-1005958599 for holding multiple configs. Unless other tools pull in the client-go changes and ship a new version, and users update (the hard part here), there will be differences in which cluster each tool is operating on.

Due to the impact on the ecosystem, this would need to land as a KEP.

So this isn't a "no", but we haven't found a good way to make these types of changes.