Closed: fmunteanu closed this issue 3 weeks ago.
This is a duplicate of https://github.com/k3s-io/k3s/issues/4610
Clusters don't know their own name, there is no field within the cluster itself to store that data. If you want the kubeconfig to reflect a different name, your best option is to make that edit when collecting the admin kubeconfig off the server nodes, or (preferably) when generating unique kubeconfigs for your users with distinct users and RBAC.
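A minimal sketch of that rename step, assuming a GNU/Linux host. The heredoc below stands in for a kubeconfig copied off a server node (e.g. via `scp` from `/etc/rancher/k3s/k3s.yaml`), and `development` is a placeholder cluster name:

```shell
# Stand-in for a kubeconfig fetched from a server node; k3s writes the
# literal name "default" for the cluster, context, and user entries.
cat > development.yaml <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
users:
- name: default
EOF

# Rewrite every standalone "default" to the desired cluster name so the
# file can be merged into ~/.kube/config without colliding with another
# cluster's entries.
sed -i 's/\bdefault\b/development/g' development.yaml

grep -c development development.yaml   # prints 6
```

In a real workflow you would also point the `server:` field at the node's reachable address instead of `127.0.0.1` before merging the file into your local kubeconfig.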
Thank you @brandond for providing the clarifications. I was simply trying to understand how the `default` value is assigned when the cluster is deployed, and whether there is a way to define it with a switch during deployment. I still believe this could be a useful feature.
We've discussed it in the past and decided against it. The admin kubeconfig really shouldn't be used for much other than locally interacting with the server. If you're managing enough clusters that you want to give their contexts unique names in your kubeconfig, you should be using more advanced tooling for that, not the default admin kubeconfig and admin RBAC.
Thank you, it makes sense now.
Environmental Info:
K3s Version: v1.29.4+k3s1 (94e29e2e)
Node(s) CPU architecture, OS, and Version:
Cluster Configuration: 3 servers, 5 agents
Describe the bug: My goal is to deploy 2 K3s clusters and set a distinct name for each cluster. The current configuration on both clusters shows the `default` name. Ideally, I want the clusters named `development` and `production`. K3s generates the `/etc/rancher/k3s/k3s.yaml` file, where the cluster name is present. I looked at the documentation, and there is no setting allowing me to define the cluster name on initial cluster deployment. How is the `default` name defined in the `k3s.yaml` file?

I understand the end user will configure their `~/.kube/config` file the way they want, but I would like some sort of `--cluster-name` switch allowing us to define the cluster name at deployment time. Thank you.
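For reference, the admin kubeconfig k3s writes to `/etc/rancher/k3s/k3s.yaml` uses the literal string `default` for the cluster, context, and user names rather than anything derived from the deployment. A trimmed sketch (certificate fields elided):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64 CA data>
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
users:
- name: default
  user:
    client-certificate-data: <base64 cert>
    client-key-data: <base64 key>
```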