xunleii / terraform-module-k3s

Terraform module to manage a k3s cluster on given machines
https://registry.terraform.io/modules/xunleii/k3s/module
MIT License

Custom k3s cluster name inside of the admin kubeconfig #144

Closed alita1991 closed 12 months ago

alita1991 commented 12 months ago

Hi,

I'm looking for a method to define the cluster name within the k3s admin kubeconfig file, particularly in situations where the kubeconfig is generated through Rancher. This becomes necessary when managing multiple distinct clusters, as the default name isn't sufficient.

At present, the admin kubeconfig contains the user and cluster name set as 'default'. One possible solution might involve substituting the 'default' string within the file with an alternative label. However, this adjustment could inadvertently affect the user value. I'm uncertain whether this could potentially cause any unforeseen issues.
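To avoid a blind string substitution touching the user entry, one option is to decode the kubeconfig and rename only the cluster and context entries. A rough, untested sketch in Terraform (assuming the raw admin kubeconfig YAML is available in a variable named raw_kubeconfig, e.g. fetched from /etc/rancher/k3s/k3s.yaml on the server; the module does not necessarily expose this as an output):

locals {
  parsed = yamldecode(var.raw_kubeconfig)

  # Rename the cluster and context entries; the user entry keeps its
  # original "default" name on purpose.
  renamed = merge(local.parsed, {
    "current-context" = "my-cluster"
    clusters = [for c in local.parsed.clusters : merge(c, { name = "my-cluster" })]
    contexts = [for c in local.parsed.contexts : merge(c, {
      name    = "my-cluster"
      context = merge(c.context, { cluster = "my-cluster" })
    })]
  })

  kubeconfig = yamlencode(local.renamed)
}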

Moreover, when 'generate_ca_certificate = true', the cluster_domain is used. Since it defaults to 'cluster.local', which k3s itself uses as the cluster DNS domain, it is not unique across clusters.
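One possible workaround for this second point is to pass a distinct cluster_domain per cluster when generating the CA. An untested sketch (note that, as the issue mentions, cluster_domain is also the DNS domain k3s uses in-cluster, so changing it has side effects beyond the certificates):

module "k3s" {
  source = "xunleii/k3s/module"

  generate_ca_certificate = true
  cluster_domain          = "staging.cluster.local" # unique per cluster instead of the default "cluster.local"

  # ... other required inputs (servers, agents, etc.) omitted
}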

Thanks

xunleii commented 12 months ago

Hello,

Sorry, I didn't understand the first part:

I'm looking for a method to set the cluster name in the kubeconfig file of k3s admin, particularly in situations where the kubeconfig is generated by Rancher.

How can the kubeconfig be generated by Rancher using this module?


Apart from this point, my view is that modifying the kubeconfig is an "advanced use" (even if it's simple) and must be done outside this module. It's entirely possible for the end user to generate a kubeconfig from the values in the kubernetes output, like the following (not tested):

module "k3s" {
  ...
}

locals {
  kubeconfig = yamlencode({
    apiVersion        = "v1"
    kind              = "Config"
    "current-context" = "CLUSTER_NAME"
    contexts = [{
      context = {
        cluster = "CLUSTER_NAME"
        user    = "CLUSTER_NAME_ADMIN"
      }
      name = "CLUSTER_NAME"
    }]
    clusters = [{
      cluster = {
        "certificate-authority-data" = base64encode(module.k3s.kubernetes.cluster_ca_certificate)
        server                       = module.k3s.kubernetes.api_endpoint
      }
      name = "CLUSTER_NAME"
    }]
    users = [{
      user = {
        "client-certificate-data" = base64encode(module.k3s.kubernetes.client_certificate)
        "client-key-data"         = base64encode(module.k3s.kubernetes.client_key)
      }
      name = "CLUSTER_NAME_ADMIN"
    }]
  })
}

However, how to do this needs to be explained in the documentation.
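For completeness, such a generated kubeconfig could then be written to disk with the hashicorp/local provider, for example (untested sketch, assuming a local.kubeconfig value like the one above):

resource "local_sensitive_file" "kubeconfig" {
  content         = local.kubeconfig
  filename        = "${path.module}/kubeconfig.yaml"
  file_permission = "0600"
}

Using local_sensitive_file rather than local_file keeps the client key out of Terraform's human-readable output.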

alita1991 commented 12 months ago

Hi @xunleii, regarding the k3s admin kubeconfig, I was looking for a way to tell Rancher to generate a local kubeconfig with the correct context (custom cluster name), but this is probably only available with k3d.

For the second problem, your solution could work, thanks for the feedback.