kube-hetzner / terraform-hcloud-kube-hetzner

Optimized and Maintenance-free Kubernetes on Hetzner Cloud in one command!

Outputs for Kubernetes Terraform provider #1165

Closed: liamwh closed this issue 3 months ago

liamwh commented 8 months ago

Description

I would love to be able to configure the kubernetes Terraform provider using outputs from kube-hetzner, something akin to the following for DigitalOcean (DO):

provider "kubernetes" {
  host                   = digitalocean_kubernetes_cluster.veloxide-k8s-cluster.endpoint
  token                  = digitalocean_kubernetes_cluster.veloxide-k8s-cluster.kube_config[0].token
  cluster_ca_certificate = base64decode(digitalocean_kubernetes_cluster.veloxide-k8s-cluster.kube_config[0].cluster_ca_certificate)
}

Which comes from:

resource "digitalocean_kubernetes_cluster" "veloxide-k8s-cluster" {
  name                             = "k8s-1-28-2-do-0-ams3-1703291481360"
  region                           = "ams3"
  auto_upgrade                     = true
  version                          = data.digitalocean_kubernetes_versions.k8s.latest_version

  node_pool {
    name       = "veloxidenodepool"
    size       = "s-2vcpu-4gb"
    auto_scale = true
    min_nodes  = 1
    max_nodes  = 5
  }
}
mysticaltech commented 8 months ago

@liamwh That would be really nice indeed. @kube-hetzner/core FYI.

valkenburg-prevue-ch commented 8 months ago

You can already do that:


provider "kubernetes" {
  host                   = module.kube_hetzner.kubeconfig_data.host
  client_certificate     = module.kube_hetzner.kubeconfig_data.client_certificate
  client_key             = module.kube_hetzner.kubeconfig_data.client_key
  cluster_ca_certificate = module.kube_hetzner.kubeconfig_data.cluster_ca_certificate
  ignore_annotations = [
    ".*cattle\\.io.*",
  ]
  ignore_labels = [
    ".*cattle\\.io.*",
  ]
}
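
With the provider wired up like this, Kubernetes resources can be declared right next to the module. A minimal usage sketch (the namespace is purely illustrative):

resource "kubernetes_namespace" "platform" {
  metadata {
    name = "platform"
  }
}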

And I did that for a while, before concluding that it was an unstable situation. Occasionally, the provider would want to initialize before the cluster was built, so I would be stuck with a Terraform state that refused to build the cluster because not all providers were fully configured.

I ended up separating my cluster configuration into three "independent" Terraform folders, each with its own state, and I run them in sequence:

  1. (cluster) kube.tf
  2. (core infra on the cluster) with the kubeconfig.yaml from step 1, set up Longhorn, HashiCorp Vault, a service mesh, etc. (a minimal wiring sketch follows this list)
  3. (applications) with the configured Terraform Vault provider (only possible after finalizing step 2), set up all applications that need Vault, etc.
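
A minimal sketch of how stage 2 can pick up the kubeconfig written by stage 1 (the relative path and file name are assumptions; point them at wherever stage 1 writes the file):

# Stage 2 root module: read the kubeconfig file produced by stage 1,
# instead of referencing module outputs held in another state.
provider "kubernetes" {
  config_path = "../01-cluster/kubeconfig.yaml" # hypothetical path
}

provider "helm" {
  kubernetes {
    config_path = "../01-cluster/kubeconfig.yaml" # hypothetical path
  }
}

Because the providers read a file at plan time rather than outputs of a resource in the same state, they no longer race the cluster build.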

So you can do it, but I concluded it was not the best solution. Your mileage may vary...

liamwh commented 8 months ago

Amazing, will give this a go and report back, thank you very much!

liamwh commented 7 months ago

I am getting this error often; any idea what I can do about it?

module.kube-hetzner.data.remote_file.kubeconfig: Refreshing...
module.kube-hetzner.null_resource.kustomization: Refreshing state... [id=1680158679973432701]
module.kube-hetzner.null_resource.configure_autoscaler[0]: Refreshing state... [id=4638238659851963355]
module.kube-hetzner.null_resource.configure_floating_ip["3-0-egress"]: Refreshing state... [id=4273720719089111240]
module.kube-hetzner.data.remote_file.kubeconfig: Refresh complete after 4s [id=88.99.36.56:22:/etc/rancher/k3s/k3s.yaml]
module.kube-hetzner.local_sensitive_file.kubeconfig[0]: Refreshing state... [id=b6cb6f78b4f1a23598db3e2f8de60b983224b5c3]
╷
│ Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
│ 
│ 
╵
Operation failed: failed running terraform plan (exit 1)
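
This error usually means the kubernetes provider gets initialized during refresh before the cluster (and hence kubeconfig_data) exists, i.e. the ordering problem described above. A commonly used workaround, offered as a sketch rather than a confirmed fix for this setup, is a two-pass bootstrap:

# First pass: build only the cluster, so its outputs exist.
terraform apply -target=module.kube-hetzner
# Second pass: everything that depends on the kubernetes provider.
terraform apply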
mysticaltech commented 7 months ago

@liamwh Did you sort this out? Either way, could you share the code you use to do that, please (without sensitive values, of course)? It would be greatly appreciated.

andi0b commented 7 months ago

It would be great to have access to many more things via outputs, including the node pools. While experimenting to work around my Longhorn volumes issue (#1195), I added this to the outputs, for example, so I can add volumes to nodes in the custom way I would like:

# added at the end of /output.tf

output "control_planes" {
  value = module.control_planes
  description = "All control plane items"
}

output "agents" {
  value = module.agents
  description = "All agent items"
}
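
As a hedged illustration of what such outputs would enable (the module label kube_hetzner and the id attribute on each agent item are assumptions, not confirmed interfaces), extra volumes could then be attached per node from the root module:

# Sketch only: attach an extra volume to every agent node.
# Assumes `agents` is a map of objects exposing the Hetzner server id as `id`.
resource "hcloud_volume" "longhorn_extra" {
  for_each  = module.kube_hetzner.agents
  name      = "longhorn-${each.key}"
  size      = 50            # GB, illustrative
  server_id = each.value.id
  automount = false         # node-side disk setup is out of scope here
  format    = "ext4"
}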

And yes, I understand that I can break a lot of stuff with that freedom ;)

mysticaltech commented 7 months ago

@andi0b Looking good, PR most welcome!

andi0b commented 7 months ago

> @andi0b Looking good, PR most welcome!

I have to look into it again. This only exposes the values from the host tf-submodule; I think it should also expose more information, including things from the main TF module, perhaps merged with the node pool input variables.

I've seen that there is a bigger refactor planned for the next version (more submodules); it might be better to wait for that to be completed, so as not to implement something now that will soon lead to a breaking change. Do you have an estimate of when this refactoring will be finished?

mysticaltech commented 7 months ago

@andi0b You are absolutely right; best to wait for v3. @aleksasiriski is leading the push to v3. We do not have a time estimate yet, but it will come soon enough. We will keep this feature request in mind and slip it in if we can.