fluxcd / terraform-provider-flux

Terraform and OpenTofu provider for bootstrapping Flux
https://registry.terraform.io/providers/fluxcd/flux/latest
Apache License 2.0

[Enhancement]: use kubeconfig in bootstrap_git resource #717

Open BobyMCbobs opened 2 months ago

BobyMCbobs commented 2 months ago

Description

As a platform builder managing multiple clusters,
I need to create, manage and destroy multiple dynamic clusters without instantiating multiple flux providers, while using a kubeconfig provided from a data or resource source.

Given the complexities of using Terraform providers inside modules, allowing a kubeconfig to be supplied at bootstrap time would make the provider much easier to use.

TL;DR

Add kube_config and kube_config_path fields to bootstrap_git, to be used when they are not set in the provider config.

Affected Resource(s) and/or Data Source(s)

bootstrap_git

Potential Terraform Configuration

resource "flux_bootstrap_git" "this" {
  path             = "clusters/${var.cluster}"
  components_extra = ["image-reflector-controller", "image-automation-controller"]
  kube_config      = some_provider.kubernetes.kubeconfig # OR
  kube_config_path = "./some/path/here"
}

References

No response

Would you like to implement a fix?

None

JordanP commented 2 months ago

What's wrong with multiple Flux providers? Is it because of the lack of "for_each" on a list of providers?

BobyMCbobs commented 2 months ago

> What's wrong with multiple Flux providers? Is it because of the lack of "for_each" on a list of providers?

@JordanP, provider blocks can only be declared outside of modules. Having a separate provider per cluster, where the kubeconfig (or its values) is fed through module outputs into a top-level flux provider for that cluster, is clunky.

Like this (example):

module "cluster-somek8s" {
  source = "./modules/a-cluster-config"
}

provider "flux" {
  alias = "somek8s"
  kubernetes = {
    host                   = module.cluster-somek8s.host
    client_certificate     = module.cluster-somek8s.cert
    client_key             = module.cluster-somek8s.key
    cluster_ca_certificate = module.cluster-somek8s.ca
  }
}

module "flux-somek8s" {
  source = "./modules/a-flux-deploy"
  providers = {
    flux = flux.somek8s
  }

  depends_on = [module.cluster-somek8s] # NOTE: afaik it is hard to make this module depend on the cluster being up
}

I'd like to have a module for a cluster where defining the cluster also includes Flux, without any top-level config needing to be added per cluster. This limits the number of steps to get components up.

Please correct me if you think there's a better way to use the tooling.

If this were possible, one could do something like this (example):

provider "flux" {}

variable "github-token" {}

module "cluster" {
  for_each = toset(["sfo", "syd", "fra"])
  source = "./modules/a-cluster-config-with-flux"

  region = each.key
  github-token = var.github-token

  providers = {
    flux = flux
  }
}

Let me know your thoughts.

swade1987 commented 2 months ago

@BobyMCbobs I've previously solved this issue using the following approach:

  1. Create a Terraform module called k8s-bootstrapped.
  2. This module does two things:
     a. Constructs the Kubernetes cluster (using its own k8s module).
     b. Uses the output from the k8s module to feed into the Flux bootstrap process.

This approach is similar to the examples in this repository.

To implement this solution, you would use the k8s-bootstrapped module as the main calling module in your Terraform configuration.
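A minimal sketch of that wrapper pattern, assuming hypothetical module paths and output names (the git block mirrors the example later in this thread):

```terraform
# Root configuration acting as the "k8s-bootstrapped" calling module.
# Module sources and output names here are illustrative assumptions.

module "k8s" {
  source = "./modules/k8s" # constructs the Kubernetes cluster
}

# The flux provider must still live at the top level today; it is
# configured from the k8s module's outputs.
provider "flux" {
  kubernetes = {
    host                   = module.k8s.host
    client_certificate     = module.k8s.client_certificate
    client_key             = module.k8s.client_key
    cluster_ca_certificate = module.k8s.cluster_ca_certificate
  }
  git = {
    url = "ssh://git@github.com/${var.github_org}/${var.github_repository}.git"
    ssh = {
      username    = "git"
      private_key = tls_private_key.flux.private_key_pem
    }
  }
}

resource "flux_bootstrap_git" "this" {
  path = "clusters/${var.cluster}"
}
```

The key point is that the cluster module and the Flux bootstrap are composed in one root configuration, so the provider can consume the cluster outputs directly.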

swade1987 commented 1 month ago

@BobyMCbobs how did you get on with my proposal above?

BobyMCbobs commented 1 month ago

@swade1987, thank you for your message. Apologies for the late response.

From what I understand, and please correct me if I'm wrong, in order to use such a module a new Flux provider still needs to be instantiated each time a new cluster is created, and the provider then needs to be passed through, like this:

module "cluster-1" {
  ...
}
provider "flux" {
  alias = "cluster-1"
  kubernetes = {
    host                   = module.cluster-1.kubeconfig_host
    client_certificate     = base64decode(module.cluster-1.kubeconfig_client_certificate)
    client_key             = base64decode(module.cluster-1.kubeconfig_client_key)
    cluster_ca_certificate = base64decode(module.cluster-1.kubeconfig_ca_certificate)
  }
  git = {
    url = "ssh://git@github.com/${var.github_org}/${var.github_repository}.git"
    ssh = {
      username    = "git"
      private_key = tls_private_key.flux.private_key_pem
    }
  }
}
module "flux-bootstrap" {
  providers = {
    flux = flux.cluster-1
  }
}

What I'm really after is

module "cluster-1" {
  ...
  providers = {
    flux = flux
  }
}
provider "flux" {}

where the Flux provider's kubernetes values can instead be specified on the flux_bootstrap_git resource.
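If the proposed fields existed, a per-cluster module could then look roughly like this. The kube_config field follows the proposal at the top of this issue and does not exist in the provider today; module paths and output names are hypothetical:

```terraform
# modules/a-cluster-config-with-flux/main.tf (hypothetical)

module "k8s" {
  source = "./modules/k8s" # constructs the cluster for this region
  region = var.region
}

# Proposed: the resource accepts the kubeconfig directly, so no
# per-cluster provider block is needed at the top level.
resource "flux_bootstrap_git" "this" {
  path        = "clusters/${var.region}"
  kube_config = module.k8s.kubeconfig # proposed field, not in the provider today
}
```

This is what would allow the for_each-over-regions example earlier in the thread, since only the single unconfigured flux provider would need to be passed in.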

swade1987 commented 1 month ago

I wanted to provide a quick update on my availability as a maintainer. I pride myself on transparency and realise my standards have slipped in the last few months.

Since starting a full-time role in September, I have significantly less time to dedicate to this project. I'm now working on issues and pull requests out of hours on a best-effort basis.

Please bear with me if I take longer than usual to respond or review. I remain committed to the project and appreciate your patience and understanding during this transition.

Thank you for your continued support and contributions. I want you to know that I'm committed to improving my communication.

tim-harmon commented 2 weeks ago

Has there been any movement on this? As a platform engineer, I too would like to move the cluster config out of the provider block, since my provider does not know what my cluster configuration is (the config lives in a tfvars file). This is a blocker for my implementation of Flux CD.

JordanP commented 2 weeks ago

@tim-harmon I am pretty sure that if there is any movement on this, someone will report it in this GitHub issue. In the meantime, if you or your company need this feature, perhaps you could consider contributing to this open source project via a pull request? It could be the fastest and surest way forward, and the community will thank you. :pray:

swade1987 commented 2 weeks ago

@JordanP firstly, thanks for your comment here. I appreciate it.

@tim-harmon, the idea behind the provider is that it has a 1:1 mapping with the cluster it is managing. Therefore, the recommended approach is the one mentioned above.