kbst / terraform-kubestack

Kubestack is a framework for Kubernetes platform engineering teams to define the entire cloud native stack in one Terraform code base and continuously evolve the platform safely through GitOps.
https://www.kubestack.com
Apache License 2.0

Extending tf definitions of kubestack clusters with custom requirements #136

Open pijemcolu opened 4 years ago

pijemcolu commented 4 years ago

Currently, none of the Kubestack modules have any outputs. This can make it difficult to extend the clusters with custom Terraform code to provision resources around the cluster without touching the module internals.

There are really a couple of questions here:

  1. How do we envision extending the cluster with custom terraform declarations?
  2. How do we upgrade the kubestack version?
  3. Should we start implementing outputs similar to the output proposed in https://github.com/kbst/terraform-kubestack/pull/133 ?

I envision upgrades being a rather manual process, maybe a git merge from upstream, keeping kbst/terraform-kubestack as the upstream remote. At the same time, I'd envision a single extensions.tf that uses the proposed outputs to extend the clusters with custom Terraform declarations.
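
As a rough sketch of what I have in mind for extensions.tf (the module name eks_zero and the kubeconfig output are hypothetical placeholders here, since the cluster modules don't expose any outputs yet):

# extensions.tf -- hypothetical sketch, consumes an output the cluster
# module would need to expose; module and output names are illustrative
resource "local_file" "kubeconfig" {
  filename = "${path.module}/kubeconfig"
  content  = module.eks_zero.kubeconfig
}

That would let custom resources live next to, but outside of, the framework-managed cluster modules.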

pst commented 4 years ago
1. How do we envision extending the cluster with custom terraform declarations?

I think it depends on what type of declarations. At a high level, the goal would be to cover the most common configurations with the cluster modules themselves. Certain configurations may still require forking and replacing a cluster module. As the user community and use-cases grow, there may also be variants of the cluster modules for different use-cases, similar to how I implemented the local dev env with the cluster-local variant of each cluster module.

2. How do we upgrade the kubestack version?

Currently, upgrades require bumping the version in clusters.tf and Dockerfile*. I document specific requirements in the upgrade notes of each release. https://github.com/kbst/terraform-kubestack/releases
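
For illustration, a hedged sketch of where those versions typically live (module name, paths and versions below are purely illustrative):

# clusters.tf -- the framework version is pinned in the module source ref
module "eks_zero" {
  source = "github.com/kbst/terraform-kubestack//aws/cluster?ref=v0.14.1-beta.0"
  # ... cluster configuration ...
}

# Dockerfile* -- the automation container is pinned to the matching release, e.g.
# FROM kubestack/framework:v0.14.1-beta.0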

The new CLI, kbst, which provides the local development environment, also has a feature to scaffold new repositories. Similarly, I'd like it to assist in upgrading existing repositories. It already has install, remove and update functionality for manifests from the catalog.

$ kbst -h
Kubestack Framework CLI

Usage:
  kbst [command]

Available Commands:
  help        Help about any command
  local       Start a localhost development environment
  manifest    Add, update and remove services from the catalog
  repository  Create and change Kubestack repositories

Flags:
  -h, --help          help for kbst
  -p, --path string   path to the working directory (default ".")

Use "kbst [command] --help" for more information about a command.
3. Should we start implementing outputs similar to the output proposed in #133 ?

I've been avoiding getting into this so far because I think it should be well thought through, and I didn't feel comfortable making that decision alone. I'd appreciate a constructive discussion on how to drive this forward in a way that works for all three supported providers and has a decent chance of not needing a breaking change in the very next release.
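
Just to make the discussion concrete, one hypothetical shape such outputs could take (names and the referenced locals are purely illustrative, not the proposal from #133):

# outputs.tf in each cluster module -- hypothetical, for discussion only
output "current_cluster_name" {
  description = "Name of the cluster for the currently selected workspace"
  value       = local.current_cluster_name  # assumed internal local
}

output "kubeconfig" {
  description = "Kubeconfig to connect to the provisioned cluster"
  value       = local.kubeconfig  # assumed internal local
  sensitive   = true
}

Whatever the names end up being, they would have to map cleanly onto AKS, EKS and GKE alike.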

I envision upgrades being a rather manual process, maybe a git merge from upstream, keeping kbst/terraform-kubestack as the upstream remote. At the same time, I'd envision a single extensions.tf that uses the proposed outputs to extend the clusters with custom Terraform declarations.

The user repositories are scaffolded from the starters built for each release. The starters are built during the release from /quickstart/src, so a git merge would not work. That's why I was careful to limit update requirements to changing the versions in clusters.tf and in Dockerfile*, and I hope to provide an even easier UX with the CLI.

jeacott1 commented 2 years ago

Has this advice changed at all re Kubestack upgrades? It took me a while to find this issue, and I couldn't find any specific advice in the published guide (but perhaps it's there somewhere?). It also seems like service module versions might need updating, i.e.

  source  = "kbst.xyz/catalog/argo-cd/kustomization"
  version = "2.0.5-kbst.0"

are these always backward compatible?

Re "extending the cluster with custom terraform declarations": I'd also love to know what is envisioned for adding basic resources, like a shared disk for example. It would be great if adding a resource "azurerm_storage_share" somewhere could also take advantage of the built-in ops/apps configuration mechanism without recreating it all.
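
For example, something along these lines (purely a sketch; the storage account reference is a placeholder and using the ops/apps Terraform workspaces to vary the resource per environment is an assumption on my part):

# illustrative only -- assumes a storage account defined elsewhere and the
# ops/apps workspaces Kubestack uses to distinguish environments
resource "azurerm_storage_share" "example" {
  name                 = "shared-${terraform.workspace}"
  storage_account_name = azurerm_storage_account.example.name
  quota                = 50
}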

pst commented 2 years ago

The cluster service module versions define which upstream version of the service you get. So while, yes, you want to update them, they are not tied to the framework module's release schedule at all.

A bit more info here: https://www.kubestack.com/framework/documentation/cluster-service-modules#module-attributes
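
To illustrate the version scheme with a hedged sketch (other required attributes, e.g. providers and configuration, are omitted for brevity): the prefix tracks the upstream release, while the -kbst.N suffix is the Kubestack catalog build of the manifests for that release.

module "eks_zero_argo_cd" {
  source  = "kbst.xyz/catalog/argo-cd/kustomization"
  version = "2.0.5-kbst.0"  # upstream Argo CD 2.0.5, Kubestack catalog build 0
}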

If you have suggestions for what else you'd like to see in the docs, I'd be happy to hear your thoughts.