weaveworks / weave-gitops-enterprise

This repo provides the enterprise level features for the weave-gitops product, including CAPI cluster creation and team workspaces.
https://docs.gitops.weave.works/
Apache License 2.0

[SPIKE] Investigate using TF Controller with WGE - Part 2 #521

Closed. Himangini closed this issue 2 years ago.

Himangini commented 2 years ago

Description

As part of the Terraform with WGE initiative, we want to understand what a sample TF template could look like. The general idea is that a user picks a template and provides values for its placeholders. The TF controller then merges the values into the template, producing valid terraform file(s) to commit to git through WGE.

So this spike involves writing up a template that a user could use. No functionality is involved, just a template sketch, i.e., write up a yaml file. The template will have TF resources with placeholders, alongside a file that contains those values.

Note: Part 1 might need to be finished before this ticket can be picked up.
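
To make the shape concrete, the values side of such a pair could be as simple as a flat YAML file whose keys line up with the template's placeholders (all names here are hypothetical):

cluster_identifier: my-aurora-cluster
database_name: mydb
region: us-west-2

The comments below sketch out the Terraform template side and a matching Terraform custom resource.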

Outcome

Timebox

1-2 days

aclevername commented 2 years ago

might be worth looking at https://carvel.dev/ytt/

Skarlso commented 2 years ago

might be worth looking at https://carvel.dev/ytt/

In what context? Sorry, I thought this was for terraform stuff, how does yaml come into it? :D

aclevername commented 2 years ago

might be worth looking at https://carvel.dev/ytt/

In what context? Sorry, I thought this was for terraform stuff, how does yaml come into it? :D

:facepalm:
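
For illustration, a minimal ytt sketch of how such yaml could be templated (file names and keys here are hypothetical):

# config.yml
#@ load("@ytt:data", "data")
cluster_identifier: #@ data.values.cluster_identifier
region: #@ data.values.region

# values.yml
#@data/values
---
cluster_identifier: super-awesome-aurora
region: us-west-2

Running ytt -f config.yml -f values.yml would merge the data values into the template.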

Skarlso commented 2 years ago

Terraform template:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

variable "cluster_identifier" {}
variable "database_name" {}
variable "master_username" {}
variable "master_password" {}
variable "backup_retention_period" {}
variable "region" {}
variable "s3_import_bucket" {}

provider "aws" {
  region = var.region
}

locals {
  # Aurora MySQL: the engine_version and port below are MySQL-flavoured,
  # matching the "mysql" source engine in the s3_import block further down.
  engine         = "aurora-mysql"
  engine_version = "5.7.mysql_aurora.2.07.5"
  port           = 3306
}

resource "aws_s3_bucket" "s3" {
  bucket = var.s3_import_bucket
  acl    = "private"
}

resource "aws_s3_bucket_object" "init_sql" {
  bucket = aws_s3_bucket.s3.id
  key    = "init.sql.tar.gz"
  source = "${path.module}/init.sql.tar.gz00"
  etag   = filemd5("${path.module}/init.sql.tar.gz00")
}

resource "aws_rds_cluster" "mycluster" {
  cluster_identifier      = var.cluster_identifier
  engine                  = local.engine
  engine_version          = local.engine_version
  port                    = local.port
  availability_zones      = ["us-west-2a", "us-west-2b", "us-west-2c"] # hard-coded; keep in sync with var.region
  database_name           = var.database_name
  master_username         = var.master_username
  master_password         = var.master_password
  backup_retention_period = var.backup_retention_period
  skip_final_snapshot     = true
  apply_immediately       = true

  s3_import {
    source_engine         = "mysql"
    source_engine_version = "5.7"
    bucket_name           = aws_s3_bucket.s3.id
    ingestion_role        = aws_iam_role.s3_rds.arn
  }
}

resource "aws_rds_cluster_instance" "cluster_instance" {
  count              = 1
  identifier         = "${aws_rds_cluster.mycluster.id}-${count.index}"
  cluster_identifier = aws_rds_cluster.mycluster.id
  instance_class     = "db.t3.small"
  engine             = aws_rds_cluster.mycluster.engine
  engine_version     = aws_rds_cluster.mycluster.engine_version
}

resource "aws_iam_role" "s3_rds" {
  name_prefix = "rds-s3-integration-role-"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "rds.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "s3_rds" {
  name_prefix = "rds-s3-integration-policy-"
  role        = aws_iam_role.s3_rds.name

  policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
      {
          "Effect": "Allow",
          "Action": "s3:ListAllMyBuckets",
          "Resource": "*"
      },
      {
          "Effect": "Allow",
          "Action": [
              "s3:ListBucket",
              "s3:GetBucketACL",
              "s3:GetBucketLocation"
          ],
          "Resource": "arn:aws:s3:::${aws_s3_bucket.s3.bucket}"
      },
      {
          "Effect": "Allow",
          "Action": [
              "s3:GetObject",
              "s3:PutObject",
              "s3:ListMultipartUploadParts",
              "s3:AbortMultipartUpload"
          ],
          "Resource": "arn:aws:s3:::${aws_s3_bucket.s3.bucket}/*"
      }
    ]
}
EOF
}

And the Terraform object:

---
apiVersion: infra.contrib.fluxcd.io/v1alpha1
kind: Terraform
metadata:
  name: tf-controller-aurora
  namespace: flux-system
spec:
  interval: 1h
  path: ./_artifacts/aurora
  approvePlan: "auto"
  vars:
  - name: cluster_identifier
    value: "super-awesome-aurora"
  - name: database_name
    value: "super-awesome-db-name"
  - name: backup_retention_period
    value: 5
  - name: region
    value: "us-west-2"
  - name: s3_import_bucket
    value: "my-super-amazing-aurora-instance-import-bucket-777"
  varsFrom:
  - kind: Secret
    name: aurora-vars
  sourceRef:
    kind: GitRepository
    name: flux-system
    namespace: flux-system
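
The master_username and master_password variables are deliberately absent from spec.vars; they are presumably expected to come in through the referenced Secret. A minimal sketch of what that aurora-vars Secret could look like (the keys must match the Terraform variable names; the values here are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: aurora-vars
  namespace: flux-system
type: Opaque
stringData:
  master_username: admin
  master_password: change-me
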
Skarlso commented 2 years ago

The above basically creates an Aurora cluster that imports its initial data from an S3 bucket containing an init SQL archive.

Will link the Terraform resource in a sec too.

Skarlso commented 2 years ago

Done.

aclevername commented 2 years ago

@Skarlso so is the idea that the user would provide the values via flags, e.g. gitops add terraform --name foo --template aurora --var cluster_identifier=baz,database_name=mydb, or the below via a GUI

  - name: cluster_identifier
    value: "super-awesome-aurora"
  - name: database_name
    value: "super-awesome-db-name"
  - name: backup_retention_period
    value: 5
  - name: region
    value: "us-west-2"
  - name: s3_import_bucket
    value: "my-super-amazing-aurora-instance-import-bucket-777"

and we'd generate the terraform resource yaml, and point it at the static tf template file?

Skarlso commented 2 years ago

Whichever. The values are overridable either via the GUI or via the Secret. The Secret takes precedence.
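
For example, if the same variable is set in both places, the Secret value is the one applied (per the precedence described above; names and values here are illustrative):

# In the Terraform object:
  vars:
  - name: database_name
    value: "name-from-template"   # overridden
  varsFrom:
  - kind: Secret
    name: aurora-vars             # contains database_name: "name-from-secret", which wins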