ayushin opened this issue 1 year ago
hey there @ayushin,
I'm thinking of taking a Heroku-like approach to those attached resources by handling them as addons.
Instead of using the Terraform module like this:
module "app_01" {
source = "git@github.com:djangoflow/terraform-kubernetes-django?ref=main"
...
postgres_enabled = true
postgres_storage_size = 10Gi
postgres_resources_requests_memory = 256Mi
postgres_resources_requests_cpu = 250m
...
}
we would use it like this:
module "app_01" {
source = "git@github.com:djangoflow/terraform-kubernetes-django?ref=main"
//...
addons = {
db = {
addon_type = "kubernetes_postgres"
storage = "10Gi"
memory = "256Mi"
cpu = "250m"
memory_limit = null
cpu_limit = null
}
}
//...
}
which would allow us to generalize application provisioning by wrapping most of the data in a YAML file like so:
app:
  addons:
    db:
      type: "kubernetes.postgres"
      storage: "10Gi"
      cpu: "250m"
      memory: "256Mi"
    bucket:
      type: "gcp.bucket"
      name: files.demo.djangoflow.com
      public_storage: true # uses the dns module under the hood
    ingress:
      type: "kubernetes.ingress"
      ruleset:
        # uses the dns module under the hood
        "api.demo.djangoflow.com":
          "/": "api"
# same idea for {"kubernetes.redis", "gcp.database", "aws.s3"}
and using that with Terraform would be:
module "app_01" {
source = "git@github.com:djangoflow/terraform-kubernetes-django?ref=main"
//...
addons = yamldecode("${path.cwd}/app.yaml")
//...
}
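A minimal sketch of how the wrapping module could dispatch that map internally; the variable shape, locals and submodule input names here are assumptions, not the module's current interface:

variable "addons" {
  description = "Map of addon name => addon configuration"
  type        = any
  default     = {}
}

locals {
  # keep only the addons this block is responsible for
  # ("addon_type" in the earlier HCL example, "type" in the YAML)
  postgres_addons = {
    for name, addon in var.addons : name => addon
    if addon.type == "kubernetes.postgres"
  }
}

module "postgres" {
  for_each = local.postgres_addons
  source   = "./modules/postgres"

  # hypothetical submodule inputs mapped from the addon entry
  storage_size    = each.value.storage
  requests_memory = each.value.memory
  requests_cpu    = each.value.cpu
  memory_limit    = try(each.value.memory_limit, null)
  cpu_limit       = try(each.value.cpu_limit, null)
}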
To achieve that I'd just move the project files into the appropriate modules. I'd push back against the idea of having all of those modules as separate repositories, as that would create unnecessary chores.
We can always extract modules later if we ever need to reuse them, or we could just allow other projects to pull the modules directly from this repo using:
module "kubernetes_database" "db" {
source = "git@github.com:djangoflow/terraform-kubernetes-django.git//modules/postgres?ref=v1.2.3"
}
or even publish them as standalone modules through a hosted registry.
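For illustration, a registry source would then look roughly like this (host, namespace and version are made up):

module "db" {
  source  = "registry.example.com/djangoflow/postgres/kubernetes"
  version = "~> 1.0"
}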
How does that sound?
I wonder how this approach would work in terms of required providers.
I can deploy Django on Kubernetes and then use either AWS S3 or GCP GCS, and provision DNS records via either Cloudflare or Route 53.
Same with CloudSQL vs. RDS.
The addon type is cloud-specific, e.g. aws.bucket or gcp.bucket, and we pass the choice of DNS provider as a property.
Then we have route53 and cloudflare modules to create the records, but they are invoked conditionally (via count), so we can safely declare an empty cloudflare provider configuration that will never actually be used:
provider "cloudflare" {
api_key = ""
email = ""
}
module "app_01" {
source = "git@github.com:djangoflow/terraform-kubernetes-django?ref=main"
//...
addons = {
files = {
addon_type = "aws.s3"
name = "files.djangoflow.example.com"
dns = "cloudflare"
}
}
//...
}
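To tie this back to the required-providers question: I'd expect the configuration to declare every provider it can potentially touch, even if some of them end up with empty configurations, roughly like this (versions omitted, and hashicorp/kubernetes assumed as the base provider):

terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
    aws = {
      source = "hashicorp/aws"
    }
    cloudflare = {
      source = "cloudflare/cloudflare"
    }
  }
}

The per-module layout for the example above would then look roughly like this: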
# ./modules/terraform-cloudflare-records
// tf code

# ./modules/terraform-route53-records
// tf code

# ./modules/terraform-aws-s3
resource "aws_s3_bucket" "main" {
  //...
}

module "route53_dns" {
  count  = var.dns_type == "route53" ? 1 : 0
  source = "../terraform-route53-records"
  //...
}

module "cloudflare_dns" {
  count  = var.dns_type == "cloudflare" ? 1 : 0
  source = "../terraform-cloudflare-records"
  //...
}
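The dns_type switch above would presumably be driven by a plain variable on the S3 module; a sketch (the variable name is taken from the snippet above, the validation block is just a suggestion):

variable "dns_type" {
  description = "Which DNS module should create the records for this bucket"
  type        = string
  default     = "route53"

  validation {
    condition     = contains(["route53", "cloudflare"], var.dns_type)
    error_message = "dns_type must be either \"route53\" or \"cloudflare\"."
  }
}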
We need to refactor the functionality into different modules and split them into separate repositories, so that we can have fine-grained control of ingress, databases, storage and DNS at the project level.
This repository can then be one of the standard examples of how to use those modules in concert; we should also add examples of how to customise a deployment in finer detail.
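For instance (module paths and input names below are only a sketch, not an existing interface), a project that needs finer control could skip the app wrapper and compose the split modules directly:

module "db" {
  source = "git@github.com:djangoflow/terraform-kubernetes-django.git//modules/postgres?ref=v1.2.3"

  # hypothetical input
  storage_size = "20Gi"
}

module "ingress" {
  # hypothetical module path
  source = "git@github.com:djangoflow/terraform-kubernetes-django.git//modules/ingress?ref=v1.2.3"

  ruleset = {
    "api.demo.djangoflow.com" = {
      "/" = "api"
    }
  }
}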