The core function of this entire repository is to describe an Azavea-standard Kubernetes deployment. What are the hardware, system, and application requirements that meet our minimum needs? What kind of interface for deploying this (potentially complex) infrastructure should we provide? How should users of this infrastructure interact with it as they deploy their own custom applications? These are some of the questions that we will need to answer in due time as we learn more about this space.
This PR is an initial answer to how we should proceed, suggesting one potential method for deploying a Kubernetes cluster to AWS. The approach iterates on a Terraform-based deployment that has been used for other projects at Azavea, hopefully refining those previous efforts toward a standard cluster architecture that application-specific repositories can target when placing resources onto company-wide clusters. The shape of these standard clusters is emerging, but not yet fixed. See the documentation in this repo for more details.
The Terraform code in this PR has been segmented into several stages: (1) configuring the cluster hardware and API access, including OIDC for IRSA and some RBAC setup; (2) setting up basic cluster services, in this case, Karpenter, and possibly an ingress controller; and (3) provisioning system-wide applications, beginning with Franklin.
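As a rough sketch, the three stages might map onto a directory layout like the one below. Only the `0-hardware` name appears in this PR; the other directory names (and the `deployment/terraform` root) are illustrative assumptions, not the repo's actual paths:

```shell
# Hypothetical stage layout for the segmented Terraform code.
mkdir -p deployment/terraform/0-hardware       # cluster hardware, API access, OIDC for IRSA, RBAC
mkdir -p deployment/terraform/1-services       # Karpenter, possibly an ingress controller
mkdir -p deployment/terraform/2-applications   # system-wide applications, beginning with Franklin
ls deployment/terraform
```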
This basic setup is delivered through the use of an updated STRTA infrastructure. The standard approach to deployment will consist of executing `cibuild` followed by `cipublish`. The latter script will be aware of both the AWS region that we are targeting and the target environment. (The reference deployment of this system currently lives on `us-west-2`, with only a `staging` environment.) I've also improved the script infrastructure for iterating on these deployments, offering a `console` script that facilitates interaction with the `infra` script during development.
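Concretely, the standard flow might look like the following. The `cibuild`/`cipublish`/`console`/`infra` script names come from this PR, but the environment-variable names and script paths below are assumptions:

```shell
# Region and environment that cipublish is described as being aware of;
# the exact variable names here are assumptions, not confirmed by the repo.
export AWS_REGION="us-west-2"
export ENVIRONMENT="staging"

# Standard deployment sequence (script paths assumed):
#   ./scripts/cibuild      # build the deployment artifacts
#   ./scripts/cipublish    # deploy, targeting $AWS_REGION / $ENVIRONMENT
#
# During development, the console script wraps interaction with infra:
#   ./scripts/console infra

echo "Would deploy the ${ENVIRONMENT} environment in ${AWS_REGION}"
```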
The basic structures suggested by this PR's contributions should be considered a starting point for discussion as we develop best practices going forward.
This is still a bit WIPpy. Some amount of work is still required to:

- [x] Figure out how to grant RBAC roles to users (the Terraform EKS module has a sequencing issue: the `aws-auth` ConfigMap does not yet exist when custom user maps are supposed to be applied). Deferred to #9.
- [x] Create a Route 53 alias to the Franklin ELB. Rolled into #7.
- [x] Roll the infrastructure module into the `0-hardware` stage. Deferred to #10.
I've deferred the outstanding tasks in this PR to separate issues so that I can just merge the feature. This code, after all, does work as desired, even if it isn't perfect. We can improve things at a later date.