At an early stage of development, it was thought that this repository would provide more general tools for devs to create Kubernetes clusters. Over its lifetime, though, it has become clear that we're not going to be proliferating many clusters, and that even when we do create new cluster deployments, there will always be a common core of functionality that we hang applications off of.
As a result of this history, most of the functionality of the `0-hardware` stage is actually delegated to the `infrastructure` module. This is not a critical failure, causing only mild irritation in the development cycle. What is more troublesome is that examples of creating these clusters often rely on the `kubernetes` provider to finish configuring the cluster, and that relies on outputs from the EKS module. There is a circularity in the dependencies here, and it is generally considered bad form for provider configurations to depend on the outputs of resources anyhow.
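To make the circularity concrete, here is a minimal sketch of the shape involved. The module source, names, and output attributes below follow the public `terraform-aws-modules/eks/aws` module and are illustrative only; the exact outputs our `infrastructure` module exposes may differ.

```hcl
# The cluster is created by a module...
module "eks" {
  source = "terraform-aws-modules/eks/aws"   # illustrative; our wiring lives in `infrastructure`

  cluster_name    = "example"
  cluster_version = "1.29"
  # ... VPC, node groups, etc.
}

data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_name
}

# ...and the provider that will configure that same cluster is built from the
# module's outputs, which is the circular dependency described above.
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```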
I solved this, originally, by separating non-application setup into the `0-hardware` and `1-services` stages. This works but is annoying. It's also a somewhat incomplete solution, since some Kubernetes configuration still has to happen to finish setting up the cluster itself. This is most noticeable when managing the list of users/roles given access to the cluster, which can require creating (Cluster)RoleBindings; those have to be made with a provider (either `kubernetes` or `kubectl`) that relies on the outputs of the EKS module.
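For example, giving a mapped group of users cluster-wide read-only access requires an in-cluster RBAC object, which only the `kubernetes` (or `kubectl`) provider can create. A hedged sketch, with the group name purely illustrative:

```hcl
# Grant cluster-wide read-only access to a group that IAM principals are
# mapped into. This object lives inside the cluster, so it can only be
# created once the provider configured from the EKS outputs is usable.
resource "kubernetes_cluster_role_binding" "readonly" {
  metadata {
    name = "readonly-users"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "view"   # built-in read-only ClusterRole
  }

  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Group"
    name      = "readonly"   # hypothetical group name, for illustration
  }
}
```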
I'm offering this PR as a means to simplify the structure of this repository, but it's not without its problems. It does not eliminate the circular dependency raised by the need to supply configuration to whichever k8s provider is in use. It also requires a complete tear-down of any existing cluster built with the old module-based code: simply applying this change to an existing cluster will break everything, because the old cluster will be destroyed as part of the change.
These are not insignificant costs, and it's not at all clear that this is worth the trouble of pulling down running clusters just to get a simpler tree. Furthermore, we didn't solve the problem of a complex startup; it's still convoluted:
1. Build the `0-hardware` stage with the `cold_start` variable set to `true`.
2. Build the `1-services` stage (which creates the role bindings needed for user/role management).
3. Rebuild the `0-hardware` stage with `cold_start` set to `false` (one possible wiring of this switch is sketched after this list).
4. Deploy whichever application modules are desired.
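For reference, the `cold_start` switch amounts to gating anything in `0-hardware` that depends on objects the `1-services` stage creates. The sketch below is a guess at that mechanism, not a copy of the actual `0-hardware` code; the gated resource and both variables are illustrative.

```hcl
variable "cold_start" {
  description = "True only on the very first apply, before 1-services exists."
  type        = bool
  default     = false
}

variable "admin_role_arns" {
  description = "IAM role ARNs (created by 1-services) to map into the cluster."
  type        = list(string)
  default     = []
}

# Skipped on the first pass; the second 0-hardware apply (cold_start = false)
# adds role mappings that only exist once 1-services has run.
resource "kubernetes_config_map_v1_data" "aws_auth" {
  count = var.cold_start ? 0 : 1

  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([
      for arn in var.admin_role_arns : {
        rolearn  = arn
        username = "cluster-admin"
        groups   = ["system:masters"]
      }
    ])
  }

  force = true
}
```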
For new deployments, this will probably seem worth it, but for old deployments, the juice may not be worth the squeeze. As such, this PR should probably sit in draft until the simpler tree clearly justifies the disruption. New deployments should feel free to build on it, as it does seem likely to be merged in the long term.
Closes #9 Closes #21 Closes #22