
Terraform provider for Opni #1384

Open dbason opened 1 year ago

dbason commented 1 year ago

There should be a way to manage Opni using IaC (infrastructure as code) principles. We don't recommend interacting directly with the CRDs, so we should create a Terraform provider that interacts with the gateway API.
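A rough sketch of what such a provider could look like. The `rancher/opni` provider source, the `opni_cluster` resource, and every attribute name below are hypothetical, since no such provider exists yet:

```hcl
# Hypothetical provider; "rancher/opni" is not a published provider and
# all attribute names here are illustrative assumptions.
terraform {
  required_providers {
    opni = {
      source = "rancher/opni"
    }
  }
}

provider "opni" {
  # Assumed: the provider would talk to the gateway API rather than the CRDs.
  address = "https://opni-gateway.example.com"
}

# Assumed resource type mapping onto the gateway's cluster management API.
resource "opni_cluster" "downstream" {
  name = "downstream-prod"
}
```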

jan-law commented 1 year ago

Terraform files and documentation to install and upgrade Opni and Opni Agent: https://github.com/jan-law/terraform-install-opni
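For reference, driving the install through Terraform's Helm provider might look roughly like this. `helm_release` is the real hashicorp/helm resource, but the chart repository URL, chart name, and namespace below are assumptions for illustration, not verified against the linked repo:

```hcl
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

# Installs the Opni chart; repository URL and chart name are assumed.
resource "helm_release" "opni" {
  name             = "opni"
  repository       = "https://charts.opni.io"
  chart            = "opni"
  namespace        = "opni"
  create_namespace = true
}
```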

ron1 commented 1 year ago

Opni is a Kubernetes-native application, often targeting Kubernetes platform engineering teams that must manage significant numbers of Kubernetes clusters. The de facto standard for managing Kubernetes workloads leverages GitOps tools like Fleet, ArgoCD, and FluxCD to apply declarative core Kubernetes manifests and CRDs to Kubernetes clusters. Kubernetes platform engineering teams are adopting tools like Cluster API, Crossplane, and even the Flux Terraform controller to declaratively manage infrastructure just like they manage Kubernetes-native workloads.

So consider embracing the use of CRDs to declaratively manage the complex set of components that make up the Opni server, rather than recommending against their usage. Platform engineers want to use a CRD rather than a GUI or command-line tool to tune Cortex, for example. They also want access to the Opster OpenSearch custom resource so that they can tune OpenSearch for different target environments.
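As an illustration, here is a minimal sketch of tuning OpenSearch through the Opster custom resource, encoded with Terraform's `kubernetes_manifest` resource only to keep one notation in this thread; in a GitOps flow the same object would live as plain YAML in Git. The field names follow the `opensearch.opster.io/v1` `OpenSearchCluster` schema as I understand it, and all values are placeholders:

```hcl
# Sketch: tune OpenSearch declaratively via the Opster OpenSearchCluster CR.
# Values are illustrative placeholders.
resource "kubernetes_manifest" "opni_opensearch" {
  manifest = {
    apiVersion = "opensearch.opster.io/v1"
    kind       = "OpenSearchCluster"
    metadata = {
      name      = "opni"
      namespace = "opni"
    }
    spec = {
      general = {
        version     = "2.8.0"
        serviceName = "opni-opensearch"
      }
      nodePools = [
        {
          component = "nodes"
          replicas  = 3
          diskSize  = "32Gi"
          roles     = ["master", "data"]
        }
      ]
    }
  }
}
```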

Rather than relying on a legacy IaaC tool like Terraform to connect opni agent clusters to the opni server cluster, could this be accomplished in a more kubernetes-native way by using some glue code along with a resource like the external secrets operator PushSecret that can securely pass around secrets containing bootstrap tokens and certificate pins? It would be great to provide an example of how Fleet or Argocd could use the existing helm charts in a declarative manner to deploy the Opni server to a downstream admin cluster and deploy w/registration opni agents to the rancher mgmt cluster and one or more downstream user clusters.