holos-run / holos

Holos - The Holistic platform manager
https://holos.run
Apache License 2.0

api: move #Resources to package holos #241

Closed · jeffmccune closed this 1 month ago

jeffmccune commented 1 month ago

Previously, the #Resources struct listing the valid resources to use with APIObjects in each of the component types was closed. This made it very difficult for users to mix in new resource kinds and use the Kubernetes component kind.
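To illustrate the problem (a minimal sketch; the field names are illustrative, not the actual schema): a CUE definition without a pattern constraint or `...` is closed, so unifying in an unlisted resource kind is rejected by the evaluator.

```cue
package holos

// Illustrative sketch of the old behavior: a definition with only
// concrete fields is closed, so a new resource kind cannot be mixed in.
#ClosedResources: {
	Namespace: [_]: {kind: "Namespace", metadata: name: string}
}

// This unification would fail with "field not allowed: Jeff":
// objects: #ClosedResources & {Jeff: foo: {kind: "Jeff"}}
```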

This patch moves the definition of the valid resources to package holos from the schema API. The schema still enforces some light constraints, but doesn't keep the struct closed.

A new convention is introduced: all components are configured through _ComponentConfig, defined at the root, which is then unified with each of the component kinds. See schema.gen.cue for how this works.
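A rough sketch of the convention (field and definition names here are illustrative assumptions; schema.gen.cue is the authoritative source): shared configuration is declared once at the root, then unified into every component kind.

```cue
package holos

// Sketch only: configuration shared by all components is defined
// once at the root of the configuration...
_ComponentConfig: {
	Name:      string
	Resources: #Resources
}

// ...then unified into each component kind, so a mix-in applied to
// _ComponentConfig reaches every kind uniformly.
#Helm:              _ComponentConfig & {Chart: string}
#Kustomize:         _ComponentConfig & {}
#KubernetesObjects: _ComponentConfig & {}
```

This is what allows features like ArgoCD applications to apply to all component kinds rather than to Helm alone.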

This approach enables mixing in ArgoCD applications to all component kinds, not just Helm as was done previously. Similarly, the user-constrained #Resources definition unifies with all component kinds.

It's OK to leave the yaml.Marshal in the schema API. The user should never have to deal with #APIObjects directly; instead they pass Resources through the schema API, which uses #APIObjects to build the apiObjectMap for each component type and for the BuildPlan.
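The flow described above can be sketched roughly as follows (field names are illustrative assumptions, not the actual schema): each resource object is marshaled to a YAML string and collected into the apiObjectMap the BuildPlan consumes.

```cue
package holos

import "encoding/yaml"

// Illustrative sketch: user Resources feed in as apiObjects; each
// object is yaml.Marshal'ed into apiObjectMap, keyed by kind and label.
#APIObjects: {
	apiObjects: [Kind=string]: [Label=string]: {...}
	apiObjectMap: [Kind=string]: [Label=string]: string

	for kind, objects in apiObjects {
		for label, obj in objects {
			apiObjectMap: (kind): (label): yaml.Marshal(obj)
		}
	}
}
```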

This is still more awkward than I want, but it's a good step in the right direction.

Closes: #237

cloudflare-workers-and-pages[bot] commented 1 month ago

Deploying holos with Cloudflare Pages

Latest commit: f58d791
Status: ✅  Deploy successful!
Preview URL: https://126249e2.holos.pages.dev
Branch Preview URL: https://jeff-237-move-yaml-marshal.holos.pages.dev


jeffmccune commented 1 month ago

We can add arbitrary resources when #Resources is left open by the user.

cue export --out yaml ./components/namespaces
kind: BuildPlan
apiVersion: v1alpha3
spec:
  components:
    kubernetesObjectsList:
      - kind: KubernetesObjects
        apiVersion: v1alpha3
        metadata:
          name: namespaces
        apiObjectMap:
          Jeff:
            foo: |
              kind: Jeff
              metadata:
                name: foo
        skip: false
diff --git a/components/namespaces/namespaces.cue b/components/namespaces/namespaces.cue
index a6cc0f2..bee6810 100644
--- a/components/namespaces/namespaces.cue
+++ b/components/namespaces/namespaces.cue
@@ -2,7 +2,8 @@ package holos

 let Objects = {
        Name: "namespaces"
-       Resources: Namespace: #Namespaces
+       // Resources: Namespace: #Namespaces
+       Resources: Jeff: foo: _
 }

 // Produce a kubernetes objects build plan.

resources.cue

package holos

import (
    corev1 "k8s.io/api/core/v1"
    appsv1 "k8s.io/api/apps/v1"
    rbacv1 "k8s.io/api/rbac/v1"
    batchv1 "k8s.io/api/batch/v1"

    ci "cert-manager.io/clusterissuer/v1"
    rgv1 "gateway.networking.k8s.io/referencegrant/v1beta1"
    certv1 "cert-manager.io/certificate/v1"
    hrv1 "gateway.networking.k8s.io/httproute/v1"
    gwv1 "gateway.networking.k8s.io/gateway/v1"
)

#Resources: {
    [Kind=string]: [InternalLabel=string]: {
        kind: Kind
        metadata: name: string | *InternalLabel
    }

    Certificate: [_]:        certv1.#Certificate
    ClusterIssuer: [_]:      ci.#ClusterIssuer
    ClusterRole: [_]:        rbacv1.#ClusterRole
    ClusterRoleBinding: [_]: rbacv1.#ClusterRoleBinding
    ConfigMap: [_]:          corev1.#ConfigMap
    CronJob: [_]:            batchv1.#CronJob
    Deployment: [_]:         appsv1.#Deployment
    HTTPRoute: [_]:          hrv1.#HTTPRoute
    Job: [_]:                batchv1.#Job
    Namespace: [_]:          corev1.#Namespace
    ReferenceGrant: [_]:     rgv1.#ReferenceGrant
    Role: [_]:               rbacv1.#Role
    RoleBinding: [_]:        rbacv1.#RoleBinding
    Service: [_]:            corev1.#Service
    ServiceAccount: [_]:     corev1.#ServiceAccount
    StatefulSet: [_]:        appsv1.#StatefulSet

    Gateway: [_]: gwv1.#Gateway & {
        spec: gatewayClassName: string | *"istio"
    }
}

An open question is how to close this struct back down later, but that's a problem for another day, and it's not clear it's necessary.