weaveworks / weave-gitops-enterprise

This repo provides the enterprise level features for the weave-gitops product, including CAPI cluster creation and team workspaces.
https://docs.gitops.weave.works/
Apache License 2.0

[CLI] Adding cluster with profiles doesn't work via command line and there is no corresponding profiles .yaml generated #1228

Open · saeedfazal opened this issue 2 years ago

saeedfazal commented 2 years ago

The CLI `add cluster` command doesn't show whether the template's default profiles will be installed alongside the profiles passed via `--profile`. Moreover, unlike the UI, the CLI output provides no information about profile layering.

saeed@Saeeds-MBP weave-gitops-enterprise % /usr/local/bin/gitops add cluster \
  --from-template cluster-template-development-0 \
  --set CLUSTER_NAME=cli-end-to-end-capd-cluster-1 \
  --set NAMESPACE=default \
  --set KUBERNETES_VERSION=1.23.3 \
  --set CONTROL_PLANE_MACHINE_COUNT=1 \
  --set WORKER_MACHINE_COUNT=1 \
  --profile 'name=cert-manager,version=0.0.7' \
  --profile 'name=weave-policy-agent,version=0.3.1' \
  --branch "br-cli-end-to-end-capd-cluster-1" \
  --title "CAPD pull request" \
  --url https://gitlab.git.dev.weave.works/wge-test/kind-management \
  --commit-message "CAPD capi template" \
  --description "This PR creates a new CAPD Kubernetes cluster" \
  --username wego-admin --password wego-enterprise \
  --endpoint https://weave.gitops.enterprise.com:30080 \
  --insecure-skip-tls-verify
name=cert-manager
version=0.0.7
name=weave-policy-agent
version=0.3.1
Created pull request: https://gitlab.git.dev.weave.works/wge-test/kind-management/-/merge_requests/1

Firstly, the template above has one default profile (podinfo), but the CLI gives no indication of whether it will be installed. Secondly, no corresponding profiles .yaml file is created in the config repo once the CAPI cluster is created and ready to use. NB: when you create a cluster via the UI using the same template, the corresponding profiles .yaml is created in the config repository.
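For comparison, the file the UI writes is typically one Flux HelmRelease per selected profile. The sketch below shows roughly what the CLI would be expected to produce for the cert-manager profile; the namespace, interval, and HelmRepository name are assumptions for illustration, not taken from this issue:

```yaml
# Illustrative only: the kind of profiles manifest the UI writes and the
# CLI currently does not. Namespace and sourceRef names are assumed.
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: cert-manager
  namespace: flux-system
spec:
  interval: 10m
  chart:
    spec:
      chart: cert-manager
      version: 0.0.7            # matches --profile 'name=cert-manager,version=0.0.7'
      sourceRef:
        kind: HelmRepository
        name: weaveworks-charts # assumed name of the profiles Helm repository
        namespace: flux-system
```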

Example capi template used:

apiVersion: capi.weave.works/v1alpha1
kind: CAPITemplate
metadata:
  name: cluster-template-development-0
  namespace: default
  annotations:
    capi.weave.works/profile-0: '{"name": "podinfo", "version": "6.0.1"}'
spec:
  description: This is the std. CAPD template 0
  params:
    - name: CLUSTER_NAME
      required: true
      description: This is used for the cluster naming.
    - name: NAMESPACE
      description: Namespace to create the cluster in
    - name: KUBERNETES_VERSION
      description: Kubernetes version to use for the cluster
      options: ["1.19.11", "1.21.1", "1.22.0", "1.23.3"]
    - name: CONTROL_PLANE_MACHINE_COUNT
      description: Number of control planes
      options: ["1", "2", "3"]
    - name: WORKER_MACHINE_COUNT
      description: Number of worker machines
  resourcetemplates:
    - apiVersion: gitops.weave.works/v1alpha1
      kind: GitopsCluster
      metadata:
        name: "${CLUSTER_NAME}"
        namespace: "${NAMESPACE}"
        labels:
          weave.works/flux: bootstrap
          weave.works/apps: "capd"
        annotations:
          metadata.weave.works/dashboard.prometheus: https://prometheus.io/
      spec:
        capiClusterRef:
          name: "${CLUSTER_NAME}"
    - apiVersion: cluster.x-k8s.io/v1beta1
      kind: Cluster
      metadata:
        name: "${CLUSTER_NAME}"
        namespace: "${NAMESPACE}"
        labels:
          cni: calico
      spec:
        clusterNetwork:
          pods:
            cidrBlocks:
            - 192.168.0.0/16
          serviceDomain: cluster.local
          services:
            cidrBlocks:
            - 10.128.0.0/12
        controlPlaneRef:
          apiVersion: controlplane.cluster.x-k8s.io/v1beta1
          kind: KubeadmControlPlane
          name: "${CLUSTER_NAME}-control-plane"
          namespace: "${NAMESPACE}"
        infrastructureRef:
          apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
          kind: DockerCluster
          name: "${CLUSTER_NAME}"
          namespace: "${NAMESPACE}"
    - apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerCluster
      metadata:
        name: "${CLUSTER_NAME}"
        namespace: "${NAMESPACE}"
    - apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerMachineTemplate
      metadata:
        name: "${CLUSTER_NAME}-control-plane"
        namespace: "${NAMESPACE}"
      spec:
        template:
          spec:
            extraMounts:
            - containerPath: /var/run/docker.sock
              hostPath: /var/run/docker.sock
    - apiVersion: controlplane.cluster.x-k8s.io/v1beta1
      kind: KubeadmControlPlane
      metadata:
        name: "${CLUSTER_NAME}-control-plane"
        namespace: "${NAMESPACE}"
      spec:
        kubeadmConfigSpec:
          clusterConfiguration:
            apiServer:
              certSANs:
              - localhost
              - 127.0.0.1
              - 0.0.0.0
            controllerManager:
              extraArgs:
                enable-hostpath-provisioner: "true"
          initConfiguration:
            nodeRegistration:
              criSocket: /var/run/containerd/containerd.sock
              kubeletExtraArgs:
                cgroup-driver: cgroupfs
                eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
          joinConfiguration:
            nodeRegistration:
              criSocket: /var/run/containerd/containerd.sock
              kubeletExtraArgs:
                cgroup-driver: cgroupfs
                eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
        machineTemplate:
          infrastructureRef:
            apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
            kind: DockerMachineTemplate
            name: "${CLUSTER_NAME}-control-plane"
            namespace: "${NAMESPACE}"
        replicas: "${CONTROL_PLANE_MACHINE_COUNT}"
        version: "${KUBERNETES_VERSION}"
    - apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerMachineTemplate
      metadata:
        name: "${CLUSTER_NAME}-md-0"
        namespace: "${NAMESPACE}"
      spec:
        template:
          spec: {}
    - apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
      kind: KubeadmConfigTemplate
      metadata:
        name: "${CLUSTER_NAME}-md-0"
        namespace: "${NAMESPACE}"
      spec:
        template:
          spec:
            joinConfiguration:
              nodeRegistration:
                kubeletExtraArgs:
                  cgroup-driver: cgroupfs
                  eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
    - apiVersion: cluster.x-k8s.io/v1beta1
      kind: MachineDeployment
      metadata:
        name: "${CLUSTER_NAME}-md-0"
        namespace: "${NAMESPACE}"
      spec:
        clusterName: "${CLUSTER_NAME}"
        replicas: "${WORKER_MACHINE_COUNT}"
        selector:
          matchLabels: null
        template:
          spec:
            bootstrap:
              configRef:
                apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
                kind: KubeadmConfigTemplate
                name: "${CLUSTER_NAME}-md-0"
                namespace: "${NAMESPACE}"
            clusterName: "${CLUSTER_NAME}"
            infrastructureRef:
              apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
              kind: DockerMachineTemplate
              name: "${CLUSTER_NAME}-md-0"
              namespace: "${NAMESPACE}"
            version: "${KUBERNETES_VERSION}"
bigkevmcd commented 2 years ago

Originally, the annotations were a hint to the UI.

But, I think if we're going to build more on top, we should find space in the CRD...

They could technically be just templates, rather than annotations, which would simplify the CLI and UI.
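To make the suggestion concrete, "finding space in the CRD" might look like the sketch below. The `defaultProfiles` field and `layer` key are hypothetical names invented for illustration; they are not part of the current CAPITemplate API:

```yaml
apiVersion: capi.weave.works/v1alpha1
kind: CAPITemplate
metadata:
  name: cluster-template-development-0
spec:
  # Hypothetical first-class field replacing the
  # capi.weave.works/profile-N annotations.
  defaultProfiles:
    - name: podinfo
      version: 6.0.1
      layer: apps   # layering information could be declared here too
  # params and resourcetemplates unchanged from today's spec
```

A structured field like this would let both the CLI and UI list default profiles and layering from one schema, instead of each parsing JSON out of annotations.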