Mirantis / hmc


Create an instruction for adding provider templates to Project 2A #636

Open slysunkin opened 2 weeks ago

slysunkin commented 2 weeks ago

Create an instruction for adding provider templates to Project 2A:

  1. Based on CAPI instructions from: https://cluster-api.sigs.k8s.io/user/quick-start.html
  2. Making decisions on control plane selection
  3. Converting generated CAPI yaml to Project 2A templates
  4. Running and testing recommendations
slysunkin commented 1 week ago

Introduction

To generate YAML templates for a CAPI (Cluster API) provider, you'll typically need to define YAML files that describe the infrastructure, cluster, and machines using the CAPI resources and the specific provider's CRDs (Custom Resource Definitions).

Steps to Generate YAML Templates for CAPI Provider:

  1. Understand the CAPI Resources: CAPI uses several key resources:

    • Cluster: Describes the entire cluster.
    • Machine: Represents a single machine instance.
    • MachineSet: Maintains a fixed number of identical machines (analogous to a ReplicaSet).
    • MachineDeployment: Manages MachineSets with rolling-update semantics (analogous to a Deployment).
    • KubeadmControlPlane (for Kubeadm-based providers): Describes the control plane for the cluster.
    • Infrastructure provider resources (e.g., AWSCluster, AWSMachine): Describe the provider-specific infrastructure, such as VMs, networks, and load balancers (AWS, Azure, GCP, etc.).
  2. Choose Your CAPI Provider: CAPI works with different infrastructure providers. Each provider (e.g., AWS, Azure, vSphere, etc.) has its own set of resources, configurations, and CRDs. Some of the popular CAPI providers are:

    • Cluster API - AWS (cluster-api-provider-aws)
    • Cluster API - Azure (cluster-api-provider-azure)
    • Cluster API - vSphere (cluster-api-provider-vsphere)
    • Cluster API - GCP (cluster-api-provider-gcp)

    Depending on your provider, you'll need to install the necessary CRDs for that provider.

  3. Install the Necessary CRDs: Install the CRDs (Custom Resource Definitions) from the relevant CAPI provider. For example, for AWS, you would run:

    kubectl apply -k github.com/kubernetes-sigs/cluster-api-provider-aws/config/crd/
  4. Create a Cluster YAML Template: You will define a Cluster resource first. Below is an example of how you would define a Cluster YAML for a specific provider.

    Cluster YAML (for AWS):

    apiVersion: cluster.x-k8s.io/v1alpha4
    kind: Cluster
    metadata:
      name: my-cluster
      namespace: default
    spec:
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
        kind: AWSCluster
        name: my-cluster-infra
  5. Create Infrastructure YAML Template: Define the infrastructure specific to your provider (e.g., for AWS, you will create an AWSCluster resource). Note that instance types and node counts belong to the machine-level resources in the later steps, not to AWSCluster. Here is a minimal example for AWS:

    AWSCluster YAML (for AWS):

    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
    kind: AWSCluster
    metadata:
      name: my-cluster-infra
      namespace: default
    spec:
      region: us-west-2
      sshKeyName: default
  6. Create Machine YAML Templates: Each machine in your cluster is represented by a Machine resource. Below is an example of a Machine resource template (a sketch of the bootstrap config it references appears after this list):

    Machine YAML (for AWS):

    apiVersion: cluster.x-k8s.io/v1alpha4
    kind: Machine
    metadata:
      name: my-cluster-machine
      namespace: default
    spec:
      clusterName: my-cluster
      version: v1.24.0
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
          kind: KubeadmConfig
          name: my-cluster-bootstrap
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
        kind: AWSMachine
        name: my-cluster-machine-infra
  7. Configure KubeadmControlPlane (for Kubeadm-based providers): If you are using a Kubeadm-based control plane (e.g., for AWS), you will define a KubeadmControlPlane resource, which defines the number of control plane nodes and their configuration. Its machines are cloned from an infrastructure machine template:

    KubeadmControlPlane YAML (for AWS):

    apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
    kind: KubeadmControlPlane
    metadata:
      name: my-cluster-control-plane
      namespace: default
    spec:
      version: v1.24.0
      replicas: 3
      kubeadmConfigSpec: {}
      machineTemplate:
        infrastructureRef:
          apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
          kind: AWSMachineTemplate
          name: my-cluster-control-plane-infra
  8. Generate MachineSet or MachineDeployment: Optionally, you can define a MachineSet or MachineDeployment if you want to scale machines. This is useful for scaling your worker nodes. Note that a MachineSet clones its machines from template resources (KubeadmConfigTemplate and AWSMachineTemplate) rather than referencing individual KubeadmConfig or AWSMachine objects.

    MachineSet YAML (for AWS):

    apiVersion: cluster.x-k8s.io/v1alpha4
    kind: MachineSet
    metadata:
      name: my-cluster-machineset
      namespace: default
    spec:
      clusterName: my-cluster
      replicas: 3
      selector:
        matchLabels:
          cluster.x-k8s.io/cluster-name: my-cluster
      template:
        metadata:
          labels:
            cluster.x-k8s.io/cluster-name: my-cluster
        spec:
          clusterName: my-cluster
          version: v1.24.0
          bootstrap:
            configRef:
              apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
              kind: KubeadmConfigTemplate
              name: my-cluster-bootstrap-template
          infrastructureRef:
            apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
            kind: AWSMachineTemplate
            name: my-cluster-machine-template
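The bootstrap object that the standalone Machine in step 6 references by name (my-cluster-bootstrap) is not shown above. A minimal sketch of what it might look like follows; the kubelet argument is illustrative and provider-specific:

    apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
    kind: KubeadmConfig
    metadata:
      name: my-cluster-bootstrap
      namespace: default
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            cloud-provider: aws   # illustrative assumption; adjust per provider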

9. Apply the YAML Templates:

After you have your YAML templates defined, apply them to your Kubernetes cluster using kubectl:

kubectl apply -f cluster.yaml
kubectl apply -f awscluster.yaml
kubectl apply -f machine.yaml
kubectl apply -f kubeadmcontrolplane.yaml
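Once applied, the rollout can be checked with standard CAPI tooling. A sketch, assuming clusterctl is installed and the resource names above:

kubectl get cluster,machines -n default
clusterctl describe cluster my-cluster -n default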

Conclusion:

More steps will be described in the follow-up comments below.

slysunkin commented 1 week ago

Example steps for EKS provider

Useful resources:

    • Kubernetes Cluster API Quick Start
    • Kubernetes Cluster API Provider AWS
    • Creating a EKS cluster

export AWS_ACCESS_KEY_ID="<KEY-ID>"
export AWS_SECRET_ACCESS_KEY="<ACCESS-KEY>"
export AWS_SESSION_TOKEN="<TOKEN>"
export AWS_REGION=<REGION>
export AWS_B64ENCODED_CREDENTIALS=$(./bin/clusterawsadm bootstrap credentials encode-as-profile)
export AWS_NODE_MACHINE_TYPE=<MACHINE-TYPE-SIZE>
export AWS_SSH_KEY_NAME=<SSH-KEY-NAME>
export KUBERNETES_VERSION=<VERSION>

clusterctl init --infrastructure aws

clusterctl generate cluster managed-test --flavor eks > capi-eks.yaml

Generated CAPI definitions (example), before any 2A templating is applied:

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: managed-test
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta2
    kind: AWSManagedControlPlane
    name: managed-test-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSManagedCluster
    name: managed-test
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSManagedCluster
metadata:
  name: managed-test
  namespace: default
spec: {}
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: AWSManagedControlPlane
metadata:
  name: managed-test-control-plane
  namespace: default
spec:
  region: us-east-2
  sshKeyName: slysunkin
  version: 1.30.0
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: managed-test-md-0
  namespace: default
spec:
  clusterName: managed-test
  replicas: 0
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
          kind: EKSConfigTemplate
          name: managed-test-md-0
      clusterName: managed-test
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
        kind: AWSMachineTemplate
        name: managed-test-md-0
      version: 1.30.0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachineTemplate
metadata:
  name: managed-test-md-0
  namespace: default
spec:
  template:
    spec:
      iamInstanceProfile: nodes.cluster-api-provider-aws.sigs.k8s.io
      instanceType: t3.small
      sshKeyName: slysunkin
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: EKSConfigTemplate
metadata:
  name: managed-test-md-0
  namespace: default
spec:
  template: {}
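Once the generated manifest is applied, the EKS cluster can be checked and its kubeconfig retrieved. A sketch, assuming the names above (for EKS, CAPA stores the user-facing kubeconfig in a separate <cluster>-user-kubeconfig secret):

kubectl apply -f capi-eks.yaml
clusterctl describe cluster managed-test
kubectl get secret managed-test-user-kubeconfig -o jsonpath='{.data.value}' | base64 -d > managed-test.kubeconfig
kubectl --kubeconfig managed-test.kubeconfig get nodes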

Role definitions:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: capa-eks-control-plane-system-capa-eks-control-plane-manager-role
  labels:
    cluster.x-k8s.io/provider: control-plane-aws-eks
    clusterctl.cluster.x-k8s.io: ''
rules:
  - verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
    apiGroups:
      - ''
    resources:
      - secrets
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - cluster.x-k8s.io
    resources:
      - clusters
      - clusters/status
      - machinedeployments
      - machinedeployments/status
  - verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
    apiGroups:
      - controlplane.cluster.x-k8s.io
    resources:
      - awsmanagedcontrolplanes
  - verbs:
      - get
      - patch
      - update
    apiGroups:
      - controlplane.cluster.x-k8s.io
    resources:
      - awsmanagedcontrolplanes/status
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - infrastructure.cluster.x-k8s.io
    resources:
      - awsmanagedclusters
      - awsmanagedclusters/status
      - awsmachinetemplates
      - awsmachinetemplates/status
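For the role to take effect it must be bound to the provider controller's ServiceAccount. A hypothetical binding sketch; the subject name and namespace below are assumptions and must match the actual deployment:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: capa-eks-control-plane-system-capa-eks-control-plane-manager-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: capa-eks-control-plane-system-capa-eks-control-plane-manager-role
subjects:
  - kind: ServiceAccount
    name: capa-eks-control-plane-controller-manager   # assumed name
    namespace: capa-eks-control-plane-system          # assumed namespace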

To be continued...

slysunkin commented 1 week ago

@bnallapeta, I've added a couple of articles on CAPI provider development.

slysunkin commented 1 week ago

EKS Templates

[!IMPORTANT] The cornerstone of CAPI provider templates is the choice of control plane. In the case of EKS (or GCP, for example), the provider's managed control plane should be retained (AWSManagedControlPlane for EKS, GCPManagedControlPlane for GCP). For AWS, Azure, etc., a K0s or K0smotron control plane should be used instead, as sketched below.
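To illustrate, the part that changes between the two cases is the Cluster's controlPlaneRef. A sketch (the K0sControlPlane apiVersion and the names are assumptions based on current k0smotron releases; check the installed CRDs):

# EKS / GCP: keep the managed control plane from the infrastructure provider
controlPlaneRef:
  apiVersion: controlplane.cluster.x-k8s.io/v1beta2
  kind: AWSManagedControlPlane
  name: managed-test-control-plane

# Plain AWS, Azure, etc.: reference a k0smotron-managed control plane instead
controlPlaneRef:
  apiVersion: controlplane.cluster.x-k8s.io/v1beta1
  kind: K0sControlPlane
  name: my-cluster-control-plane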

The generated YAML for EKS can be converted into a 2A template in a straightforward way:

templates/cluster/aws-eks
├── Chart.yaml
├── templates
│   ├── _helpers.tpl
│   ├── awsmachinetemplate-worker.yaml
│   ├── awsmanagedcluster.yaml
│   ├── awsmanagedcontrolplane.yaml
│   ├── cluster.yaml
│   ├── eksconfigtemplate.yaml
│   └── machinedeployment.yaml
├── values.schema.json
└── values.yaml

After parameterizing all of the cluster object definitions with templated values, the new template is essentially ready.
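For illustration, a minimal sketch of how templates/awsmanagedcontrolplane.yaml could be parameterized; the value names (region, sshKeyName, kubernetes.version) are assumptions, not the final 2A values schema:

apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: AWSManagedControlPlane
metadata:
  name: {{ .Release.Name }}-control-plane
spec:
  region: {{ .Values.region }}
  sshKeyName: {{ .Values.sshKeyName }}
  version: {{ .Values.kubernetes.version }}

with matching defaults in values.yaml:

region: us-east-2
sshKeyName: ""
kubernetes:
  version: 1.30.0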

To be continued...