To generate YAML templates for a CAPI (Cluster API) provider, you typically define YAML files that describe the cluster, its infrastructure, and its machines using the core CAPI resources plus the provider's CRDs (Custom Resource Definitions).

Understand the CAPI Resources: CAPI is built around a few key resources: `Cluster`, `Machine`, `MachineSet`/`MachineDeployment`, a control plane resource (e.g., `KubeadmControlPlane`), and the provider-specific infrastructure resources they reference.
Choose Your CAPI Provider: CAPI works with different infrastructure providers. Each provider (e.g., AWS, Azure, vSphere, etc.) has its own set of resources, configurations, and CRDs. Some of the popular CAPI providers are:

- `cluster-api-provider-aws`
- `cluster-api-provider-azure`
- `cluster-api-provider-vsphere`
- `cluster-api-provider-gcp`

Depending on your provider, you'll need to install the necessary CRDs for that provider.
Install the Necessary CRDs: Install the CRDs (Custom Resource Definitions) from the relevant CAPI provider. The standard route is `clusterctl init` (shown later), but you can also apply them directly; for example, for AWS:

```sh
kubectl apply -k github.com/kubernetes-sigs/cluster-api-provider-aws/config/crd/
```
Create a Cluster YAML Template:
You define a `Cluster` resource first. Below is an example of a `Cluster` YAML for a specific provider.

Cluster YAML (for AWS):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
  namespace: default
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSCluster
    name: my-cluster-infra
```
Create an Infrastructure YAML Template:
Define the infrastructure specific to your provider (e.g., for AWS, you create an `AWSCluster` resource). Note that `AWSCluster` describes cluster-wide infrastructure such as the region and networking; machine sizing and scaling belong in the machine resources shown below, not here. Here's an example for AWS:

AWSCluster YAML (for AWS):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
  name: my-cluster-infra
  namespace: default
spec:
  region: us-west-2
  sshKeyName: my-ssh-key
```
Create Machine YAML Templates:
Each machine in your cluster is represented by a `Machine` resource. Note that `spec.clusterName` is required and the bootstrap `configRef` needs its own `apiVersion` and `kind`. Below is an example of a `Machine` resource template:

Machine YAML (for AWS):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Machine
metadata:
  name: my-cluster-machine
  namespace: default
spec:
  clusterName: my-cluster
  version: v1.24.0
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
      kind: KubeadmConfig
      name: my-cluster-bootstrap
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSMachine
    name: my-cluster-machine-infra
```
Configure KubeadmControlPlane (for kubeadm-based providers):
If you are using a kubeadm-based control plane (e.g., for AWS), you define a `KubeadmControlPlane` resource, which sets the number of control plane nodes and their configuration. Note that it references an `AWSMachineTemplate` (not an `AWSMachine`) via `machineTemplate`:

KubeadmControlPlane YAML (for AWS):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane
  namespace: default
spec:
  version: v1.24.0
  replicas: 3
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
      kind: AWSMachineTemplate
      name: my-cluster-control-plane-infra
  kubeadmConfigSpec: {}
```
Generate a MachineSet or MachineDeployment:
Optionally, you can define a `MachineSet` or `MachineDeployment` if you want to scale machines; this is useful for scaling your worker nodes (a `MachineDeployment` sketch follows the example below). Note that the machine template references a `KubeadmConfigTemplate` and an `AWSMachineTemplate`, from which per-machine configs are stamped out:

MachineSet YAML (for AWS):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineSet
metadata:
  name: my-cluster-machineset
  namespace: default
spec:
  clusterName: my-cluster
  replicas: 3
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: my-cluster
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: my-cluster
    spec:
      clusterName: my-cluster
      version: v1.24.0
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: my-cluster-bootstrap
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
        kind: AWSMachineTemplate
        name: my-cluster-machine-infra
```
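And here is a minimal, hedged sketch of the `MachineDeployment` variant mentioned above (the name `my-cluster-md-0` is a placeholder; the refs mirror the `MachineSet` example, and a real generated `MachineDeployment` appears later in this thread):

```yaml
# Sketch only: a MachineDeployment manages MachineSets to give rolling updates
# for worker machines; the template section is the same shape as a MachineSet's.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: my-cluster-md-0        # hypothetical name
  namespace: default
spec:
  clusterName: my-cluster
  replicas: 3
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: my-cluster
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: my-cluster
    spec:
      clusterName: my-cluster
      version: v1.24.0
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: my-cluster-bootstrap
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
        kind: AWSMachineTemplate
        name: my-cluster-machine-infra
```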
After you have your YAML templates defined, apply them to your Kubernetes cluster using `kubectl`:

```sh
kubectl apply -f cluster.yaml
kubectl apply -f awscluster.yaml
kubectl apply -f machine.yaml
kubectl apply -f kubeadmcontrolplane.yaml
```
Note that individual resources (e.g., `Machine` or `MachineSet`) might have specific configurations, such as bootstrap scripts, specific machine types, or cloud-provider-specific fields; a hedged bootstrap sketch follows. More steps will be described later.
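As a hedged illustration of such a bootstrap configuration (a sketch only; the `preKubeadmCommands` content and the `cloud-provider` kubelet flag are hypothetical examples, not from the original):

```yaml
# Sketch: a KubeadmConfigTemplate carrying a custom bootstrap script for workers.
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: my-cluster-bootstrap
  namespace: default
spec:
  template:
    spec:
      # Commands run on each node before `kubeadm join` (hypothetical example).
      preKubeadmCommands:
        - echo "node bootstrap starting"
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            cloud-provider: external   # common for cloud providers; verify for your setup
```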
Useful resources:

- Kubernetes Cluster API Quick Start
- Kubernetes Cluster API Provider AWS
- Creating an EKS cluster
```sh
export AWS_ACCESS_KEY_ID="<KEY-ID>"
export AWS_SECRET_ACCESS_KEY="<ACCESS-KEY>"
export AWS_SESSION_TOKEN="<TOKEN>"
export AWS_REGION=<REGION>
export AWS_B64ENCODED_CREDENTIALS=$(./bin/clusterawsadm bootstrap credentials encode-as-profile)
export AWS_NODE_MACHINE_TYPE=<MACHINE-TYPE-SIZE>
export AWS_SSH_KEY_NAME=<SSH-KEY-NAME>
export KUBERNETES_VERSION=<VERSION>

clusterctl init --infrastructure aws
clusterctl generate cluster managed-test --flavor eks > capi-eks.yaml
```
CAPI definitions (example) w/o templates:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: managed-test
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta2
    kind: AWSManagedControlPlane
    name: managed-test-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSManagedCluster
    name: managed-test
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSManagedCluster
metadata:
  name: managed-test
  namespace: default
spec: {}
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: AWSManagedControlPlane
metadata:
  name: managed-test-control-plane
  namespace: default
spec:
  region: us-east-2
  sshKeyName: slysunkin
  version: 1.30.0
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: managed-test-md-0
  namespace: default
spec:
  clusterName: managed-test
  replicas: 0
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
          kind: EKSConfigTemplate
          name: managed-test-md-0
      clusterName: managed-test
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
        kind: AWSMachineTemplate
        name: managed-test-md-0
      version: 1.30.0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachineTemplate
metadata:
  name: managed-test-md-0
  namespace: default
spec:
  template:
    spec:
      iamInstanceProfile: nodes.cluster-api-provider-aws.sigs.k8s.io
      instanceType: t3.small
      sshKeyName: slysunkin
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: EKSConfigTemplate
metadata:
  name: managed-test-md-0
  namespace: default
spec:
  template: {}
```
Role definitions:

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: capa-eks-control-plane-system-capa-eks-control-plane-manager-role
  labels:
    cluster.x-k8s.io/provider: control-plane-aws-eks
    clusterctl.cluster.x-k8s.io: ''
rules:
  - verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
    apiGroups:
      - ''
    resources:
      - secrets
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - cluster.x-k8s.io
    resources:
      - clusters
      - clusters/status
      - machinedeployments
      - machinedeployments/status
  - verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
    apiGroups:
      - controlplane.cluster.x-k8s.io
    resources:
      - awsmanagedcontrolplanes
  - verbs:
      - get
      - patch
      - update
    apiGroups:
      - controlplane.cluster.x-k8s.io
    resources:
      - awsmanagedcontrolplanes/status
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - infrastructure.cluster.x-k8s.io
    resources:
      - awsmanagedclusters
      - awsmanagedclusters/status
      - awsmachinetemplates
      - awsmachinetemplates/status
```
To be continued...
@bnallapeta, I've added a couple of articles on CAPI provider development.
> [!IMPORTANT]
> The cornerstone of CAPI provider templates is the selection of the control plane. In the case of EKS (or GCP, for example) the original managed control plane should be retained (`AWSManagedControlPlane` for EKS, `GCPManagedControlPlane` for GCP). For AWS, Azure, etc., a K0s or K0smotron control plane should be used (a hedged sketch follows).
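As a sketch only of that control plane choice (assuming k0smotron's `K0sControlPlane` kind under `controlplane.cluster.x-k8s.io/v1beta1`, with hypothetical names; verify against the k0smotron docs):

```yaml
# Sketch: for AWS/Azure-style templates, the Cluster points at a K0s/K0smotron
# control plane instead of a managed one such as AWSManagedControlPlane.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-aws-cluster           # hypothetical name
  namespace: default
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: K0sControlPlane        # assumption: k0smotron's CAPI control plane kind
    name: my-aws-cluster-cp
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSCluster
    name: my-aws-cluster
```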
Generated YAML for EKS can be converted into a 2A template in a straightforward way:
```text
templates/cluster/aws-eks
├── Chart.yaml
├── templates
│   ├── _helpers.tpl
│   ├── awsmachinetemplate-worker.yaml
│   ├── awsmanagedcluster.yaml
│   ├── awsmanagedcontrolplane.yaml
│   ├── cluster.yaml
│   ├── eksconfigtemplate.yaml
│   └── machinedeployment.yaml
├── values.schema.json
└── values.yaml
```
After applying templated values across all of the cluster object definitions, the new template is pretty much ready; a hedged sketch of one templated file follows.
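For illustration, here is a sketch of what a templated `awsmanagedcontrolplane.yaml` in such a chart might look like (the helper name and the values keys `region`, `sshKeyName`, and `kubernetes.version` are hypothetical, not taken from the actual 2A chart):

```yaml
# Sketch: the generated AWSManagedControlPlane with Helm values substituted in.
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: AWSManagedControlPlane
metadata:
  name: {{ include "awsmanagedcontrolplane.name" . }}  # hypothetical helper from _helpers.tpl
spec:
  region: {{ .Values.region }}                # hypothetical values key
  sshKeyName: {{ .Values.sshKeyName }}        # hypothetical values key
  version: {{ .Values.kubernetes.version }}   # hypothetical values key
```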
To be continued...
Next step: create an instruction for adding provider templates to Project 2A.