siderolabs / cluster-api-bootstrap-provider-talos

A cluster-api bootstrap provider for deploying Talos clusters.
https://www.talos-systems.com
Mozilla Public License 2.0

Cluster API metadata propagation #172

Open galiev opened 1 year ago

galiev commented 1 year ago

Could you help me, please: should Cluster API metadata propagation work with cluster-api-bootstrap-provider-talos, or do I need to define node labels in the machine config?

versions used:

NAME                     NAMESPACE                       TYPE                     CURRENT VERSION   NEXT VERSION
bootstrap-kubeadm        capi-kubeadm-bootstrap-system   BootstrapProvider        v1.5.0            Already up to date
bootstrap-talos          cabpt-system                    BootstrapProvider        v0.6.0            Already up to date
control-plane-talos      cacppt-system                   ControlPlaneProvider     v0.5.1            Already up to date
cluster-api              capi-system                     CoreProvider             v1.5.0            Already up to date
infrastructure-hetzner   caph-system                     InfrastructureProvider   v1.0.0-beta.18    Already up to date

my MachineDeployment:

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: hil2-cluster
    nodepool: hil2-md-0
  name: hil2-capi-md-0
  namespace: hil2
spec:
  clusterName: hil2-cluster
  minReadySeconds: 0
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: hil2-cluster
      cluster.x-k8s.io/deployment-name: hil2-capi-md-0
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: hil2-cluster
        cluster.x-k8s.io/deployment-name: hil2-capi-md-0
        node-role.kubernetes.io/worker: nodepool-md0
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: TalosConfigTemplate
          name: hil2-capi-md-0
      clusterName: hil2-cluster
      failureDomain: hil
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: HCloudMachineTemplate
        name: hil2-capihcmt-md-0
      version: v1.27.2

result:

kubectl get nodes --show-labels
NAME                                STATUS   ROLES           AGE     VERSION   LABELS
hil2-capihcmt-control-plane-pdjpn   Ready    control-plane   3m36s   v1.27.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=hil2-capihcmt-control-plane-pdjpn,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=
hil2-capihcmt-md-0-ndm2b            Ready    <none>          3m39s   v1.27.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=hil2-capihcmt-md-0-ndm2b,kubernetes.io/os=linux

Unfortunately, the label node-role.kubernetes.io/worker=nodepool-md0 was not added to the worker node.

smira commented 1 year ago

Yes, at the moment node labels need to be defined as a config patch for the machine config. CABPT doesn't have any integration with CAPI metadata propagation, but it might be a nice feature.
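
For reference, a config patch like this can be added to the TalosConfigTemplate referenced by the MachineDeployment above. This is a minimal sketch, assuming Talos v1.2 or later (which introduced `machine.nodeLabels`); the names reuse the manifests from the question:

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: TalosConfigTemplate
metadata:
  name: hil2-capi-md-0
  namespace: hil2
spec:
  template:
    spec:
      generateType: join
      # JSON patch applied to the generated Talos machine config;
      # the resulting labels are applied to the Kubernetes Node.
      configPatches:
        - op: add
          path: /machine/nodeLabels
          value:
            nodepool: hil2-md-0
```

Note that this sketch uses a plain `nodepool` label rather than `node-role.kubernetes.io/worker`: the NodeRestriction admission plugin prevents worker nodes from setting labels in the `node-role.kubernetes.io` namespace themselves, so role-style labels generally have to be applied from the management side instead.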