k0sproject / k0smotron

k0smotron
https://docs.k0smotron.io/

how to use clusterapi provider to deploy multiple clusters #633

Closed — xinity closed this issue 5 days ago

xinity commented 1 week ago

hello,

I've tried to use k0smotron with CAPI to deploy multiple CAPD clusters. I've tried several configurations, but none of them worked, so I'm wondering: can I really use k0smotron + CAPI + CAPD to deploy multiple clusters at once, with the control planes running in pods?

makhov commented 1 week ago

Of course, you can. You can specify spec.service.apiPort and spec.service.konnectivityPort fields to change the default node ports: https://docs.k0smotron.io/stable/resource-reference/#k0smotroncontrolplanespecservice
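
In other words, each hosted control plane on the same management cluster needs its own pair of free NodePorts. A minimal sketch for a second control plane (the port numbers here are arbitrary free NodePorts, not required values):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: K0smotronControlPlane
metadata:
  name: second-docker-test-cp
spec:
  service:
    type: NodePort
    apiPort: 30773          # must not clash with the first cluster's apiPort
    konnectivityPort: 30232 # must not clash with its konnectivityPort
```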

A very long full example:

```yaml
# The first cluster:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: first-docker-test
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    serviceDomain: cluster.local
    services:
      cidrBlocks:
        - 10.128.0.0/12
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: K0smotronControlPlane
    name: first-docker-test-cp
    namespace: default
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: first-docker-test
    namespace: default
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: K0smotronControlPlane # This is the config for the controlplane
metadata:
  name: first-docker-test-cp
  namespace: default
spec:
  version: v1.27.2-k0s.0
  persistence:
    type: emptyDir
  service:
    type: NodePort
    apiPort: 30443
    konnectivityPort: 30132
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerCluster
metadata:
  name: first-docker-test
  namespace: default
  annotations:
    # This marks the base infra to be self-managed. The value of the
    # annotation is irrelevant, as long as there is a value.
    cluster.x-k8s.io/managed-by: k0smotron
spec: {} # More details of the DockerCluster can be set here
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: docker-test-md
  namespace: default
spec:
  clusterName: first-docker-test
  replicas: 1
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: first-docker-test
      pool: worker-pool-1
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: first-docker-test
        pool: worker-pool-1
    spec:
      clusterName: first-docker-test
      version: v1.27.2 # Docker Provider requires a version to be set (see https://hub.docker.com/r/kindest/node/tags)
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: K0sWorkerConfigTemplate
          name: first-docker-test-machine-config
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerMachineTemplate
        name: first-docker-test-mt
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
  name: first-docker-test-mt
  namespace: default
spec:
  template:
    spec: {} # More details of the DockerMachineTemplate can be set here
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: K0sWorkerConfigTemplate
metadata:
  name: first-docker-test-machine-config
spec:
  template:
    spec:
      version: v1.27.2+k0s.0
      # More details of the worker configuration can be set here
---
# The second cluster:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: second-docker-test
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    serviceDomain: cluster.local
    services:
      cidrBlocks:
        - 10.128.0.0/12
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: K0smotronControlPlane
    name: second-docker-test-cp
    namespace: default
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: second-docker-test
    namespace: default
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: K0smotronControlPlane # This is the config for the controlplane
metadata:
  name: second-docker-test-cp
  namespace: default
spec:
  version: v1.27.2-k0s.0
  persistence:
    type: emptyDir
  service:
    type: NodePort
    apiPort: 30773          # different node ports than the first cluster
    konnectivityPort: 30232
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerCluster
metadata:
  name: second-docker-test
  namespace: default
  annotations:
    cluster.x-k8s.io/managed-by: k0smotron
spec: {}
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: second-docker-test-md
  namespace: default
spec:
  clusterName: second-docker-test
  replicas: 1
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: second-docker-test
      pool: worker-pool-1
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: second-docker-test
        pool: worker-pool-1
    spec:
      clusterName: second-docker-test
      version: v1.27.2 # Docker Provider requires a version to be set (see https://hub.docker.com/r/kindest/node/tags)
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: K0sWorkerConfigTemplate
          name: second-docker-test-machine-config
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerMachineTemplate
        name: second-docker-test-mt
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
  name: second-docker-test-mt
  namespace: default
spec:
  template:
    spec: {}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: K0sWorkerConfigTemplate
metadata:
  name: second-docker-test-machine-config
spec:
  template:
    spec:
      version: v1.27.2+k0s.0
```
xinity commented 1 week ago

Understood. Because I hadn't configured the konnectivityPort, it was blocking me from having a second CP in pods :(

Just tried it out and it didn't work @makhov :( Update: kube-router keeps crashing because of a failed readiness probe :( I was looking to switch to Calico, but I haven't found in the documentation how to do it properly.
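
For reference, k0s itself selects the CNI via spec.network.provider in its ClusterConfig (kuberouter is the default; calico is the main alternative). A sketch of switching to Calico, assuming the control plane resource accepts an embedded k0s config via a k0sConfig field (check the k0smotron resource reference for the exact field name and shape):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: K0smotronControlPlane
metadata:
  name: first-docker-test-cp
spec:
  version: v1.27.2-k0s.0
  k0sConfig: # assumed field for embedding a k0s ClusterConfig
    apiVersion: k0s.k0sproject.io/v1beta1
    kind: ClusterConfig
    spec:
      network:
        provider: calico # replaces the default kuberouter CNI
```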

Might be a good idea to document that. What do you think @makhov? Should I send a PR?

makhov commented 1 week ago

Yeah, it makes a lot of sense. Feel free to send a PR.

xinity commented 1 week ago

Any hints about why kube-router keeps crashing, complaining about a failed readiness probe, @makhov?

makhov commented 5 days ago

Yeah, it looks like it's due to the bug fixed in #638. It should be fixed in k0smotron v1.0.1. Feel free to re-open the issue if the problem still exists.