hetznercloud / hcloud-cloud-controller-manager

Kubernetes cloud-controller-manager for Hetzner Cloud
Apache License 2.0
740 stars 118 forks

CCM installation with microk8s #664

Closed telemorphix closed 1 day ago

telemorphix commented 5 months ago

TL;DR

Hi. I'm currently installing CCM with microk8s. The installation steps don't match the current installation manual (e.g. kubeadm is not available). For a simplified Kubernetes installation based on Ubuntu (microK8s), an official manual would be very helpful, because I got stuck during the installation (error below).

Are there any plans to include a microk8s installation manual, or do you have such a manual available? Thanks.

Expected behavior

The installation of CCM with microk8s works fine so far.

Environment:

I followed these steps:

With the YAML file below, the service is created and I can see the new load balancer in the Hetzner Cloud panel, but the service reports an error:

Warning SyncLoadBalancerFailed 4s (x2 over 9s) service-controller Error syncing load balancer: failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.ReconcileHCLBTargets: providerID does not have one of the expected prefixes (hcloud://, hrobot://, hcloud://bm-):

I added --cloud-provider=external to the config files and restarted microk8s, but that did not work.

I did not configure anything to tolerate the uninitialized taint (yet, would not know where).

```yaml
apiVersion: v1
kind: Service
metadata:
  name: balancer-test-service
  annotations:
    load-balancer.hetzner.cloud/location: fsn1
spec:
  selector:
    app: test-apps
  ports:
```

apricote commented 5 months ago

Hey @telemorphix,

you need to add --cloud-provider=external to the kubelet from the start; otherwise the node will be initialized by the microk8s-internal "cloud provider" and end up with the wrong provider ID.

You can verify this by running kubectl get nodes -o=custom-columns=NAME:.metadata.name,PROVIDERID:.spec.providerID.
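To illustrate what that verification is checking, here is a minimal Python sketch of the prefix test behind the "providerID does not have one of the expected prefixes" error quoted above. The prefix list comes from the error message in this issue; the function name and the sample providerID values are hypothetical, not taken from the HCCM source.

```python
# Prefixes accepted by HCCM, per the error message in this issue.
EXPECTED_PREFIXES = ("hcloud://", "hrobot://", "hcloud://bm-")

def has_expected_prefix(provider_id: str) -> bool:
    """Return True if a node's .spec.providerID looks like one set by HCCM."""
    # str.startswith accepts a tuple and matches any of the prefixes.
    return provider_id.startswith(EXPECTED_PREFIXES)

# A node initialized with --cloud-provider=external and HCCM gets an
# hcloud:// providerID; one initialized by another provider does not.
print(has_expected_prefix("hcloud://12345678"))    # hypothetical server ID
print(has_expected_prefix("other://node-1"))       # hypothetical wrong value
```

If the PROVIDERID column from the kubectl command above doesn't start with one of these prefixes, the node was initialized before HCCM could claim it.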

Docs

We currently only document the required parameters for kubeadm, as it's the primary method supported by upstream Kubernetes. We are planning a rewrite of our docs and I'll make sure to make space for other Kubernetes distributions. We don't have the bandwidth to support and test every distribution though, so this will rely on the community providing the documentation for each distribution.

A quick look into the microk8s docs tells me that you can configure this through "Launch Configurations", where you can add --cloud-provider: external under extraKubeletArgs.
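Such a launch configuration might look like the sketch below. This is an assumption based on the microk8s launch-configurations docs, not something tested in this thread; verify the file location and version field against your microk8s release.

```yaml
# Sketch of a microk8s launch configuration enabling the external cloud
# provider. Assumed location: /var/snap/microk8s/common/.microk8s.yaml,
# created before microk8s starts for the first time so the kubelet never
# initializes the node with the internal provider.
---
version: 0.1.0
extraKubeletArgs:
  --cloud-provider: external
```

The important part is that the flag is present on first boot; adding it after the node has registered leaves the old providerID in place (see the verification command above).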

apricote commented 2 months ago

Hey @telemorphix,

were you able to get HCCM working with the suggested changes?

jtackaberry commented 1 month ago

@apricote can't speak for the OP, but I was able to get it working with microk8s. Thanks!

telemorphix commented 1 month ago

Yes, got it running with microK8s. Thanks @apricote. Sorry for the late response.

My Kafka cluster (3 nodes) has been running for the last 60 days without any issues, including the Hetzner load balancer, ingress, cert-manager, and the Hetzner CSI driver for persistent storage.

rustomax commented 1 month ago

This bugged me so much I actually wrote a blog article on how to do this end-to-end. Hope it will help someone.

@apricote thanks for the nifty command! Quite useful for getting to the interesting fields. Included in the post.

kubectl get nodes -o=custom-columns=NAME:.metadata.name,PROVIDERID:.spec.providerID

jtackaberry commented 1 month ago

This bugged me so much I actually wrote a blog article on how to do this end-to-end. Hope it will help someone.

Apart from the subject of this issue, which was an easy enough fix, the only real challenge I faced with a microk8s cluster on Hetzner was dual stack support. I see from your blog post you decided not to travel those waters. :)

Hetzner allocates a /64 per server, so the approach I ended up taking was to create an IPPool per node, with a nodeSelector that targets the specific node. Pods that land on a given node will be allocated a v6 address from that node's /64 by Calico's IPAM.
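A per-node pool along those lines might look like the following sketch. The CIDR, node name, and pool name are placeholders, and the exact apiVersion depends on whether you apply it with calicoctl (projectcalico.org/v3) or directly as a CRD (crd.projectcalico.org/v1); this is not the author's actual manifest.

```yaml
# Hypothetical Calico IPPool for one node, carved from that node's
# Hetzner-assigned /64 and pinned to it via nodeSelector.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: node-1-v6          # placeholder pool name
spec:
  cidr: 2001:db8:0:1::/64  # placeholder; use the node's real /64
  blockSize: 122           # Calico's default IPv6 block size
  natOutgoing: false
  nodeSelector: kubernetes.io/hostname == 'node-1'
```

One such pool per node gives Calico IPAM a node-local range to allocate pod addresses from, which is what makes the per-node /64 approach work.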

The main question beyond that was what to use for the IPv6 cluster CIDR, since K8s requires the per-node ranges to be within a contiguous prefix. Here I just ended up using the /48 from the node's assigned prefix, because Hetzner appears to allocate a /48 per region, and I don't intend to deploy nodes in the same cluster across regions.

Using the /48 for the cluster CIDR is a smell to be sure, but since IP allocation is taken care of by Calico IPAM, the only other place I've seen the v6 cluster CIDR used is by kube-proxy, where it manages iptables rules that apply only to egress traffic coming from the pods destined for v6 K8s services, so using the /48 there appears to be benign. (If anyone is aware of dragons lurking in this area, I'd appreciate the heads up.)
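The contiguity requirement described above can be sketched with Python's ipaddress module: every node's /64 must sit inside the single cluster CIDR. The prefixes here are illustrative documentation addresses (2001:db8::/32), not real Hetzner allocations.

```python
import ipaddress

# Hypothetical region-wide /48 used as the cluster CIDR, as in the
# approach described above.
cluster_cidr = ipaddress.ip_network("2001:db8:1::/48")

# Hypothetical per-node /64s, each as Hetzner would assign per server.
node_prefixes = [
    ipaddress.ip_network("2001:db8:1:a::/64"),
    ipaddress.ip_network("2001:db8:1:b::/64"),
]

# K8s requires all per-node pod ranges to fall inside one contiguous
# cluster CIDR; this checks that property for the chosen /48.
print(all(p.subnet_of(cluster_cidr) for p in node_prefixes))
```

This also shows why nodes from a different region (a different /48) would break the scheme: their /64s would no longer be subnets of the cluster CIDR.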

I have Ansible bootstrapping all this. It's fine as far as it goes, though I'm not a huge fan of Ansible (since Ansible is the worst possible option for config management, except for everything else).

rustomax commented 1 month ago

@jtackaberry yes your mileage seems to have varied from mine :) Thanks for sharing, this is good stuff to be aware of.

apricote commented 1 day ago

I will close the issue as it's working for everyone now.