Closed FischerLGLN closed 2 years ago
@FischerLGLN Thank you for your issue. Just to clarify: you mean an additional loadbalancer for ingress-nginx?
Right now the created loadbalancer is necessary because the ccm is not available at cluster-creation time. It's also only there to serve the kube-apiserver and has only the control-plane nodes as targets. Every additional loadbalancer you consume via a k8s Service is therefore managed by the cloud-controller-manager.
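In other words, for each additional loadbalancer you just create a Service of type `LoadBalancer` and the ccm provisions an hcloud loadbalancer for it. A minimal sketch (the name, selector, ports, and location value are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # hcloud-cloud-controller-manager annotation choosing the LB location
    load-balancer.hetzner.cloud/location: fsn1
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```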
@batistein Thanks for the quick response. Yes, the first control-plane loadbalancer forwards from 443 to 6443, therefore blocking port 443. That's why I need another one for ingress-nginx, or I have to let its 443 service listen on another port.
Following your link, is this the reason for deciding on Cilium?
The CNI plugin you use needs to support this k8s native functionality (Cilium does it, I don't know about Calico & WeaveNet), so basically you use the Hetzner Cloud Networks as the underlying networking stack.
Calico with eBPF works fine too, at least when not using the cloud-controller-manager.
helm repo add projectcalico https://projectcalico.docs.tigera.io/charts
helm install calico projectcalico/tigera-operator --version v3.22.0
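With the tigera-operator chart, switching Calico to the eBPF dataplane is a configuration step on top of the install above. A sketch, assuming Calico v3.22's operator API; the API-server host is a placeholder you must fill in for your cluster:

```yaml
# In eBPF mode Calico bypasses kube-proxy, so it needs to reach the
# kube-apiserver directly via this well-known ConfigMap.
kind: ConfigMap
apiVersion: v1
metadata:
  name: kubernetes-services-endpoint
  namespace: tigera-operator
data:
  KUBERNETES_SERVICE_HOST: "<api-server-host>"   # placeholder
  KUBERNETES_SERVICE_PORT: "6443"
---
# Switch the operator-managed installation to the eBPF dataplane.
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    linuxDataplane: BPF
```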
If I use cloud-controller-manager in contrast to terraform, I get the benefit that changes in Hetzner Console also affect the underlying Kubernetes, right?
@FischerLGLN you could change the port to anything you want by setting `spec.controlPlaneEndpoint.port` in the HetznerCluster. It's only important to know that you cannot update this port in a running cluster. Also FYI, you could use `host` to specify a DNS name, which needs to point to the created loadbalancer.
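As a sketch, the relevant part of a HetznerCluster manifest would look like this (the apiVersion and cluster name are assumptions; only the `controlPlaneEndpoint` fields come from the comment above):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: HetznerCluster
metadata:
  name: my-cluster
spec:
  controlPlaneEndpoint:
    host: ""     # optionally a DNS name pointing at the created loadbalancer
    port: 6443   # cannot be changed on a running cluster
```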
There is no decision to use only Cilium; you could use whatever CNI you want. In my experience Cilium is just the best option at the moment, as you can also run it kube-proxy-free, using only eBPF and no iptables. But of course that's only my opinion.
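A kube-proxy-free Cilium install is typically driven by Helm values along these lines (a sketch; the exact keys vary between Cilium chart versions, and the apiserver endpoint is a placeholder):

```yaml
# values.yaml for the Cilium Helm chart
kubeProxyReplacement: strict        # run fully kube-proxy free
tunnel: disabled                    # native routing instead of VXLAN overlay
ipv4NativeRoutingCIDR: 10.0.0.0/8   # example: the Hetzner Cloud Network CIDR
# With kube-proxy gone, Cilium must reach the kube-apiserver directly:
k8sServiceHost: <control-plane-endpoint>   # placeholder
k8sServicePort: 6443
```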
Whether to use the cloud-controller-manager is not really something we can decide, as it is mandatory for running Kubernetes on hcloud.
@batistein Ah okay, so I would have to power down the node and make the service changes then. About the DNS name: I did something like that on OpenStack with k3s, using tls-san on the master node pointing to my ports 80/443 loadbalancer. Yes, Cilium has the best performance (under higher load). Okay great, I am closing this issue now. Thank you for the technical exchange!
Hi, I tried it the next day and auto-provisioning of the loadbalancer for ingress-nginx worked! After installing the ccm in private-network mode, I had to annotate the nginx loadbalancer Service so that a cloud loadbalancer gets provisioned in a specific location, for example Falkenstein (fsn1).
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace --set 'controller.service.annotations.load-balancer\.hetzner\.cloud/location=fsn1'
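Instead of escaping the annotation on the command line, the same configuration can live in a Helm values file (a sketch; `use-private-ip` is an additional hcloud-ccm annotation that fits the private-network setup described above, not something from the original command):

```yaml
# values.yaml for the ingress-nginx chart
controller:
  service:
    annotations:
      load-balancer.hetzner.cloud/location: fsn1
      load-balancer.hetzner.cloud/use-private-ip: "true"
```

Then install with `helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --create-namespace -f values.yaml`.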
/kind feature
Describe the solution you'd like: Hi, I've been following this tutorial and have manually created a loadbalancer with two services pointing to ports 80 and 443 with SSL passthrough, because ingress-nginx should be terminating TLS.
Is it possible to create an additional loadbalancer alongside the already provisioned controlPlaneLoadBalancer?
Do I need a regular floating IP so that ingress-nginx can pick it up, or should I use something like MetalLB for multiple ones?
Thanks in advance!
Anything else you would like to add: I'll definitely try out your Tilt setup next (https://github.com/syself/cluster-api-provider-hetzner/blob/main/docs/developers/development.md#tilt-for-dev-in-caph) to be able to incorporate your changes sooner.
Environment:
- Kubernetes version (`kubectl version`): 1.23.3
- OS (`/etc/os-release`): fedora-35