Closed Syntax3rror404 closed 1 month ago
Got it, so the SAN is for the cluster endpoint and not for the machine's Talos endpoint.
Is KubePrism needed? Or is it only needed for Talos API endpoint HA?
Is the Kubernetes API reached via DNS for HA?
@Syntax3rror404
My views:
I suggest you look at setting up your cluster with a floating VIP, which is controlled with etcd elections: https://www.talos.dev/v1.7/talos-guides/network/vip/
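For reference, here's a minimal machine-config patch sketch for the VIP, per the linked guide. The interface name (eth0) and the VIP address (192.168.35.50) are assumptions; replace them with your own:

```yaml
# Machine config patch: assign a shared VIP to the control plane nodes.
# The VIP floats between control plane nodes via etcd elections.
machine:
  network:
    interfaces:
      - interface: eth0        # assumption: your uplink interface name
        dhcp: true
        vip:
          ip: 192.168.35.50    # assumption: a free IP in your control plane subnet
```

Apply the same patch to all control plane nodes; only the current etcd-elected holder answers on the VIP.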
You should use KubePrism, yes; it's enabled by default and I wouldn't disable it if I were you. It's recommended for Cilium in particular. See https://www.talos.dev/v1.7/kubernetes-guides/configuration/kubeprism/
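Since it's on by default on recent Talos versions you shouldn't need to touch it, but for completeness, this is the machine-config knob (port 7445 is the default KubePrism port):

```yaml
# KubePrism: a node-local load-balanced endpoint for the Kubernetes API,
# listening on localhost:7445 on every node.
machine:
  features:
    kubePrism:
      enabled: true
      port: 7445
```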
For talosctl, you don't need to worry about load balancing. Your talosconfig file will contain all your control plane node IPs, and I quote from the documentation: "The talosctl tool provides built-in client-side load-balancing across control plane nodes, so usually you do not need to configure a load balancer for the Talos API."
Steps:
export TALOSCONFIG="_out/talosconfig"
talosctl config endpoint $CONTROL_PLANE_IP
talosctl config node $CONTROL_PLANE_IP
talosctl bootstrap
and wait a little for booting, then:
talosctl kubeconfig .
and then cp kubeconfig ~/.kube/config
I recommend Method 1, because Cilium has moved from supporting both a CLI-based installer and Helm to Helm-only installs, with the CLI now doing a Helm install under the hood anyway. Also, note that with methods 2, 3, or 4, Cilium doesn't show up as an installed Helm package, which would make upgrades annoying to handle. Method 5 is new to me and might work well if you're on a Cilium version where the CLI installer uses Helm underneath.
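As a starting point for Method 1, here's a values.yaml sketch based on the linked Talos Cilium guide, assuming KubePrism on its default port 7445; double-check the guide against your Cilium version before using it:

```yaml
# Helm values for Cilium on Talos with kube-proxy replacement.
ipam:
  mode: kubernetes
kubeProxyReplacement: true
# Talos restricts pod capabilities, so grant Cilium's pods exactly what they need.
securityContext:
  capabilities:
    ciliumAgent:
      - CHOWN
      - KILL
      - NET_ADMIN
      - NET_RAW
      - IPC_LOCK
      - SYS_ADMIN
      - SYS_RESOURCE
      - DAC_OVERRIDE
      - FOWNER
      - SETGID
      - SETUID
    cleanCiliumState:
      - NET_ADMIN
      - SYS_ADMIN
      - SYS_RESOURCE
# Talos mounts cgroupfs itself; don't let Cilium re-mount it.
cgroup:
  autoMount:
    enabled: false
  hostRoot: /sys/fs/cgroup
# Point Cilium at the node-local KubePrism endpoint instead of the external API.
k8sServiceHost: localhost
k8sServicePort: 7445
```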
@ccben87
Wow, ok, this is extremely cool. Got it working now with the VIP.
I use Rancher on top of Talos. Rancher is not that fast at updating to the newest Kubernetes version.
Do you have any recommendations about updating the OS and Kubernetes?
Do you have any recommendations about updating the OS and Kubernetes?
I would recommend you update Talos when a new stable version is out. Maybe don't update to a new major version right away; wait for a few minor releases before moving to it.
With regards to Kubernetes, that's a more complicated question and will depend on your workloads and on whether there are features or fixes you want from the newer versions. You should test your workloads in non-prod (which should be as identical to prod as you can make it), and if they work there, feel free to upgrade as new releases come. Otherwise, you'll need to pay attention to whether your workloads will work with the new Kubernetes version. A good example of why you should be cautious with your workloads is Postgres, see https://www.linkedin.com/pulse/kubernetes-silent-pod-killer-invisible-oom-kill-containers-secondary-gccle where you may run into problems if your Postgres workload gets OOM-killed because of the limits you set for it in K8s.
Usually it's a good idea to adopt an N-1 update posture for most software.
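The upgrade flow itself can be sketched like this. The installer image tag and node IPs are assumptions; and since talosctl has to run against your live cluster, the commands are only echoed here so the sequence is visible:

```shell
# Sketch of the upgrade sequence (commands echoed, not executed).
# Image tag and node IPs are assumptions; substitute your own.
TALOS_IMAGE="ghcr.io/siderolabs/installer:v1.7.6"
NODES="192.168.35.60 192.168.35.61 192.168.35.62"

# Talos OS: upgrade one node at a time and let it come back Ready
# before moving to the next.
for node in $NODES; do
  echo "talosctl upgrade --nodes $node --image $TALOS_IMAGE"
done

# Kubernetes itself is upgraded once for the whole cluster.
echo "talosctl upgrade-k8s --to 1.30.0"
```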
Bug Report
I'm coming from Rancher with the RKE2 engine. This is my first Talos installation and nothing works. I've been working with Kubernetes in an enterprise environment for more than 5 years. So what am I missing? Or is this a bug?
I basically want a simple HA Kubernetes setup without kube-proxy, using Cilium's kube-proxy replacement instead. This should be possible: https://www.talos.dev/v1.7/kubernetes-guides/network/deploying-cilium/
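Per that guide, a kube-proxy-free Cilium setup needs the machine config to disable both the default CNI and kube-proxy. A minimal sketch of what the patch.yaml referenced in the gen command would contain:

```yaml
# Machine config patch: no built-in CNI, no kube-proxy.
# Cilium (installed afterwards) provides both.
cluster:
  network:
    cni:
      name: none
  proxy:
    disabled: true
```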
Description
Ok, back to basics. I have a DNS entry pointing to the 3 master nodes. These master nodes should be the control plane with the API server.
So the gen command should be:
talosctl gen config talos https://mgm.talos-ha.lab:6443 --config-patch "@patch.yaml"
The next step is to push the configuration to the init master node.
talosctl apply-config --insecure --nodes 192.168.35.60 --file controlplane.yaml
The master node does not become ready. But this should be ok, because Cilium first needs to be installed in the cluster for CoreDNS to get up and running.
Then I bootstrap with the HA endpoint:
Interesting, but the machine config from the control plane has a SAN.
Ok... then with the IP of the first node?
talosctl bootstrap --nodes 192.168.35.60
Oh, exit code 0, ok, cool.
Ok, Talos, can I now please have the kubeconfig to install Cilium?
Hmm... ok, and again with the first control plane IP?
Ok. So I have no idea what this is. Is this a bug?
I don't have a kubeconfig or anything. The cluster just hangs in the air.
I tried it several times, again and again, with the same result.
And I have another question: do I need KubePrism == true? I want to use the Cilium L4 load balancer.
In this example, KubePrism is true.
Dashboard
Environment