vitobotta / hetzner-k3s

The easiest and fastest way to create and manage Kubernetes clusters in Hetzner Cloud using the lightweight distribution k3s by Rancher.

2.0.3 - Unhandled exception (timeout) #420

Closed by ThePlay3r 2 months ago

ThePlay3r commented 2 months ago

Hey, I'm trying to upgrade from 1.1.5 to the new 2.0.3.

I tried to create a simple cluster in a new Hetzner project, but the create command gets stuck and throws an unhandled exception.

Config:

hetzner_token: <>
cluster_name: c5r-02-eu-central
kubeconfig_path: "./kubeconfig"
k3s_version: v1.29.0+k3s1

networking:
  ssh:
    port: 22
    use_agent: false # set to true if your key has a passphrase
    public_key_path: "~/.ssh/id_ed25519.pub"
    private_key_path: "~/.ssh/id_ed25519"
  allowed_networks:
    ssh:
      - 0.0.0.0/0
    api:
      - 0.0.0.0/0
  public_network:
    ipv4: true
    ipv6: true
  private_network:
    enabled: true
    subnet: 10.0.0.0/16
    existing_network_name: ""
  cni:
    enabled: true
    encryption: false
    mode: flannel
  # cluster_cidr: 10.244.0.0/16 # optional: a custom IPv4/IPv6 network CIDR to use for pod IPs
  # service_cidr: 10.43.0.0/16 # optional: a custom IPv4/IPv6 network CIDR to use for service IPs. Warning, if you change this, you should also change cluster_dns!
  # cluster_dns: 10.43.0.10 # optional: IPv4 Cluster IP for coredns service. Needs to be an address from the service_cidr range

# manifests:
#   cloud_controller_manager_manifest_url: "https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/download/v1.20.0/ccm-networks.yaml"
#   csi_driver_manifest_url: "https://raw.githubusercontent.com/hetznercloud/csi-driver/v2.8.0/deploy/kubernetes/hcloud-csi.yml"
#   system_upgrade_controller_deployment_manifest_url: "https://github.com/rancher/system-upgrade-controller/releases/download/v0.13.4/system-upgrade-controller.yaml"
#   system_upgrade_controller_crd_manifest_url: "https://github.com/rancher/system-upgrade-controller/releases/download/v0.13.4/crd.yaml"
#   cluster_autoscaler_manifest_url: "https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/hetzner/examples/cluster-autoscaler-run-on-master.yaml"

datastore:
  mode: etcd # etcd (default) or external
  external_datastore_endpoint: postgres://....

schedule_workloads_on_masters: false

# image: rocky-9 # optional: default is ubuntu-22.04
# autoscaling_image: 103908130 # optional, defaults to the `image` setting
# snapshot_os: microos # optional: specifies the OS type when using a custom snapshot

masters_pool:
  instance_type: cpx31
  instance_count: 3
  location: fsn1

worker_node_pools:
  - name: small-dv-fsn1
    instance_type: ccx23
    instance_count: 2
    location: fsn1
  - name: small-dv-nbg1
    instance_type: ccx23
    instance_count: 2
    location: nbg1
    # image: debian-11
    # labels:
    #   - key: purpose
    #     value: blah
    # taints:
    #   - key: something
    #     value: value1:NoSchedule
#  - name: medium-autoscaled
#    instance_type: cpx31
#    instance_count: 2
#    location: fsn1
#    autoscaling:
#      enabled: true
#      min_instances: 0
#      max_instances: 3

embedded_registry_mirror:
  enabled: true

# additional_packages:
# - somepackage

# post_create_commands:
# - apt update
# - apt upgrade -y
# - apt autoremove -y

# kube_api_server_args:
# - arg1
# - ...
# kube_scheduler_args:
# - arg1
# - ...
# kube_controller_manager_args:
# - arg1
# - ...
# kube_cloud_controller_manager_args:
# - arg1
# - ...
# kubelet_args:
# - arg1
# - ...
# kube_proxy_args:
# - arg1
# - ...
# api_server_hostname: k8s.example.com # optional: DNS for the k8s API LoadBalancer. After the script has run, create a DNS record with the address of the API LoadBalancer.

Logs:

[Configuration] Validating configuration...
[Configuration] ...configuration seems valid.
[Private Network] Private network already exists, skipping create
[SSH key] Creating SSH key...
[SSH key] ...SSH key created
[Placement groups] Deleting unused placement group c5r-02-eu-central-masters...
[Placement groups] ...placement group c5r-02-eu-central-masters deleted
[Placement groups] Deleting unused placement group c5r-02-eu-central-small-dv-fsn1-2...
[Placement groups] ...placement group c5r-02-eu-central-small-dv-fsn1-2 deleted
[Placement groups] Deleting unused placement group c5r-02-eu-central-small-dv-nbg1-2...
[Placement groups] ...placement group c5r-02-eu-central-small-dv-nbg1-2 deleted
[Placement groups] Creating placement group c5r-02-eu-central-masters...
[Placement groups] ...placement group c5r-02-eu-central-masters created
[Placement groups] Creating placement group c5r-02-eu-central-small-dv-nbg1-2...
[Placement groups] Creating placement group c5r-02-eu-central-small-dv-fsn1-2...
[Placement groups] ...placement group c5r-02-eu-central-small-dv-fsn1-2 created
[Placement groups] ...placement group c5r-02-eu-central-small-dv-nbg1-2 created
[Instance c5r-02-eu-central-master1] Creating instance c5r-02-eu-central-master1 (attempt 1)...
[Instance c5r-02-eu-central-master2] Creating instance c5r-02-eu-central-master2 (attempt 1)...
[Instance c5r-02-eu-central-master3] Creating instance c5r-02-eu-central-master3 (attempt 1)...
[Instance c5r-02-eu-central-master1] Instance status: off
[Instance c5r-02-eu-central-master1] Powering on instance (attempt 1)
[Instance c5r-02-eu-central-master1] Waiting for instance to be powered on...
[Instance c5r-02-eu-central-master2] Instance status: off
[Instance c5r-02-eu-central-master2] Powering on instance (attempt 1)
[Instance c5r-02-eu-central-master2] Waiting for instance to be powered on...
[Instance c5r-02-eu-central-master3] Instance status: off
[Instance c5r-02-eu-central-master3] Powering on instance (attempt 1)
[Instance c5r-02-eu-central-master3] Waiting for instance to be powered on...
[Instance c5r-02-eu-central-master1] Instance status: running
[Instance c5r-02-eu-central-master2] Instance status: running
[Instance c5r-02-eu-central-master3] Instance status: running
[Instance c5r-02-eu-central-master1] Waiting for successful ssh connectivity with instance c5r-02-eu-central-master1...
[Instance c5r-02-eu-central-master2] Waiting for successful ssh connectivity with instance c5r-02-eu-central-master2...
[Instance c5r-02-eu-central-master3] Waiting for successful ssh connectivity with instance c5r-02-eu-central-master3...
[Instance c5r-02-eu-central-master1] ...instance c5r-02-eu-central-master1 is now up.
[Instance c5r-02-eu-central-master1] ...instance c5r-02-eu-central-master1 created
[Instance c5r-02-eu-central-master2] ...instance c5r-02-eu-central-master2 is now up.
[Instance c5r-02-eu-central-master2] ...instance c5r-02-eu-central-master2 created
[Instance c5r-02-eu-central-master3] ...instance c5r-02-eu-central-master3 is now up.
[Instance c5r-02-eu-central-master3] ...instance c5r-02-eu-central-master3 created
[Firewall] Updating firewall...
[Firewall] ...firewall updated
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker2] Creating instance c5r-02-eu-central-pool-small-dv-fsn1-worker2 (attempt 1)...
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker1] Creating instance c5r-02-eu-central-pool-small-dv-nbg1-worker1 (attempt 1)...
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker1] Creating instance c5r-02-eu-central-pool-small-dv-fsn1-worker1 (attempt 1)...
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker2] Creating instance c5r-02-eu-central-pool-small-dv-nbg1-worker2 (attempt 1)...
[Instance c5r-02-eu-central-master1] 🕒 Awaiting cloud config (may take a minute...)
[Instance c5r-02-eu-central-master1] Cloud init finished: 21.84 - Thu, 22 Aug 2024 19:20:57 +0000 - v. 24.1.3-0ubuntu3.3
[Instance c5r-02-eu-central-master1] [INFO]  Using v1.29.0+k3s1 as release
[Instance c5r-02-eu-central-master1] [INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.29.0+k3s1/sha256sum-amd64.txt
[Instance c5r-02-eu-central-master1] [INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.29.0+k3s1/k3s
[Instance c5r-02-eu-central-master1] [INFO]  Verifying binary download
[Instance c5r-02-eu-central-master1] [INFO]  Installing k3s to /usr/local/bin/k3s
[Instance c5r-02-eu-central-master1] [INFO]  Skipping installation of SELinux RPM
[Instance c5r-02-eu-central-master1] [INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[Instance c5r-02-eu-central-master1] [INFO]  Creating /usr/local/bin/crictl symlink to k3s
[Instance c5r-02-eu-central-master1] [INFO]  Creating /usr/local/bin/ctr symlink to k3s
[Instance c5r-02-eu-central-master1] [INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[Instance c5r-02-eu-central-master1] [INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[Instance c5r-02-eu-central-master1] [INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[Instance c5r-02-eu-central-master1] [INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[Instance c5r-02-eu-central-master1] [INFO]  systemd: Enabling k3s unit
[Instance c5r-02-eu-central-master1] [INFO]  systemd: Starting k3s
[Instance c5r-02-eu-central-master1] Waiting for the control plane to be ready...
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker1] Instance status: off
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker1] Powering on instance (attempt 1)
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker1] Instance status: off
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker1] Powering on instance (attempt 1)
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker1] Waiting for instance to be powered on...
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker1] Waiting for instance to be powered on...
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker2] Instance status: off
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker2] Powering on instance (attempt 1)
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker2] Waiting for instance to be powered on...
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker2] Instance status: off
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker2] Powering on instance (attempt 1)
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker2] Waiting for instance to be powered on...
[Control plane] Generating the kubeconfig file to /home/pljr/hetzner-k3s/c5r-02-eu-central/kubeconfig...
error: no context exists with the name: "c5r-02-eu-central-master1"
[Control plane] ...kubeconfig file generated as /home/pljr/hetzner-k3s/c5r-02-eu-central/kubeconfig.
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker1] Instance status: running
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker1] Instance status: running
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker2] Instance status: running
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker2] Instance status: running
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker1] Waiting for successful ssh connectivity with instance c5r-02-eu-central-pool-small-dv-nbg1-worker1...
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker1] Waiting for successful ssh connectivity with instance c5r-02-eu-central-pool-small-dv-fsn1-worker1...
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker2] Waiting for successful ssh connectivity with instance c5r-02-eu-central-pool-small-dv-fsn1-worker2...
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker2] Waiting for successful ssh connectivity with instance c5r-02-eu-central-pool-small-dv-nbg1-worker2...
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker1] ...instance c5r-02-eu-central-pool-small-dv-fsn1-worker1 is now up.
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker1] ...instance c5r-02-eu-central-pool-small-dv-fsn1-worker1 created
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker2] ...instance c5r-02-eu-central-pool-small-dv-fsn1-worker2 is now up.
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker2] ...instance c5r-02-eu-central-pool-small-dv-fsn1-worker2 created
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker2] ...instance c5r-02-eu-central-pool-small-dv-nbg1-worker2 is now up.
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker2] ...instance c5r-02-eu-central-pool-small-dv-nbg1-worker2 created
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker1] ...instance c5r-02-eu-central-pool-small-dv-nbg1-worker1 is now up.
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker1] ...instance c5r-02-eu-central-pool-small-dv-nbg1-worker1 created
Unhandled exception in spawn: timeout after 00:00:30 (Tasker::Timeout)
  from /usr/lib/crystal/core/channel.cr:453:10 in 'timeout'
  from /home/runner/work/hetzner-k3s/hetzner-k3s/src/kubernetes/installer.cr:124:7 in 'run'
  from /usr/lib/crystal/core/fiber.cr:143:11 in 'run'
  from ???
vitobotta commented 2 months ago

Hi, did it generate a kubeconfig file? If yes what's the content? (Remember to remove the tokens).

Have you also tried re-running the create command in case it was some temporary API or network glitch?

ThePlay3r commented 2 months ago

I've tried running the create command multiple times (deleting the kubeconfig and all servers with each attempt).

The kubeconfig did get generated; this is the exact content:

apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
ThePlay3r commented 2 months ago

I've also tried running the command again without deleting the servers: same error, same kubeconfig.

[Configuration] Validating configuration...
[Configuration] ...configuration seems valid.
[Private Network] Private network already exists, skipping create
[SSH key] SSH key already exists, skipping create
[Placement groups] Creating placement group c5r-02-eu-central-small-dv-fsn1-4...
[Placement groups] Creating placement group c5r-02-eu-central-small-dv-nbg1-4...
[Placement groups] ...placement group c5r-02-eu-central-small-dv-fsn1-4 created
[Placement groups] ...placement group c5r-02-eu-central-small-dv-nbg1-4 created
[Instance c5r-02-eu-central-master3] Instance c5r-02-eu-central-master3 already exists, skipping create
[Instance c5r-02-eu-central-master1] Instance c5r-02-eu-central-master1 already exists, skipping create
[Instance c5r-02-eu-central-master2] Instance c5r-02-eu-central-master2 already exists, skipping create
[Instance c5r-02-eu-central-master3] Instance status: running
[Instance c5r-02-eu-central-master1] Instance status: running
[Instance c5r-02-eu-central-master2] Instance status: running
[Instance c5r-02-eu-central-master3] Waiting for successful ssh connectivity with instance c5r-02-eu-central-master3...
[Instance c5r-02-eu-central-master1] Waiting for successful ssh connectivity with instance c5r-02-eu-central-master1...
[Instance c5r-02-eu-central-master2] Waiting for successful ssh connectivity with instance c5r-02-eu-central-master2...
[Instance c5r-02-eu-central-master3] ...instance c5r-02-eu-central-master3 is now up.
[Instance c5r-02-eu-central-master2] ...instance c5r-02-eu-central-master2 is now up.
[Instance c5r-02-eu-central-master1] ...instance c5r-02-eu-central-master1 is now up.
[Firewall] Updating firewall...
[Firewall] ...firewall updated
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker2] Instance c5r-02-eu-central-pool-small-dv-fsn1-worker2 already exists, skipping create
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker1] Instance c5r-02-eu-central-pool-small-dv-fsn1-worker1 already exists, skipping create
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker2] Instance c5r-02-eu-central-pool-small-dv-nbg1-worker2 already exists, skipping create
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker1] Instance c5r-02-eu-central-pool-small-dv-nbg1-worker1 already exists, skipping create
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker2] Instance status: running
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker1] Instance status: running
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker2] Instance status: running
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker1] Instance status: running
[Instance c5r-02-eu-central-master3] Cloud init finished: 23.74 - Thu, 22 Aug 2024 22:16:51 +0000 - v. 24.1.3-0ubuntu3.3
[Instance c5r-02-eu-central-master3] [INFO]  Using v1.29.0+k3s1 as release
[Instance c5r-02-eu-central-master3] [INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.29.0+k3s1/sha256sum-amd64.txt
[Instance c5r-02-eu-central-master3] [INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.29.0+k3s1/k3s
[Instance c5r-02-eu-central-master3] [INFO]  Verifying binary download
[Instance c5r-02-eu-central-master3] [INFO]  Installing k3s to /usr/local/bin/k3s
[Instance c5r-02-eu-central-master3] [INFO]  Skipping installation of SELinux RPM
[Instance c5r-02-eu-central-master3] [INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[Instance c5r-02-eu-central-master3] [INFO]  Creating /usr/local/bin/crictl symlink to k3s
[Instance c5r-02-eu-central-master3] [INFO]  Creating /usr/local/bin/ctr symlink to k3s
[Instance c5r-02-eu-central-master3] [INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[Instance c5r-02-eu-central-master3] [INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[Instance c5r-02-eu-central-master3] [INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[Instance c5r-02-eu-central-master3] [INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[Instance c5r-02-eu-central-master3] [INFO]  systemd: Enabling k3s unit
[Instance c5r-02-eu-central-master3] [INFO]  systemd: Starting k3s
[Instance c5r-02-eu-central-master3] Waiting for the control plane to be ready...
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker2] Waiting for successful ssh connectivity with instance c5r-02-eu-central-pool-small-dv-fsn1-worker2...
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker1] Waiting for successful ssh connectivity with instance c5r-02-eu-central-pool-small-dv-fsn1-worker1...
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker2] Waiting for successful ssh connectivity with instance c5r-02-eu-central-pool-small-dv-nbg1-worker2...
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker1] Waiting for successful ssh connectivity with instance c5r-02-eu-central-pool-small-dv-nbg1-worker1...
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker1] ...instance c5r-02-eu-central-pool-small-dv-fsn1-worker1 is now up.
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker2] ...instance c5r-02-eu-central-pool-small-dv-fsn1-worker2 is now up.
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker1] ...instance c5r-02-eu-central-pool-small-dv-nbg1-worker1 is now up.
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker2] ...instance c5r-02-eu-central-pool-small-dv-nbg1-worker2 is now up.
[Control plane] Generating the kubeconfig file to /home/pljr/hetzner-k3s/c5r-02-eu-central/kubeconfig...
error: no context exists with the name: "c5r-02-eu-central-master3"
[Control plane] ...kubeconfig file generated as /home/pljr/hetzner-k3s/c5r-02-eu-central/kubeconfig.
Unhandled exception in spawn: timeout after 00:00:30 (Tasker::Timeout)
  from /usr/lib/crystal/core/channel.cr:453:10 in 'timeout'
  from /home/runner/work/hetzner-k3s/hetzner-k3s/src/kubernetes/installer.cr:124:7 in 'run'
  from /usr/lib/crystal/core/fiber.cr:143:11 in 'run'
  from ???
vitobotta commented 2 months ago

I think the problem is that the k3s version you are using doesn't support the embedded registry mirror: https://docs.k3s.io/installation/registry-mirror

Try upgrading k3s first. It's on my list to add a validation for this since it can be confusing :)
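
For example, against the config you posted, either of these changes should get you past the timeout (a sketch only; check hetzner-k3s releases for the exact versions available to you):

# Option 1: move to a k3s release that includes the embedded registry mirror
k3s_version: v1.29.7+k3s1

# Option 2: stay on v1.29.0+k3s1 and turn the mirror off
embedded_registry_mirror:
  enabled: false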

ThePlay3r commented 2 months ago

I'm using the latest version that I can; I originally tried with 1.30.

Output of hetzner-k3s releases (screenshot attached)

vitobotta commented 2 months ago

> I'm using the latest version that I can; I originally tried with 1.30.
>
> Output of hetzner-k3s releases (screenshot attached)

Can you run rm /tmp/k3s-releases.yaml and then run hetzner-k3s releases again? The releases are cached to prevent issues with GitHub rate limiting. You should see that 1.29 is available up to v1.29.7+k3s1.
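
In other words, on the machine where you run hetzner-k3s:

rm /tmp/k3s-releases.yaml
hetzner-k3s releases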

ThePlay3r commented 2 months ago

> I'm using the latest version that I can; I originally tried with 1.30. Output of hetzner-k3s releases (screenshot attached)
>
> Can you run rm /tmp/k3s-releases.yaml and then run hetzner-k3s releases again? The releases are cached to prevent issues with GitHub rate limiting. You should see that 1.29 is available up to v1.29.7+k3s1.

Thanks, this worked and the command succeeded. However, in the kubeconfig I see 3 clusters (one per master node). Is that normal? I would expect only 1 cluster to be created, as was the case in 1.1.5.

vitobotta commented 2 months ago

Yep, it's normal :)

In 1.1.5, when you create an HA cluster, a load balancer is created for the Kubernetes API, and that load balancer distributes requests to the masters in round-robin fashion. There were two issues with that approach though:

  1. The load balancer was an additional cost. I was surprised to see how many people were asking me here or on other channels to remove that requirement to save money; but, more importantly:
  2. Load balancers in Hetzner Cloud are still not covered by firewalls, so even if you restricted access to the API to specific networks, that restriction would only work on single-master clusters without a load balancer, not on multi-master clusters that go through one.

In 2.x, instead of creating a load balancer, we generate a composite kubeconfig file with one context per master, so you can access the API directly on any master by switching from one context to another (see the sketch after the list below). This:

  1. Eliminates the requirement for the load balancer, so it's a bit cheaper; and
  2. Since the API is accessed directly on the masters, any restrictions applied via firewall (by specifying the allowed networks in the cluster config file) work even with HA clusters, improving security.
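
The generated file is just a standard kubeconfig with one cluster/context/user entry per master; for a 3-master cluster like yours it looks roughly like this (addresses and credentials elided, and the exact entry names may differ slightly):

apiVersion: v1
kind: Config
clusters:
- name: c5r-02-eu-central-master1
  cluster:
    server: https://<master1 address>:6443
    certificate-authority-data: <elided>
- name: c5r-02-eu-central-master2
  # ...same structure for master2 and master3...
contexts:
- name: c5r-02-eu-central-master1
  context:
    cluster: c5r-02-eu-central-master1
    user: c5r-02-eu-central-master1
- name: c5r-02-eu-central-master2
  # ...
users:
- name: c5r-02-eu-central-master1
  user:
    client-certificate-data: <elided>
    client-key-data: <elided>
current-context: c5r-02-eu-central-master1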

This means that if you have upgraded an HA 1.x cluster to 2.x, you can safely delete the load balancer that 1.x created for the API (just make sure you delete the right load balancer, of course).

Hope this helps. I guess we can close this issue now since it's resolved?

ThePlay3r commented 2 months ago

Thanks for the explanation.

This concept is new to me and a bit confusing, so I guess I'll have to do a bit more research into it, mainly because I don't understand how I can have "3 clusters" that act as 1.

Anyway, that's no longer related to my issue - that was resolved.

Thanks!

vitobotta commented 2 months ago

It's not 3 clusters, it's 3 contexts referring to the same cluster, so that you can access the cluster through any master. You can ignore the extra contexts and use the kubeconfig as usual. The multiple contexts are only useful when, for example, the master you are currently using (the default is set automatically by the tool) is having problems or is down: they let you connect to the cluster through another master by simply switching context. It's as simple as that :)
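
For example, assuming the contexts are named after the masters as the log messages above suggest, switching to another master when the current one is unreachable is just:

kubectl config get-contexts --kubeconfig ./kubeconfig
kubectl config use-context c5r-02-eu-central-master2 --kubeconfig ./kubeconfig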