ThePlay3r closed this issue 2 months ago.
Hi, did it generate a kubeconfig file? If yes, what's the content? (Remember to remove the tokens.)
Have you also tried re-running the create command in case it was some temporary API or network glitch?
I've tried running the create command multiple times (deleting the kubeconfig and all servers with each attempt).
The kubeconfig did get generated, this is the exact content:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
I've also tried to run the command again without deleting the servers, same error, same kubeconfig.
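For reference, each attempt boils down to something like the following sketch (assuming the usual create/delete subcommands with a --config flag; cluster_config.yaml is an illustrative file name, not the actual one, and the kubeconfig path is the one from the logs below):

rm /home/pljr/hetzner-k3s/c5r-02-eu-central/kubeconfig   # remove the previously generated kubeconfig
hetzner-k3s delete --config cluster_config.yaml          # tear down the servers from the previous attempt
hetzner-k3s create --config cluster_config.yaml          # re-run the create command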
[Configuration] Validating configuration...
[Configuration] ...configuration seems valid.
[Private Network] Private network already exists, skipping create
[SSH key] SSH key already exists, skipping create
[Placement groups] Creating placement group c5r-02-eu-central-small-dv-fsn1-4...
[Placement groups] Creating placement group c5r-02-eu-central-small-dv-nbg1-4...
[Placement groups] ...placement group c5r-02-eu-central-small-dv-fsn1-4 created
[Placement groups] ...placement group c5r-02-eu-central-small-dv-nbg1-4 created
[Instance c5r-02-eu-central-master3] Instance c5r-02-eu-central-master3 already exists, skipping create
[Instance c5r-02-eu-central-master1] Instance c5r-02-eu-central-master1 already exists, skipping create
[Instance c5r-02-eu-central-master2] Instance c5r-02-eu-central-master2 already exists, skipping create
[Instance c5r-02-eu-central-master3] Instance status: running
[Instance c5r-02-eu-central-master1] Instance status: running
[Instance c5r-02-eu-central-master2] Instance status: running
[Instance c5r-02-eu-central-master3] Waiting for successful ssh connectivity with instance c5r-02-eu-central-master3...
[Instance c5r-02-eu-central-master1] Waiting for successful ssh connectivity with instance c5r-02-eu-central-master1...
[Instance c5r-02-eu-central-master2] Waiting for successful ssh connectivity with instance c5r-02-eu-central-master2...
[Instance c5r-02-eu-central-master3] ...instance c5r-02-eu-central-master3 is now up.
[Instance c5r-02-eu-central-master2] ...instance c5r-02-eu-central-master2 is now up.
[Instance c5r-02-eu-central-master1] ...instance c5r-02-eu-central-master1 is now up.
[Firewall] Updating firewall...
[Firewall] ...firewall updated
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker2] Instance c5r-02-eu-central-pool-small-dv-fsn1-worker2 already exists, skipping create
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker1] Instance c5r-02-eu-central-pool-small-dv-fsn1-worker1 already exists, skipping create
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker2] Instance c5r-02-eu-central-pool-small-dv-nbg1-worker2 already exists, skipping create
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker1] Instance c5r-02-eu-central-pool-small-dv-nbg1-worker1 already exists, skipping create
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker2] Instance status: running
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker1] Instance status: running
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker2] Instance status: running
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker1] Instance status: running
[Instance c5r-02-eu-central-master3] Cloud init finished: 23.74 - Thu, 22 Aug 2024 22:16:51 +0000 - v. 24.1.3-0ubuntu3.3
[Instance c5r-02-eu-central-master3] [INFO] Using v1.29.0+k3s1 as release
[Instance c5r-02-eu-central-master3] [INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.29.0+k3s1/sha256sum-amd64.txt
[Instance c5r-02-eu-central-master3] [INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.29.0+k3s1/k3s
[Instance c5r-02-eu-central-master3] [INFO] Verifying binary download
[Instance c5r-02-eu-central-master3] [INFO] Installing k3s to /usr/local/bin/k3s
[Instance c5r-02-eu-central-master3] [INFO] Skipping installation of SELinux RPM
[Instance c5r-02-eu-central-master3] [INFO] Creating /usr/local/bin/kubectl symlink to k3s
[Instance c5r-02-eu-central-master3] [INFO] Creating /usr/local/bin/crictl symlink to k3s
[Instance c5r-02-eu-central-master3] [INFO] Creating /usr/local/bin/ctr symlink to k3s
[Instance c5r-02-eu-central-master3] [INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[Instance c5r-02-eu-central-master3] [INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[Instance c5r-02-eu-central-master3] [INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[Instance c5r-02-eu-central-master3] [INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[Instance c5r-02-eu-central-master3] [INFO] systemd: Enabling k3s unit
[Instance c5r-02-eu-central-master3] [INFO] systemd: Starting k3s
[Instance c5r-02-eu-central-master3] Waiting for the control plane to be ready...
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker2] Waiting for successful ssh connectivity with instance c5r-02-eu-central-pool-small-dv-fsn1-worker2...
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker1] Waiting for successful ssh connectivity with instance c5r-02-eu-central-pool-small-dv-fsn1-worker1...
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker2] Waiting for successful ssh connectivity with instance c5r-02-eu-central-pool-small-dv-nbg1-worker2...
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker1] Waiting for successful ssh connectivity with instance c5r-02-eu-central-pool-small-dv-nbg1-worker1...
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker1] ...instance c5r-02-eu-central-pool-small-dv-fsn1-worker1 is now up.
[Instance c5r-02-eu-central-pool-small-dv-fsn1-worker2] ...instance c5r-02-eu-central-pool-small-dv-fsn1-worker2 is now up.
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker1] ...instance c5r-02-eu-central-pool-small-dv-nbg1-worker1 is now up.
[Instance c5r-02-eu-central-pool-small-dv-nbg1-worker2] ...instance c5r-02-eu-central-pool-small-dv-nbg1-worker2 is now up.
[Control plane] Generating the kubeconfig file to /home/pljr/hetzner-k3s/c5r-02-eu-central/kubeconfig...
error: no context exists with the name: "c5r-02-eu-central-master3"
[Control plane] ...kubeconfig file generated as /home/pljr/hetzner-k3s/c5r-02-eu-central/kubeconfig.
Unhandled exception in spawn: timeout after 00:00:30 (Tasker::Timeout)
from /usr/lib/crystal/core/channel.cr:453:10 in 'timeout'
from /home/runner/work/hetzner-k3s/hetzner-k3s/src/kubernetes/installer.cr:124:7 in 'run'
from /usr/lib/crystal/core/fiber.cr:143:11 in 'run'
from ???
I think the problem is that the k3s version you are using doesn't support the embedded registry mirror (https://docs.k3s.io/installation/registry-mirror).
Try upgrading k3s first. It's on my list to add a validation for this, since it can be confusing :)
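For context, the embedded registry mirror is a feature of newer k3s releases; per the linked docs it is enabled on the k3s servers roughly as sketched below (shown only to illustrate why the k3s version matters; as far as I understand, hetzner-k3s sets this up itself when the corresponding option is enabled in its config):

# /etc/rancher/k3s/config.yaml on the server nodes: turn on the embedded mirror
embedded-registry: true

# /etc/rancher/k3s/registries.yaml: registries whose images may be pulled
# through the embedded mirror
mirrors:
  docker.io:
  registry.k8s.io: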
I'm using the latest version that I can; I originally tried with 1.30.
Output of hetzner-k3s releases:
Can you run rm /tmp/k3s-releases.yaml and then run hetzner-k3s releases again? The releases are cached to prevent issues with GitHub rate limiting. You should see that 1.29 is available up to v1.29.7+k3s1.
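In other words, something like:

rm /tmp/k3s-releases.yaml   # drop the cached release list
hetzner-k3s releases        # fetch the list of available k3s releases again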
Thanks, this worked and the command succeeded. However, in the kubeconfig I see 3 clusters (one per master node); is that normal? I would expect only 1 cluster to be created, as was the case in 1.1.5.
Yep, it's normal :)
In 1.1.5, when you create an HA cluster the tool also creates a load balancer for the Kubernetes API, and the load balancer distributes requests to the multiple masters in a round-robin fashion. That approach had a couple of issues, though.
In 2.x, instead of creating a load balancer, we generate a composite kubeconfig file with one context per master, so you can access the API through any master directly by just switching from one context to another.
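Schematically, such a composite kubeconfig looks something like the sketch below (all names, addresses, and credentials are placeholders rather than the tool's exact output): one cluster entry and one context per master, a single shared user, and current-context pre-set to one of the masters.

apiVersion: v1
kind: Config
clusters:
- name: mycluster-master1
  cluster:
    server: https://<master1-ip>:6443
    certificate-authority-data: <ca-data>
- name: mycluster-master2
  cluster:
    server: https://<master2-ip>:6443
    certificate-authority-data: <ca-data>
- name: mycluster-master3
  cluster:
    server: https://<master3-ip>:6443
    certificate-authority-data: <ca-data>
contexts:
- name: mycluster-master1
  context:
    cluster: mycluster-master1
    user: mycluster
- name: mycluster-master2
  context:
    cluster: mycluster-master2
    user: mycluster
- name: mycluster-master3
  context:
    cluster: mycluster-master3
    user: mycluster
current-context: mycluster-master1
users:
- name: mycluster
  user:
    client-certificate-data: <client-cert-data>
    client-key-data: <client-key-data>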
This means that if you have upgraded an HA 1.x cluster to 2.x, you can safely delete the load balancer that 1.x created for the API (just make sure you delete the right load balancer, of course).
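If you manage Hetzner resources with the hcloud CLI, that cleanup would be something along these lines (the load balancer name is a placeholder; double-check which one it is before deleting):

hcloud load-balancer list                 # identify the API load balancer created by 1.x
hcloud load-balancer delete <lb-name>     # delete it once you are sure it's the right one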
Hope this helps. I guess we can close this issue now since it's resolved?
Thanks for the explanation.
This concept is new to me and as such a bit confusing; I guess I'll have to do a bit more research into it, mainly because I don't understand how I can have "3 clusters" that act as 1.
Anyway, that's no longer related to my issue, which has been resolved.
Thanks!
It's not 3 clusters; it's 3 contexts referring to the same cluster, so that you can access the cluster through any master. You can just ignore them and use the kubeconfig as usual. The multiple contexts are only useful when, for example, the master you are currently using (the default is set automatically by the tool) is having problems or is down; in that case you can connect to the cluster through another master by just switching context. It's as simple as that :)
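For example, checking and switching contexts is just the usual kubectl workflow (the context name below is a placeholder):

kubectl config get-contexts                         # list the contexts in the kubeconfig
kubectl config use-context <other-master-context>   # point kubectl at another master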
Hey, I'm trying to upgrade from 1.1.5 to the new 2.0.3.
I tried to create a simple cluster in a new Hetzner project, but the create command gets stuck and throws an unexpected exception.
Config:
Logs: