vitobotta / hetzner-k3s

The easiest and fastest way to create and manage Kubernetes clusters in Hetzner Cloud using the lightweight distribution k3s by Rancher.
MIT License

Failed to create private network #427

Closed · artemistomaras closed this issue 2 months ago

artemistomaras commented 2 months ago

Version: v2.0.5

Configuration:

hetzner_token: token
cluster_name: hello-world
# hetzner-k3s gives the next names to hosts: hello-world-cx21-master1 / hello-world-cpx21-pool-cpx31-worker1
kubeconfig_path: "./kubeconfig"
# or /cluster/kubeconfig if you are going to use Docker
k3s_version: v1.23.3+k3s1
public_ssh_key_path: "~/.ssh/id_rsa.pub"
private_ssh_key_path: "~/.ssh/id_rsa"
use_ssh_agent: true
ssh_allowed_networks:
 - 0.0.0.0/0
api_allowed_networks:
 - 0.0.0.0/0
schedule_workloads_on_masters: false
masters_pool:
  instance_type: cx21
  instance_count: 3
  location: nbg1
worker_node_pools:
- name: small
  instance_type: cpx21
  instance_count: 4
  location: hel1
- name: big
  instance_type: cpx31
  instance_count: 2
  location: fsn1
  autoscaling:
    enabled: true
    min_instances: 0
    max_instances: 3

Error:

[Private Network] Failed to create private network: {
    "error": {
        "code": "json_error",
        "message": "A valid JSON document is required."
    }
}

I added some logs in client.cr:

  def post(path, params = {} of KeyType => ValueType)
    puts "#{api_url}#{path}"
    puts params.to_json
    puts headers
    response = with_rate_limit do
      Crest.post(
        "#{api_url}#{path}",
        params.to_json,
        json: true,
        headers: headers,
        handle_errors: false
      )
    end

    handle_response(response)
  end

I get:

https://api.hetzner.cloud/v1/networks
{"name":"hello-world","ip_range":"10.0.0.0/16","subnets":[{"ip_range":"10.0.0.0/16","network_zone":"eu-central","type":"cloud"}]}
{"Authorization" => "Bearer token"}
[Private Network] Failed to create private network: {
    "error": {
        "code": "json_error",
        "message": "A valid JSON document is required."
    }
}

Using cURL with the above payload works fine:

 curl \
        -X POST \
        -H "Authorization: Bearer token" \
        -H "Content-Type: application/json" \
        -d '{"name":"hello-world","ip_range":"10.0.0.0/16","subnets":[{"ip_range":"10.0.0.0/16","network_zone":"eu-central","type":"cloud"}]}' \
        "https://api.hetzner.cloud/v1/networks"
{
    "network": {
        "name": "hello-world",
        "created": "2024-08-29T10:03:01Z",
        "id": 10078568,
        "ip_range": "10.0.0.0/16",
        "labels": {},
        "load_balancers": [],
        "servers": [],
        "protection": {
            "delete": false
        },
        "routes": [],
        "subnets": [
            {
                "gateway": "10.0.0.1",
                "ip_range": "10.0.0.0/16",
                "network_zone": "eu-central",
                "type": "cloud",
                "vswitch_id": null
            }
        ],
        "expose_routes_to_vswitch": false
    }
}
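
Side note: the manual curl call above actually created a network named hello-world in the project (id 10078568 in the response). If it was only meant as a test, the leftover network can be deleted through the same API before running hetzner-k3s again, for example:

 curl \
        -X DELETE \
        -H "Authorization: Bearer token" \
        "https://api.hetzner.cloud/v1/networks/10078568"
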
vitobotta commented 2 months ago

Hi, you're using the old config format. The format has changed in v2.0.0 so please check the release notes for that version if you are upgrading an existing cluster. If not, check the "Creating a cluster" page for information on the new configuration format.
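
For a quick comparison, the SSH and allowed-networks settings that used to be top-level keys now live under a networking section. This is only a partial sketch; see the docs for the full schema:

networking:
  ssh:
    use_agent: true                        # was use_ssh_agent
    public_key_path: "~/.ssh/id_rsa.pub"   # was public_ssh_key_path
    private_key_path: "~/.ssh/id_rsa"      # was private_ssh_key_path
  allowed_networks:
    ssh:                                   # was ssh_allowed_networks
      - 0.0.0.0/0
    api:                                   # was api_allowed_networks
      - 0.0.0.0/0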

artemistomaras commented 2 months ago

I get the same error with this:

---
hetzner_token: token
cluster_name: test
kubeconfig_path: "./kubeconfig"
k3s_version: v1.30.3+k3s1

networking:
  ssh:
    port: 22
    use_agent: false # set to true if your key has a passphrase
    public_key_path: "~/.ssh/id_rsa.pub"
    private_key_path: "~/.ssh/id_rsa"
  allowed_networks:
    ssh:
      - 0.0.0.0/0
    api: # this will firewall port 6443 on the nodes; it will NOT firewall the API load balancer
      - 0.0.0.0/0
  public_network:
    ipv4: true
    ipv6: false
  private_network:
    enabled: true
    subnet: 10.0.0.0/16
    existing_network_name: ""
  cni:
    enabled: true
    encryption: false
    mode: flannel

  # cluster_cidr: 10.244.0.0/16 # optional: a custom IPv4/IPv6 network CIDR to use for pod IPs
  # service_cidr: 10.43.0.0/16 # optional: a custom IPv4/IPv6 network CIDR to use for service IPs. Warning, if you change this, you should also change cluster_dns!
  # cluster_dns: 10.43.0.10 # optional: IPv4 Cluster IP for coredns service. Needs to be an address from the service_cidr range

# manifests:
#   cloud_controller_manager_manifest_url: "https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/download/v1.20.0/ccm-networks.yaml"
#   csi_driver_manifest_url: "https://raw.githubusercontent.com/hetznercloud/csi-driver/v2.9.0/deploy/kubernetes/hcloud-csi.yml"
#   system_upgrade_controller_deployment_manifest_url: "https://github.com/rancher/system-upgrade-controller/releases/download/v0.13.4/system-upgrade-controller.yaml"
#   system_upgrade_controller_crd_manifest_url: "https://github.com/rancher/system-upgrade-controller/releases/download/v0.13.4/crd.yaml"
#   cluster_autoscaler_manifest_url: "https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/hetzner/examples/cluster-autoscaler-run-on-master.yaml"

datastore:
  mode: etcd # etcd (default) or external
  external_datastore_endpoint: postgres://....

schedule_workloads_on_masters: false

# image: rocky-9 # optional: default is ubuntu-24.04
# autoscaling_image: 103908130 # optional, defaults to the `image` setting
# snapshot_os: microos # optional: specifies the OS type when using a custom snapshot

masters_pool:
  instance_type: cpx21
  instance_count: 3
  location: nbg1

worker_node_pools:
- name: small-static
  instance_type: cpx21
  instance_count: 4
  location: nbg1
  # image: debian-11
  # labels:
  #   - key: purpose
  #     value: blah
  # taints:
  #   - key: something
  #     value: value1:NoSchedule
- name: medium-autoscaled
  instance_type: cpx31
  instance_count: 2
  location: nbg1
  autoscaling:
    enabled: true
    min_instances: 0
    max_instances: 3

embedded_registry_mirror:
  enabled: true

# additional_packages:
# - somepackage

# post_create_commands:
# - apt update
# - apt upgrade -y
# - apt autoremove -y

# kube_api_server_args:
# - arg1
# - ...
# kube_scheduler_args:
# - arg1
# - ...
# kube_controller_manager_args:
# - arg1
# - ...
# kube_cloud_controller_manager_args:
# - arg1
# - ...
# kubelet_args:
# - arg1
# - ...
# kube_proxy_args:
# - arg1
# - ...
# api_server_hostname: k8s.example.com # optional: DNS for the k8s API LoadBalancer. After the script has run, create a DNS record with the address of the API LoadBalancer.

Output:

/home/app/hetzner-k3s  > ./hetzner-k3s create --config cluster-config.yaml | tee create.log
[Configuration] Validating configuration...
[Configuration] ...configuration seems valid.
[Private Network] Creating private network...
https://api.hetzner.cloud/v1/networks
{"name":"test","ip_range":"10.0.0.0/16","subnets":[{"ip_range":"10.0.0.0/16","network_zone":"eu-central","type":"cloud"}]}
{"Authorization" => "Bearer token"}
[Private Network] Failed to create private network: {
    "error": {
        "code": "json_error",
        "message": "A valid JSON document is required."
    }
}
[Private Network] Retrying to create private network in 5 seconds..
LarvePoire commented 2 months ago

I’m getting the same error as well. In fact, I created an issue on GitHub about it before I saw yours—my bad! Here’s the link: https://github.com/vitobotta/hetzner-k3s/issues/428.

vitobotta commented 2 months ago

I am able to reproduce the issue now and am looking into it. This is super weird because just after releasing 2.0.5 I created another cluster without any issues.

LarvePoire commented 2 months ago

Okay, thank you so much for looking into it. It’s a relief to know that you were able to reproduce the error, because I have to admit, I was completely baffled by it.

There might have been some changes or bugs on the Hetzner Cloud API side, but that’s just a guess.

vitobotta commented 2 months ago

I fixed it. It was due to a change in the Crest library that came in with a dependency update PR I merged. v2.0.6 is now building with the fix; I'll update here when it's ready.
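
For anyone hitting a similar "A valid JSON document is required" error: one typical cause is double encoding, i.e. the HTTP client serializing a body that was already run through to_json, so the API receives a JSON string instead of a JSON object. A hypothetical minimal illustration of the effect in plain Crystal (standard library only, not Crest itself):

require "json"

params = {"name" => "hello-world", "ip_range" => "10.0.0.0/16"}

body  = params.to_json # {"name":"hello-world","ip_range":"10.0.0.0/16"} -- a JSON object
twice = body.to_json   # "{\"name\":\"hello-world\",...}" -- a JSON string wrapping the object

puts body
puts twice

The payload can look correct in debug output like the one above, because the debug line prints params.to_json only once; the problem only shows up in what is actually sent on the wire.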

vitobotta commented 2 months ago

This is the GitHub Actions build: https://github.com/vitobotta/hetzner-k3s/actions/runs/10616795551

You can try 2.0.6 as soon as the binary for your OS is ready :)

vitobotta commented 2 months ago

2.0.6 is ready, please take it for a spin :)

LarvePoire commented 2 months ago

> 2.0.6 is ready, please take it for a spin :)

Bravo! I tested it, and it worked perfectly for me. Sorry for the delay; I waited for Homebrew to pick up the update instead of doing it manually. Hats off to you, and thanks for your impressive responsiveness!

vitobotta commented 2 months ago

Thanks for confirming!

artemistomaras commented 2 months ago

@vitobotta it works now and I can create a cluster, but now I have different issues:

The following config:

---
hetzner_token: token
cluster_name: test
kubeconfig_path: "./kubeconfig"
k3s_version: v1.30.3+k3s1

networking:
  ssh:
    port: 22
    use_agent: false # set to true if your key has a passphrase
    public_key_path: "id_rsa.pub"
    private_key_path: "id_rsa"
  allowed_networks:
    ssh:
      - 0.0.0.0/0
    api: # this will firewall port 6443 on the nodes; it will NOT firewall the API load balancer
      - 0.0.0.0/0
  public_network:
    ipv4: true
    ipv6: true
  private_network:
    enabled: true
    subnet: 10.0.0.0/16
    existing_network_name: ""
  cni:
    enabled: true
    encryption: false
    mode: flannel

  # cluster_cidr: 10.244.0.0/16 # optional: a custom IPv4/IPv6 network CIDR to use for pod IPs
  # service_cidr: 10.43.0.0/16 # optional: a custom IPv4/IPv6 network CIDR to use for service IPs. Warning, if you change this, you should also change cluster_dns!
  # cluster_dns: 10.43.0.10 # optional: IPv4 Cluster IP for coredns service. Needs to be an address from the service_cidr range

# manifests:
#   cloud_controller_manager_manifest_url: "https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/download/v1.20.0/ccm-networks.yaml"
#   csi_driver_manifest_url: "https://raw.githubusercontent.com/hetznercloud/csi-driver/v2.9.0/deploy/kubernetes/hcloud-csi.yml"
#   system_upgrade_controller_deployment_manifest_url: "https://github.com/rancher/system-upgrade-controller/releases/download/v0.13.4/system-upgrade-controller.yaml"
#   system_upgrade_controller_crd_manifest_url: "https://github.com/rancher/system-upgrade-controller/releases/download/v0.13.4/crd.yaml"
#   cluster_autoscaler_manifest_url: "https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/hetzner/examples/cluster-autoscaler-run-on-master.yaml"

datastore:
  mode: etcd # etcd (default) or external
  external_datastore_endpoint: postgres://....

schedule_workloads_on_masters: false

# image: rocky-9 # optional: default is ubuntu-24.04
# autoscaling_image: 103908130 # optional, defaults to the `image` setting
# snapshot_os: microos # optional: specifies the OS type when using a custom snapshot

masters_pool:
  instance_type: cx22
  instance_count: 3
  location: nbg1

worker_node_pools:
- name: small-static
  instance_type: cpx21
  instance_count: 1
  location: hel1
  # image: debian-11
  # labels:
  #   - key: purpose
  #     value: blah
  # taints:
  #   - key: something
  #     value: value1:NoSchedule
- name: medium-autoscaled
  instance_type: cpx21
  instance_count: 1
  location: fsn1
  autoscaling:
    enabled: true
    min_instances: 1
    max_instances: 3

embedded_registry_mirror:
  enabled: true

# additional_packages:
# - somepackage

# post_create_commands:
# - apt update
# - apt upgrade -y
# - apt autoremove -y

# kube_api_server_args:
# - arg1
# - ...
# kube_scheduler_args:
# - arg1
# - ...
# kube_controller_manager_args:
# - arg1
# - ...
# kube_cloud_controller_manager_args:
# - arg1
# - ...
# kubelet_args:
# - arg1
# - ...
# kube_proxy_args:
# - arg1
# - ...
# api_server_hostname: k8s.example.com # optional: DNS for the k8s API LoadBalancer. After the script has run, create a DNS record with the address of the API LoadBalancer.

This config does not create the second node pool (medium-autoscaled), nor a load balancer in front of the Kubernetes API.

Logs are:

[Configuration] Validating configuration...
[Configuration] ...configuration seems valid.
[Private Network] Creating private network...
[Private Network] ...private network created
[SSH key] Creating SSH key...
[SSH key] ...SSH key created
[Placement groups] Creating placement group test-masters...
[Placement groups] ...placement group test-masters created
[Placement groups] Creating placement group test-small-static-2...
[Placement groups] ...placement group test-small-static-2 created
[Instance test-master2] Creating instance test-master2 (attempt 1)...
[Instance test-master3] Creating instance test-master3 (attempt 1)...
[Instance test-master1] Creating instance test-master1 (attempt 1)...
[Instance test-master2] Instance status: off
[Instance test-master2] Powering on instance (attempt 1)
[Instance test-master2] Waiting for instance to be powered on...
[Instance test-master1] Instance status: off
[Instance test-master1] Powering on instance (attempt 1)
[Instance test-master1] Waiting for instance to be powered on...
[Instance test-master3] Instance status: off
[Instance test-master3] Powering on instance (attempt 1)
[Instance test-master3] Waiting for instance to be powered on...
[Instance test-master2] Instance status: running
[Instance test-master1] Instance status: running
[Instance test-master3] Instance status: running
[Instance test-master2] Waiting for successful ssh connectivity with instance test-master2...
[Instance test-master1] Waiting for successful ssh connectivity with instance test-master1...
[Instance test-master3] Waiting for successful ssh connectivity with instance test-master3...
[Instance test-master2] ...instance test-master2 is now up.
[Instance test-master2] ...instance test-master2 created
[Instance test-master3] ...instance test-master3 is now up.
[Instance test-master3] ...instance test-master3 created
[Instance test-master1] ...instance test-master1 is now up.
[Instance test-master1] ...instance test-master1 created
[Firewall] Creating firewall...
[Firewall] ...firewall created
[Instance test-pool-small-static-worker1] Creating instance test-pool-small-static-worker1 (attempt 1)...
[Instance test-master2] Cloud init finished: 30.24 - Fri, 30 Aug 2024 11:25:36 +0000 - v. 24.1.3-0ubuntu3.3
[Instance test-master2] [INFO]  Using v1.30.3+k3s1 as release
[Instance test-master2] [INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.30.3+k3s1/sha256sum-amd64.txt
[Instance test-master2] [INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.30.3+k3s1/k3s
[Instance test-master2] [INFO]  Verifying binary download
[Instance test-master2] [INFO]  Installing k3s to /usr/local/bin/k3s
[Instance test-master2] [INFO]  Skipping installation of SELinux RPM
[Instance test-master2] [INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[Instance test-master2] [INFO]  Creating /usr/local/bin/crictl symlink to k3s
[Instance test-master2] [INFO]  Creating /usr/local/bin/ctr symlink to k3s
[Instance test-master2] [INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[Instance test-master2] [INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[Instance test-master2] [INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[Instance test-master2] [INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[Instance test-master2] [INFO]  systemd: Enabling k3s unit
[Instance test-master2] [INFO]  systemd: Starting k3s
[Instance test-pool-small-static-worker1] Instance status: off
[Instance test-pool-small-static-worker1] Powering on instance (attempt 1)
[Instance test-pool-small-static-worker1] Waiting for instance to be powered on...
[Instance test-master2] Waiting for the control plane to be ready...
[Instance test-pool-small-static-worker1] Instance status: running
[Control plane] Generating the kubeconfig file to /home/artemis/KubernetesProjects/hetzner/hetzner-k3s/kubeconfig...
Switched to context "test-master2".
[Control plane] ...kubeconfig file generated as /home/artemis/KubernetesProjects/hetzner/hetzner-k3s/kubeconfig.
[Instance test-pool-small-static-worker1] Waiting for successful ssh connectivity with instance test-pool-small-static-worker1...
[Instance test-master2] ...k3s deployed
[Instance test-pool-small-static-worker1] ...instance test-pool-small-static-worker1 is now up.
[Instance test-pool-small-static-worker1] ...instance test-pool-small-static-worker1 created
[Instance test-master3] Cloud init finished: 45.36 - Fri, 30 Aug 2024 11:25:56 +0000 - v. 24.1.3-0ubuntu3.3
[Instance test-master1] Cloud init finished: 62.31 - Fri, 30 Aug 2024 11:26:14 +0000 - v. 24.1.3-0ubuntu3.3
[Instance test-master3] [INFO]  Using v1.30.3+k3s1 as release
[Instance test-master3] [INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.30.3+k3s1/sha256sum-amd64.txt
[Instance test-master1] [INFO]  Using v1.30.3+k3s1 as release
[Instance test-master1] [INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.30.3+k3s1/sha256sum-amd64.txt
[Instance test-master3] [INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.30.3+k3s1/k3s
[Instance test-master1] [INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.30.3+k3s1/k3s
[Instance test-master3] [INFO]  Verifying binary download
[Instance test-master3] [INFO]  Installing k3s to /usr/local/bin/k3s
[Instance test-master3] [INFO]  Skipping installation of SELinux RPM
[Instance test-master3] [INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[Instance test-master3] [INFO]  Creating /usr/local/bin/crictl symlink to k3s
[Instance test-master3] [INFO]  Creating /usr/local/bin/ctr symlink to k3s
[Instance test-master3] [INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[Instance test-master3] [INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[Instance test-master3] [INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[Instance test-master3] [INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[Instance test-master3] [INFO]  systemd: Enabling k3s unit
[Instance test-master1] [INFO]  Verifying binary download
[Instance test-master1] [INFO]  Installing k3s to /usr/local/bin/k3s
[Instance test-master1] [INFO]  Skipping installation of SELinux RPM
[Instance test-master1] [INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[Instance test-master1] [INFO]  Creating /usr/local/bin/crictl symlink to k3s
[Instance test-master1] [INFO]  Creating /usr/local/bin/ctr symlink to k3s
[Instance test-master1] [INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[Instance test-master1] [INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[Instance test-master1] [INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[Instance test-master1] [INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[Instance test-master1] [INFO]  systemd: Enabling k3s unit
[Instance test-master3] [INFO]  systemd: Starting k3s
[Instance test-master1] [INFO]  systemd: Starting k3s
[Instance test-master3] ...k3s deployed
[Instance test-master1] ...k3s deployed
[Control plane] Generating the kubeconfig file to /home/artemis/KubernetesProjects/hetzner/hetzner-k3s/kubeconfig...
Switched to context "test-master2".
[Control plane] ...kubeconfig file generated as /home/artemis/KubernetesProjects/hetzner/hetzner-k3s/kubeconfig.
[Hetzner Cloud Secret] Creating secret for Hetzner Cloud token...
[Hetzner Cloud Secret] secret/hcloud created
[Hetzner Cloud Secret] ...secret created
[Hetzner Cloud Controller] Installing Hetzner Cloud Controller Manager...
[Hetzner Cloud Controller] serviceaccount/hcloud-cloud-controller-manager created
[Hetzner Cloud Controller] clusterrolebinding.rbac.authorization.k8s.io/system:hcloud-cloud-controller-manager created
[Hetzner Cloud Controller] deployment.apps/hcloud-cloud-controller-manager created
[Hetzner Cloud Controller] Hetzner Cloud Controller Manager installed
[Hetzner CSI Driver] Installing Hetzner CSI Driver...
[Hetzner CSI Driver] serviceaccount/hcloud-csi-controller created
[Hetzner CSI Driver] storageclass.storage.k8s.io/hcloud-volumes created
[Hetzner CSI Driver] clusterrole.rbac.authorization.k8s.io/hcloud-csi-controller created
[Hetzner CSI Driver] clusterrolebinding.rbac.authorization.k8s.io/hcloud-csi-controller created
[Hetzner CSI Driver] service/hcloud-csi-controller-metrics created
[Hetzner CSI Driver] service/hcloud-csi-node-metrics created
[Hetzner CSI Driver] daemonset.apps/hcloud-csi-node created
[Hetzner CSI Driver] deployment.apps/hcloud-csi-controller created
[Hetzner CSI Driver] csidriver.storage.k8s.io/csi.hetzner.cloud created
[Hetzner CSI Driver] Hetzner CSI Driver installed
[System Upgrade Controller] Installing System Upgrade Controller...
[System Upgrade Controller] namespace/system-upgrade created
[System Upgrade Controller] customresourcedefinition.apiextensions.k8s.io/plans.upgrade.cattle.io created
[System Upgrade Controller] clusterrole.rbac.authorization.k8s.io/system-upgrade-controller created
[System Upgrade Controller] role.rbac.authorization.k8s.io/system-upgrade-controller created
[System Upgrade Controller] clusterrole.rbac.authorization.k8s.io/system-upgrade-controller-drainer created
[System Upgrade Controller] clusterrolebinding.rbac.authorization.k8s.io/system-upgrade-drainer created
[System Upgrade Controller] clusterrolebinding.rbac.authorization.k8s.io/system-upgrade created
[System Upgrade Controller] rolebinding.rbac.authorization.k8s.io/system-upgrade created
[System Upgrade Controller] namespace/system-upgrade configured
[System Upgrade Controller] serviceaccount/system-upgrade created
[System Upgrade Controller] configmap/default-controller-env created
[System Upgrade Controller] deployment.apps/system-upgrade-controller created
[System Upgrade Controller] ...System Upgrade Controller installed
[Cluster Autoscaler] Installing Cluster Autoscaler...
[Cluster Autoscaler] serviceaccount/cluster-autoscaler created
[Cluster Autoscaler] clusterrole.rbac.authorization.k8s.io/cluster-autoscaler created
[Cluster Autoscaler] role.rbac.authorization.k8s.io/cluster-autoscaler created
[Cluster Autoscaler] clusterrolebinding.rbac.authorization.k8s.io/cluster-autoscaler created
[Cluster Autoscaler] rolebinding.rbac.authorization.k8s.io/cluster-autoscaler created
[Cluster Autoscaler] deployment.apps/cluster-autoscaler created
[Cluster Autoscaler] ...Cluster Autoscaler installed
[Instance test-pool-small-static-worker1] Cloud init finished: 25.52 - Fri, 30 Aug 2024 11:26:24 +0000 - v. 24.1.3-0ubuntu3.3
[Instance test-pool-small-static-worker1] [INFO]  Using v1.30.3+k3s1 as release
[Instance test-pool-small-static-worker1] [INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.30.3+k3s1/sha256sum-amd64.txt
[Instance test-pool-small-static-worker1] [INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.30.3+k3s1/k3s
[Instance test-pool-small-static-worker1] [INFO]  Verifying binary download
[Instance test-pool-small-static-worker1] [INFO]  Installing k3s to /usr/local/bin/k3s
[Instance test-pool-small-static-worker1] [INFO]  Skipping installation of SELinux RPM
[Instance test-pool-small-static-worker1] [INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[Instance test-pool-small-static-worker1] [INFO]  Creating /usr/local/bin/crictl symlink to k3s
[Instance test-pool-small-static-worker1] [INFO]  Creating /usr/local/bin/ctr symlink to k3s
[Instance test-pool-small-static-worker1] [INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[Instance test-pool-small-static-worker1] [INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[Instance test-pool-small-static-worker1] [INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[Instance test-pool-small-static-worker1] [INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[Instance test-pool-small-static-worker1] [INFO]  systemd: Enabling k3s-agent unit
[Instance test-pool-small-static-worker1] [INFO]  systemd: Starting k3s-agent
[Instance test-pool-small-static-worker1] ...k3s has been deployed to worker test-pool-small-static-worker1.

And kubeconfig looks like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: data
    server: https://<ip>:6443
  name: test-master1
- cluster:
    certificate-authority-data: data
    server: https://<ip>:6443
  name: test-master2
- cluster:
    certificate-authority-data: data
    server: https://<ip>:6443
  name: test-master3
contexts:
- context:
    cluster: test-master1
    user: test-master1
  name: test-master1
- context:
    cluster: test-master2
    user: test-master2
  name: test-master2
- context:
    cluster: test-master3
    user: test-master3
  name: test-master3
current-context: test-master2
kind: Config
preferences: {}
users:
- name: test-master1
  user:
    client-certificate-data: data
    client-key-data: data
- name: test-master2
  user:
    client-certificate-data: data
    client-key-data: data
- name: test-master3
  user:
    client-certificate-data: data
    client-key-data: data

(screenshot attached)

vitobotta commented 2 months ago
  1. You configured the second node pool as autoscaled, meaning its nodes are only created when needed. I see you have set the min to 1, but IIRC the autoscaler only keeps the minimum number of instances around after it has created instances at least once because of workloads pending scheduling. I know it's a bit weird, but that's how it works. You can double check that the autoscaler is working by temporarily deploying a workload with a memory request large enough to trigger the creation of multiple nodes (see the example manifest after this list) and then deleting the deployment once those nodes have been created. After a while (I think it's 10 minutes), the autoscaler will remove the extra nodes but will keep the minimum of 1 instance in your case.

  2. The new version of hetzner-k3s no longer creates a load balancer for the API. In fact, if you upgrade a cluster created with 1.x, you can now delete the load balancer that was created back then. The load balancer is replaced by a multi-context kubeconfig that lets you connect directly to any of the masters. This was done because of many requests to remove the extra cost of the load balancer, but also, more importantly, for security reasons: load balancers in Hetzner Cloud aren't covered by firewalls yet, so even if you restrict access to the API to specific networks in the config file, that restriction has no effect when you go through the load balancer. Now that you connect directly to one of the masters, the firewall applies, so it's more secure. In terms of using the cluster there is no difference: the new kubeconfig just selects one master as the default context, and if needed (e.g. that master is temporarily down) you can switch to the context for another master (see the kubectl example below). Other than situations where the default master is having problems, there is no difference at all between connecting this way and connecting via a load balancer. With the new method (which is what Rancher also uses, by the way) clusters are cheaper and more secure.
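
To make point 1 concrete, a hypothetical test manifest could look like the sketch below (the name, image and memory request are illustrative; pick a request large enough that the existing nodes can't fit all the replicas):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: autoscaler-test
spec:
  replicas: 5
  selector:
    matchLabels:
      app: autoscaler-test
  template:
    metadata:
      labels:
        app: autoscaler-test
    spec:
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
        resources:
          requests:
            memory: "2Gi"   # scheduling request only; the container doesn't actually use it

Apply it, wait for new nodes to appear in the autoscaled pool, then delete the deployment; after the idle timeout the autoscaler should scale the pool back down to the configured minimum.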
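
And for point 2, switching masters is just a matter of changing kubectl context, for example:

kubectl config get-contexts              # lists test-master1, test-master2, test-master3
kubectl config use-context test-master1  # switch if the current default master is unreachable
kubectl get nodes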

artemistomaras commented 2 months ago

Thank you very much! I did not know the details of 1) & 2)

vitobotta commented 2 months ago

> Thank you very much! I did not know the details of 1) & 2)

I should probably add some notes in the docs for the autoscaler behavior but the other thing for the kubeconfig is mentioned in the release notes for v2.0.0. I will add a link to it in the release notes for the newer versions so that people using one of those don't miss it.

artemistomaras commented 2 months ago

> Thank you very much! I did not know the details of 1) & 2)
>
> I should probably add some notes in the docs for the autoscaler behavior but the other thing for the kubeconfig is mentioned in the release notes for v2.0.0. I will add a link to it in the release notes for the newer versions so that people using one of those don't miss it.

I saw this https://github.com/vitobotta/hetzner-k3s/blob/main/docs/Setting%20up%20a%20cluster.md#1-what-load-balancers-will-be-installed

> one for Kubernetes API (this one will be installed automatically by hetzner-k3s);

and I was confused

vitobotta commented 2 months ago

> I saw this https://github.com/vitobotta/hetzner-k3s/blob/main/docs/Setting%20up%20a%20cluster.md#1-what-load-balancers-will-be-installed
>
> one for Kubernetes API (this one will be installed automatically by hetzner-k3s);
>
> and I was confused

That's now fixed, thanks for pointing it out :)