techno-tim / k3s-ansible

The easiest way to bootstrap a self-hosted High Availability Kubernetes cluster. A fully automated HA k3s etcd install with kube-vip, MetalLB, and more. Build. Destroy. Repeat.
https://technotim.live/posts/k3s-etcd-ansible/
Apache License 2.0

Automation hangs on k3s_agent : Enable and check K3s service #374

Closed AnthonyH26 closed 1 year ago

AnthonyH26 commented 1 year ago

## Expected Behavior

The playbook should run to completion and bring up the kube-vip virtual IP (`apiserver_endpoint`).

## Current Behavior

The automation hangs on "k3s_agent : Enable and check K3s service", and the kube-vip virtual IP never comes up.
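
For reference, these are the checks I'm using on k3s-master-1 to see whether kube-vip ever starts and claims the VIP (10.1.1.222 is the `apiserver_endpoint` from my vars below; the manifests path is k3s's standard auto-deploy directory, which I believe is where this playbook drops kube-vip):

```sh
# Is the VIP bound to any interface on the first master?
ip -br addr | grep 10.1.1.222

# Did the playbook drop a kube-vip manifest, and did its container ever start?
ls /var/lib/rancher/k3s/server/manifests/
sudo k3s crictl ps -a | grep -i kube-vip
```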

## Steps already taken

1. Ensured the secret (`k3s_token`) is alphanumeric (no dashes).
2. Ran the reset playbook and also recreated the VMs from scratch.
3. As I am on Jammy (22.04), I have tried both `eth0` and `ens18` (which is an alias); see the snippet below.
4. Looked at similar issues/discussions.
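
For point 3, this is how I'm confirming which interface name Ansible actually sees on the nodes (the inventory path is the repo's sample layout; adjust to wherever the hosts file lives):

```sh
# List the interfaces cloud-init actually brought up on every node
ansible -i inventory/my-cluster/hosts.ini k3s_cluster -m command -a "ip -br link"

# Show the default interface/address Ansible detects (the name here has to match flannel_iface)
ansible -i inventory/my-cluster/hosts.ini k3s_cluster -m setup -a "filter=ansible_default_ipv4"
```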

Unsure if this is useful information, but I am currently hosting this through VMware Workstation with NAT networking; DHCP/DNS is provided by Windows Server on the 10.0.0.0/8 subnet (DHCP assigned to the range 10.0.1.1-10.0.1.254).
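
Since the kube-vip address (10.1.1.222) and the MetalLB pool (10.1.1.10-10.1.1.30) sit inside that same NAT network, the only things I can really verify from the control machine are that nothing else already answers on those addresses and that they fall outside the 10.0.1.1-10.0.1.254 DHCP scope:

```sh
# apiserver_endpoint should not answer while the cluster is down
ping -c 2 10.1.1.222

# first address of the MetalLB pool should also be unused
ping -c 2 10.1.1.10
```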

Also unsure if my Terraform template will be useful here or not:

```hcl
terraform {
  required_providers {
    proxmox = {
      source = "telmate/proxmox"
      version = "2.9.14"
    }
  }
}

provider "proxmox" {
  # pm_api_url is the hostname (FQDN if you have one) of the Proxmox host you'd like to connect to in order to issue the commands. Add /api2/json at the end for the API
  pm_api_url = "https://pve.lab.local:8006/api2/json"
  pm_user = "root@pam"
  pm_password = "<redacted>"
  # api token id is in the form of: <username>@pam!<tokenId>
  #pm_user = "root@pam"
  # this is the full secret wrapped in quotes. don't worry, I've already deleted this from my proxmox cluster by the time you read this post
  # leave tls_insecure set to true unless you have your proxmox SSL certificate situation fully sorted out (if you do, you will know)
  pm_tls_insecure = true
}

# resource is formatted to be "[type]" "[entity_name]" so in this case
# we are looking to create a proxmox_vm_qemu entity named test_server
resource "proxmox_vm_qemu" "k3s-master" {
  count = 1 # just want 1 for now, set to 0 and apply to destroy VM

  # this now reaches out to the vars file. I could've also used this var above in the pm_api_url setting but wanted to spell it out up there. target_node is different than api_url. target_node is which node hosts the template and thus also which node will host the new VM. it can be different than the host you use to communicate with the API. the variable contains the contents "prox-1u"
  target_node = var.proxmox_host
  # another variable with contents "ubuntu-2004-cloudinit-template"
  clone = var.template_name
  name = "k3s-master-${count.index + 1}.lab.local"

  provisioner "local-exec" {
    command = "ssh-keygen -f '/home/<username>/.ssh/known_hosts' -R 'k3s-master-${count.index + 1}'"

  }
  # basic VM settings here. agent refers to guest agent
  agent = 1
  os_type = "cloud-init"
  cores = 2
  sockets = 1
  cpu = "kvm64"
  memory = 2048
  scsihw = "virtio-scsi-pci"
  bootdisk = "scsi0"
  kvm = false
  ciuser = "<username>"

  args = "-smbios type=1,serial=ds=nocloud-net;h=k3s-master-${count.index + 1}.lab.local"

  disk {
    slot = 0
    # set disk size here. leave it small for testing because expanding the disk takes time.
    size = "35G"
    type = "scsi"
    storage = "local-lvm"
    discard = "ignore"
  }

  # if you want two NICs, just copy this whole network section and duplicate it
  network {
    model = "virtio"
    bridge = "vmbr0"
  }
  # tell Terraform to ignore changes to the network block (e.g. the MAC address assigned at clone time) for the life of the VM
  lifecycle {
    ignore_changes = [
      network,
    ]
  }

  # using DHCP here. With static addressing, ${count.index + 1} could be appended to a base
  # address so each VM gets its own IP (.91, .92, .93, etc.) when creating multiple VMs
  ipconfig0 = "ip=dhcp"

  # sshkeys set using variables. the variable contains the text of the key.
  sshkeys = <<EOF
  ${var.ssh_key}
  EOF
}

resource "proxmox_vm_qemu" "k3s-node" {
  count = 2 # two worker nodes; set to 0 and apply to destroy the VMs

  # this now reaches out to the vars file. I could've also used this var above in the pm_api_url setting but wanted to spell it out up there. target_node is different than api_url. target_node is which node hosts the template and thus also which node will host the new VM. it can be different than the host you use to communicate with the API. the variable contains the contents "prox-1u"
  target_node = var.proxmox_host
  # another variable with contents "ubuntu-2004-cloudinit-template"
  clone = var.template_name
  name = "k3s-node-${count.index + 1}.lab.local"

  provisioner "local-exec" {
    command = "ssh-keygen -f '/home/<username>/.ssh/known_hosts' -R 'k3s-node-${count.index + 1}'"

  }
  # basic VM settings here. agent refers to guest agent
  agent = 1
  os_type = "cloud-init"
  cores = 2
  sockets = 1
  cpu = "kvm64"
  memory = 2048
  scsihw = "virtio-scsi-pci"
  bootdisk = "scsi0"
  kvm = false
  ciuser = "<username>"

  args = "-smbios type=1,serial=ds=nocloud-net;h=k3s-node-${count.index + 1}.lab.local"

  disk {
    slot = 0
    # set disk size here. leave it small for testing because expanding the disk takes time.
    size = "35G"
    type = "scsi"
    storage = "local-lvm"
    discard = "ignore"
  }

  # if you want two NICs, just copy this whole network section and duplicate it
  network {
    model = "virtio"
    bridge = "vmbr0"
  }
  # tell Terraform to ignore changes to the network block (e.g. the MAC address assigned at clone time) for the life of the VM
  lifecycle {
    ignore_changes = [
      network,
    ]
  }

  # using DHCP here. With static addressing, ${count.index + 1} could be appended to a base
  # address so each VM gets its own IP (.91, .92, .93, etc.) when creating multiple VMs
  ipconfig0 = "ip=dhcp"

  # sshkeys set using variables. the variable contains the text of the key.
  sshkeys = <<EOF
  ${var.ssh_key}
  EOF
}
```
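
Since the inventory further down refers to the nodes by FQDN, this is the post-`terraform apply` sanity check I run from the control machine (hostnames and the `a09hopper` user are from my setup):

```sh
# Confirm the Windows Server DNS resolves the cloned VMs and key-based SSH works
for h in k3s-master-1 k3s-node-1 k3s-node-2; do
  getent hosts "$h.lab.local"
  ssh -o BatchMode=yes "a09hopper@$h.lab.local" hostname
done
```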

## Steps to Reproduce


1. Deploy the VMs using Terraform
2. Run the playbook

## Context (variables)

Operating system: Ubuntu 20.04

Hardware:
VMware Workstation virtualising Proxmox:
Host hardware: 64 GB RAM (32 GB assigned to Proxmox, 2 GB to each node/master),
Intel i9-12900K (12 cores assigned to Proxmox, 2 to each node/master)
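
Because Proxmox itself is virtualised in VMware Workstation (and the Terraform above sets `kvm = false`), whether nested virtualization is exposed to the Proxmox VM changes how the cloned guests run. On the Proxmox host this shows whether VT-x made it through:

```sh
# Non-zero means the virtualised Proxmox host sees VT-x/AMD-V and could run KVM-backed guests
grep -Ec '(vmx|svm)' /proc/cpuinfo
lscpu | grep -i virtualization
```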

### Variables Used

`all.yml`

```yml
---
k3s_version: v1.25.12+k3s1
# this is the user that has ssh access to these machines
ansible_user: a09hopper
systemd_dir: /etc/systemd/system

# Set your timezone
system_timezone: "Europe/London"

# interface which will be used for flannel
flannel_iface: "eth0"

# apiserver_endpoint is virtual ip-address which will be configured on each master
apiserver_endpoint: "10.1.1.222"

# k3s_token is required so that masters can talk together securely
# this token should be alpha numeric only
k3s_token: "1234567890abcdefghijklmnopqrstuvwxyz"

# The IP on which the node is reachable in the cluster.
# Here, a sensible default is provided, you can still override
# it for each of your hosts, though.
k3s_node_ip: '{{ ansible_facts[flannel_iface]["ipv4"]["address"] }}'

# Disable the taint manually by setting: k3s_master_taint = false
k3s_master_taint: "{{ true if groups['node'] | default([]) | length >= 1 else false }}"

# these arguments are recommended for servers as well as agents:
extra_args: >-
  --flannel-iface={{ flannel_iface }}
  --node-ip={{ k3s_node_ip }}

# change these to your liking, the only required are: --disable servicelb, --tls-san {{ apiserver_endpoint }}
extra_server_args: >-
  {{ extra_args }}
  {{ '--node-taint node-role.kubernetes.io/master=true:NoSchedule' if k3s_master_taint else '' }}
  --tls-san {{ apiserver_endpoint }}
  --disable servicelb
  --disable traefik
extra_agent_args: >-
  {{ extra_args }}

# image tag for kube-vip
kube_vip_tag_version: "v0.5.12"

# metallb type frr or native
metal_lb_type: "native"

# metallb mode layer2 or bgp
metal_lb_mode: "layer2"

# bgp options
# metal_lb_bgp_my_asn: "64513"
# metal_lb_bgp_peer_asn: "64512"
# metal_lb_bgp_peer_address: "192.168.30.1"

# image tag for metal lb
metal_lb_speaker_tag_version: "v0.13.9"
metal_lb_controller_tag_version: "v0.13.9"

# metallb ip range for load balancer
metal_lb_ip_range: "10.1.1.10-10.1.1.30"

# Only enable if your nodes are proxmox LXC nodes, make sure to configure your proxmox nodes
# in your hosts.ini file.
# Please read https://gist.github.com/triangletodd/02f595cd4c0dc9aac5f7763ca2264185 before using this.
# Most notably, your containers must be privileged, and must not have nesting set to true.
# Please note this script disables most of the security of lxc containers, with the trade off being that lxc
# containers are significantly more resource efficient compared to full VMs.
# Mixing and matching VMs and lxc containers is not supported, ymmv if you want to do this.
# I would only really recommend using this if you have particularly low powered proxmox nodes where the overhead of
# VMs would use a significant portion of your available resources.
proxmox_lxc_configure: false
# the user that you would use to ssh into the host, for example if you run ssh some-user@my-proxmox-host,
# set this value to some-user
proxmox_lxc_ssh_user: root
# the unique proxmox ids for all of the containers in the cluster, both worker and master nodes
proxmox_lxc_ct_ids:
  - 200
  - 201
  - 202
  - 203
  - 204
```
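
To confirm these variables actually end up in the rendered server unit on the first master (assuming the playbook installs it as `k3s.service` under the `systemd_dir` above), I check:

```sh
# ExecStart should contain --flannel-iface=..., --node-ip=..., --tls-san 10.1.1.222,
# --disable servicelb and --disable traefik
cat /etc/systemd/system/k3s.service
systemctl status k3s --no-pager
```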

### Hosts

`host.ini`

```ini
[master]
k3s-master-1.lab.local

[node]
k3s-node-1.lab.local
k3s-node-2.lab.local

# only required if proxmox_lxc_configure: true
# must contain all proxmox instances that have a master or worker node
# [proxmox]
# 192.168.30.43

[k3s_cluster:children]
master
node
```
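
A quick check that this inventory parses into the groups the playbook expects (again assuming the repo's sample inventory path):

```sh
ansible-inventory -i inventory/my-cluster/hosts.ini --graph
ansible -i inventory/my-cluster/hosts.ini k3s_cluster -m ping
```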

k3s-master-1 `journalctl -u k3s` logs:

```
Oct 03 20:21:43 k3s-master-1 k3s[2989]: W1003 20:21:43.501020    2989 reflector.go:424] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 03 20:21:43 k3s-master-1 k3s[2989]: E1003 20:21:43.503693    2989 reflector.go:140] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 03 20:22:03 k3s-master-1 k3s[2989]: W1003 20:22:03.407555    2989 reflector.go:424] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
Oct 03 20:22:03 k3s-master-1 k3s[2989]: E1003 20:22:03.408114    2989 reflector.go:140] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
Oct 03 20:22:12 k3s-master-1 k3s[2989]: W1003 20:22:12.645787    2989 reflector.go:424] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
Oct 03 20:22:12 k3s-master-1 k3s[2989]: E1003 20:22:12.646198    2989 reflector.go:140] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
Oct 03 20:22:28 k3s-master-1 k3s[2989]: W1003 20:22:28.572797    2989 reflector.go:424] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 03 20:22:28 k3s-master-1 k3s[2989]: E1003 20:22:28.573232    2989 reflector.go:140] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 03 20:22:37 k3s-master-1 k3s[2989]: W1003 20:22:37.056147    2989 reflector.go:424] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
Oct 03 20:22:37 k3s-master-1 k3s[2989]: E1003 20:22:37.056740    2989 reflector.go:140] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
Oct 03 20:22:51 k3s-master-1 k3s[2989]: W1003 20:22:51.672905    2989 reflector.go:424] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
Oct 03 20:22:51 k3s-master-1 k3s[2989]: E1003 20:22:51.673171    2989 reflector.go:140] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
Oct 03 20:23:01 k3s-master-1 k3s[2989]: W1003 20:23:01.282339    2989 reflector.go:424] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 03 20:23:01 k3s-master-1 k3s[2989]: E1003 20:23:01.282883    2989 reflector.go:140] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 03 20:23:06 k3s-master-1 k3s[2989]: E1003 20:23:06.603850    2989 secret.go:192] Couldn't get secret metallb-system/memberlist: secret "memberlist" not found
Oct 03 20:23:06 k3s-master-1 k3s[2989]: E1003 20:23:06.604432    2989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83656216-4a23-43cf-914b-b437885d5f03-memberlist podName:83656216-4a23-43cf-914b-b437885d5f03 nodeName:}" failed. No retries permitted until 2023-10-03 20:25:08.604308993 +0100 BST m=+651.652874490 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/83656216-4a23-43cf-914b-b437885d5f03-memberlist") pod "speaker-c752w" (UID: "83656216-4a23-43cf-914b-b437885d5f03") : secret "memberlist" not found
Oct 03 20:23:21 k3s-master-1 k3s[2989]: W1003 20:23:21.543834    2989 reflector.go:424] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
Oct 03 20:23:21 k3s-master-1 k3s[2989]: E1003 20:23:21.545475    2989 reflector.go:140] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
Oct 03 20:23:23 k3s-master-1 k3s[2989]: W1003 20:23:23.793000    2989 reflector.go:424] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
Oct 03 20:23:23 k3s-master-1 k3s[2989]: E1003 20:23:23.793353    2989 reflector.go:140] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
Oct 03 20:23:30 k3s-master-1 k3s[2989]: E1003 20:23:30.364642    2989 kubelet.go:1731] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[memberlist], unattached volumes=[memberlist kube-api-access-cdwlj]: timed out waiting for the condition" pod="metallb-system/speaker-c752w"
Oct 03 20:23:30 k3s-master-1 k3s[2989]: E1003 20:23:30.365035    2989 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[memberlist], unattached volumes=[memberlist kube-api-access-cdwlj]: timed out waiting for the condition" pod="metallb-system/speaker-c752w" podUID=83656216-4a23-43cf-914b-b437885d5f03
Oct 03 20:23:31 k3s-master-1 k3s[2989]: W1003 20:23:31.492037    2989 reflector.go:424] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 03 20:23:31 k3s-master-1 k3s[2989]: E1003 20:23:31.499821    2989 reflector.go:140] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 03 20:23:56 k3s-master-1 k3s[2989]: W1003 20:23:56.303828    2989 reflector.go:424] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
Oct 03 20:23:56 k3s-master-1 k3s[2989]: E1003 20:23:56.304112    2989 reflector.go:140] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:k3s-controller" cannot list resource "pods" in API group "" at the cluster scope
Oct 03 20:24:02 k3s-master-1 k3s[2989]: W1003 20:24:02.831049    2989 reflector.go:424] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
Oct 03 20:24:02 k3s-master-1 k3s[2989]: E1003 20:24:02.835489    2989 reflector.go:140] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:k3s-controller" cannot list resource "namespaces" in API group "" at the cluster scope
Oct 03 20:24:02 k3s-master-1 k3s[2989]: W1003 20:24:02.921809    2989 reflector.go:424] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 03 20:24:02 k3s-master-1 k3s[2989]: E1003 20:24:02.922269    2989 reflector.go:140] k8s.io/client-go@v1.25.12-k3s1/tools/cache/reflector.go:169: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:k3s-controller" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
Oct 03 20:24:18 k3s-master-1 k3s[2989]: time="2023-10-03T20:24:18+01:00" level=info msg="COMPACT revision 0 has already been compacted"
```
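
The RBAC and memberlist messages above look like follow-on noise; to see whether the server itself is healthy despite them, I'm using k3s's embedded kubectl on k3s-master-1:

```sh
# Node should be Ready; kube-vip and the MetalLB pods should not be stuck Pending/CrashLoopBackOff
sudo k3s kubectl get nodes -o wide
sudo k3s kubectl -n kube-system get pods -o wide
sudo k3s kubectl -n metallb-system get pods,secrets
```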

k3s-node-1 `journalctl -u k3s-node` logs:

```
Oct 03 20:17:06 k3s-node-1 systemd[1]: Starting Lightweight Kubernetes...
Oct 03 20:17:07 k3s-node-1 k3s[2391]: time="2023-10-03T20:17:07+01:00" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock"
Oct 03 20:17:07 k3s-node-1 k3s[2391]: time="2023-10-03T20:17:07+01:00" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6"
Oct 03 20:17:27 k3s-node-1 k3s[2391]: time="2023-10-03T20:17:27+01:00" level=info msg="Starting k3s agent v1.25.12+k3s1 (7515237f)"
Oct 03 20:17:27 k3s-node-1 k3s[2391]: time="2023-10-03T20:17:27+01:00" level=info msg="Adding server to load balancer k3s-agent-load-balancer: 10.1.1.222:6443"
Oct 03 20:17:27 k3s-node-1 k3s[2391]: time="2023-10-03T20:17:27+01:00" level=info msg="Running load balancer k3s-agent-load-balancer 127.0.0.1:6444 -> [10.1.1.222:6443] [default: 10.1.1.222:6443]"
Oct 03 20:17:33 k3s-node-1 k3s[2391]: time="2023-10-03T20:17:33+01:00" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:55940->127.0.0.1:6444: read: connection reset by peer"
Oct 03 20:17:39 k3s-node-1 k3s[2391]: time="2023-10-03T20:17:39+01:00" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:55954->127.0.0.1:6444: read: connection reset by peer"
Oct 03 20:17:45 k3s-node-1 k3s[2391]: time="2023-10-03T20:17:45+01:00" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:60360->127.0.0.1:6444: read: connection reset by peer"
Oct 03 20:17:52 k3s-node-1 k3s[2391]: time="2023-10-03T20:17:52+01:00" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:56366->127.0.0.1:6444: read: connection reset by peer"
Oct 03 20:17:58 k3s-node-1 k3s[2391]: time="2023-10-03T20:17:58+01:00" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:56380->127.0.0.1:6444: read: connection reset by peer"
Oct 03 20:18:04 k3s-node-1 k3s[2391]: time="2023-10-03T20:18:04+01:00" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:40384->127.0.0.1:6444: read: connection reset by peer"
Oct 03 20:18:10 k3s-node-1 k3s[2391]: time="2023-10-03T20:18:10+01:00" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:40404->127.0.0.1:6444: read: connection reset by peer"
```
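
Those `failed to get CA certs` errors are the agent's local load balancer (127.0.0.1:6444) failing to reach the VIP it was given (10.1.1.222:6443). From k3s-node-1, comparing the VIP against the master's real address should show where the connection breaks:

```sh
# Against the VIP the agent is pointed at
curl -vk https://10.1.1.222:6443/cacerts

# Against the master's actual address, for comparison
curl -vk https://k3s-master-1.lab.local:6443/cacerts
```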

## Possible Solution