k3s-io / k3s

Lightweight Kubernetes
https://k3s.io
Apache License 2.0

etcd is advertising public instead of private IP #3551

Closed · mway-niels closed this issue 3 years ago

mway-niels commented 3 years ago

Environmental Info: K3s Version:

$ k3s -v
k3s version v1.21.2+k3s1 (5a67e8dc)
go version go1.16.4

Node(s) CPU architecture, OS, and Version: Hetzner Cloud, CX21 (2 vCPUs, 4 GB RAM, 40 GB SSD)

$ uname -a
Linux k3s-server-1 5.4.0-72-generic #80-Ubuntu SMP Mon Apr 12 17:35:00 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Firewall configuration for all nodes (applies only to access from the public network, not the private network):

Ingress:
- Allow 22/tcp from static company IP
- Allow ICMP from any source
- Allow 80/tcp from any source
- Allow 443/tcp from any source
- Allow 6443/tcp from static company IP

Egress:
- Allow any port to any destination
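(For readers reproducing this outside Hetzner's cloud firewall, a roughly equivalent host-level policy can be sketched with iptables; 203.0.113.10 stands in for the static company IP and eth0 for the public interface, both hypothetical:)

# Ingress policy on the public interface only; the private network is not filtered.
iptables -A INPUT -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -i eth0 -p icmp -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --dport 6443 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -i eth0 -j DROP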

Cluster Configuration:

k3s

Formatted as: (public IP) hostname (private IP)

Describe the bug:

When trying to build a K3s cluster inside a private network, joining secondary servers/agents fails because K3s attempts to connect using the node's public IP instead of the private one.

Steps To Reproduce:

Setup for k3s-server-1:

export CP_LB_PRIVATE_IP=10.20.30.4
export CP_LB_PUBLIC_IP=192.168.178.4
export K3S_TOKEN=XXX

curl -sfL https://get.k3s.io | sh -s - server --tls-san ${CP_LB_PRIVATE_IP} --tls-san ${CP_LB_PUBLIC_IP} --advertise-address 10.20.30.2 --cluster-init --no-deploy traefik
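For completeness, the same flags can also be expressed as a k3s config file: k3s reads /etc/rancher/k3s/config.yaml if present, with flag names mapping to keys (a sketch, not tested here):

cat <<'EOF' | sudo tee /etc/rancher/k3s/config.yaml
# Equivalent of the --tls-san/--advertise-address/--cluster-init/--no-deploy flags above.
tls-san:
  - "10.20.30.4"
  - "192.168.178.4"
advertise-address: "10.20.30.2"
cluster-init: true
no-deploy: traefik
EOF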

Cluster seems to be up and running:

$ k3s kubectl get nodes
NAME           STATUS   ROLES                       AGE     VERSION
k3s-server-1   Ready    control-plane,etcd,master   2m40s   v1.21.2+k3s1

Setup for k3s-server-2:

export CP_LB_PRIVATE_IP=10.20.30.4
export K3S_TOKEN=XXX

curl -sfL https://get.k3s.io | sh -s - server --server https://${CP_LB_PRIVATE_IP}:6443 --advertise-address 10.20.30.3

Log output from k3s-server-2:

Jun 30 08:51:10 k3s-server-2 k3s[900]: time="2021-06-30T08:51:10.686191986+02:00" level=info msg="Running kube-apiserver --advertise-address=10.20.30.3 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Jun 30 08:51:10 k3s-server-2 k3s[900]: {"level":"warn","ts":"2021-06-30T08:51:10.685+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://192.168.178.2:2379  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 192.168.178.2:2379: operation was canceled\". Reconnecting..."}
Jun 30 08:51:10 k3s-server-2 k3s[900]: {"level":"info","ts":"2021-06-30T08:51:10.685+0200","caller":"embed/etcd.go:367","msg":"closed etcd server","name":"k3s-server-2-fd291c2b","data-dir":"/var/lib/rancher/k3s/server/db/etcd","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["https://192.168.178.3:2379"]}
Jun 30 08:51:10 k3s-server-2 k3s[900]: {"level":"info","ts":"2021-06-30T08:51:10.685+0200","caller":"embed/etcd.go:363","msg":"closing etcd server","name":"k3s-server-2-fd291c2b","data-dir":"/var/lib/rancher/k3s/server/db/etcd","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["https://192.168.178.3:2379"]}
Jun 30 08:51:10 k3s-server-2 k3s[900]: {"level":"warn","ts":"2021-06-30T08:51:10.683+0200","caller":"etcdserver/cluster_util.go:76","msg":"failed to get cluster response","address":"https://192.168.178.2:2380/members","error":"Get \"https://192.168.178.2:2380/members\": dial tcp 192.168.178.2:2380: i/o timeout"}
Jun 30 08:51:09 k3s-server-2 k3s[900]: {"level":"warn","ts":"2021-06-30T08:51:09.660+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = \"transport: authentication handshake failed: context canceled\". Reconnecting..."}
Jun 30 08:51:09 k3s-server-2 k3s[900]: time="2021-06-30T08:51:09.660647362+02:00" level=error msg="Failed to check local etcd status for learner management: context deadline exceeded"
Jun 30 08:51:09 k3s-server-2 k3s[900]: {"level":"warn","ts":"2021-06-30T08:51:09.660+0200","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"passthrough:///https://127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
Jun 30 08:51:04 k3s-server-2 k3s[900]: {"level":"info","ts":"2021-06-30T08:51:04.681+0200","caller":"etcdserver/backend.go:80","msg":"opened backend db","path":"/var/lib/rancher/k3s/server/db/etcd/member/snap/db","took":"4.558046ms"}
Jun 30 08:51:04 k3s-server-2 k3s[900]: {"level":"info","ts":"2021-06-30T08:51:04.675+0200","caller":"embed/etcd.go:302","msg":"starting an etcd server","etcd-version":"3.4.13","git-sha":"Not provided (use ./build instead of go build)","go-version":"go1.16.4","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"k3s-server-2-fd291c2b","data-dir":"/var/lib/rancher/k3s/server/db/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/rancher/k3s/server/db/etcd/member","force-new-cluster":false,"heartbeat-interval":"500ms","election-timeout":"5s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["https://192.168.178.3:2380"],"advertise-client-urls":["https://192.168.178.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.178.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"k3s-server-1-fbcb6cff=https://192.168.178.2:2380,k3s-server-2-fd291c2b=https://192.168.178.3:2380","initial-cluster-state":"existing","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":false,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":""}
Jun 30 08:51:04 k3s-server-2 k3s[900]: {"level":"info","ts":"2021-06-30T08:51:04.675+0200","caller":"embed/etcd.go:127","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.178.3:2379"]}
Jun 30 08:51:04 k3s-server-2 k3s[900]: {"level":"info","ts":"2021-06-30T08:51:04.674+0200","caller":"embed/etcd.go:468","msg":"starting with peer TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
Jun 30 08:51:04 k3s-server-2 k3s[900]: {"level":"info","ts":"2021-06-30T08:51:04.674+0200","caller":"embed/etcd.go:117","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.178.3:2380"]}
Jun 30 08:51:04 k3s-server-2 k3s[900]: time="2021-06-30T08:51:04.668380355+02:00" level=info msg="Starting etcd for cluster [k3s-server-1-fbcb6cff=https://192.168.178.2:2380 k3s-server-2-fd291c2b=https://192.168.178.3:2380]"
Jun 30 08:51:04 k3s-server-2 k3s[900]: time="2021-06-30T08:51:04.668281308+02:00" level=error msg="Failed to get member list from etcd cluster. Will assume this member is already added"
Jun 30 08:51:04 k3s-server-2 k3s[900]: {"level":"warn","ts":"2021-06-30T08:51:04.667+0200","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-3b3b29ce-89c8-41a5-821d-1f834983e3c8/192.168.178.2:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
Jun 30 08:50:44 k3s-server-2 k3s[900]: time="2021-06-30T08:50:44.658982569+02:00" level=info msg="Active TLS secret  (ver=) (count 9): map[listener.cattle.io/cn-10.20.30.3:10.20.30.3 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.178.3:192.168.178.3 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=FC618C4D9A05BED10CB6AA5D5F4D73BE4E738406]"

Expected behavior: k3s-server-2 should be able to connect to k3s-server-1 using its private IP. It should not use the public IP when another address is advertised. Running a cluster inside a private subnet and exposing only the necessary ports, not the control-plane communication ports, should be a valid production use case.

Actual behavior: k3s-server-2 attempts to connect to k3s-server-1 using its public IP, which fails since the firewall blocks connection requests on the required ports. The issue seems to be related to https://github.com/k3s-io/k3s/pull/2448.

Additional context / logs: k3s-server-1 logs, k3s-server-2 logs

brandond commented 3 years ago

What we generally refer to as the node external address is the public IP address at which the node is reachable via NAT, when that IP address is not directly bound to an interface on the node. Your configuration, where all the nodes have multiple interfaces, all with private IP addresses, doesn't appear to have what would commonly be referred to or configured as an external IP.

Have you tried just setting --node-ip to the 10.20.30.x address? I think that is what you want, as opposed to just setting the advertised address.
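(One way to check which addresses are actually bound to a node's interfaces, and which would therefore be NAT-only external addresses, is iproute2's brief listing:)

$ ip -4 -brief addr show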

mway-niels commented 3 years ago

Hi @brandond,

It seems like the issue can be resolved by adding both --node-ip $NODE_PRIVATE_IP and --node-external-ip $NODE_PUBLIC_IP to the server parameters. (It might work with just --node-ip, but I didn't test that.) The etcd cluster is now reporting the correct IP addresses.
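For reference, a sketch of the join command from above with those flags added ($NODE_PRIVATE_IP and $NODE_PUBLIC_IP are per-node placeholders):

export CP_LB_PRIVATE_IP=10.20.30.4
export K3S_TOKEN=XXX

curl -sfL https://get.k3s.io | sh -s - server \
  --server https://${CP_LB_PRIVATE_IP}:6443 \
  --node-ip ${NODE_PRIVATE_IP} \
  --node-external-ip ${NODE_PUBLIC_IP}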

Unfortunately I still can't bring up the secondary server because of the following error:

Jul 01 10:30:37 k3s-server-2 k3s[918]: {"level":"warn","ts":"2021-07-01T10:30:37.241+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 01 10:30:36 k3s-server-2 k3s[918]: {"level":"warn","ts":"2021-07-01T10:30:36.292+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 01 10:30:36 k3s-server-2 k3s[918]: time="2021-07-01T10:30:36.291702251+02:00" level=info msg="Running kube-apiserver --advertise-address=192.168.178.3 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Jul 01 10:30:36 k3s-server-2 k3s[918]: {"level":"info","ts":"2021-07-01T10:30:36.290+0200","caller":"embed/etcd.go:367","msg":"closed etcd server","name":"k3s-server-2-9382e07d","data-dir":"/var/lib/rancher/k3s/server/db/etcd","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["https://10.23.1.2:2379"]}
Jul 01 10:30:36 k3s-server-2 k3s[918]: {"level":"info","ts":"2021-07-01T10:30:36.290+0200","caller":"embed/etcd.go:363","msg":"closing etcd server","name":"k3s-server-2-9382e07d","data-dir":"/var/lib/rancher/k3s/server/db/etcd","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["https://10.23.1.2:2379"]}
Jul 01 10:30:36 k3s-server-2 k3s[918]: {"level":"warn","ts":"2021-07-01T10:30:36.289+0200","caller":"etcdserver/cluster_util.go:76","msg":"failed to get cluster response","address":"https://10.23.1.3:2380/members","error":"Get \"https://10.23.1.3:2380/members\": EOF"}
Jul 01 10:30:36 k3s-server-2 k3s[918]: {"level":"info","ts":"2021-07-01T10:30:36.275+0200","caller":"etcdserver/backend.go:80","msg":"opened backend db","path":"/var/lib/rancher/k3s/server/db/etcd/member/snap/db","took":"9.268993ms"}
Jul 01 10:30:36 k3s-server-2 k3s[918]: {"level":"info","ts":"2021-07-01T10:30:36.264+0200","caller":"embed/etcd.go:302","msg":"starting an etcd server","etcd-version":"3.4.13","git-sha":"Not provided (use ./build instead of go build)","go-version":"go1.16.4","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"k3s-server-2-9382e07d","data-dir":"/var/lib/rancher/k3s/server/db/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/rancher/k3s/server/db/etcd/member","force-new-cluster":false,"heartbeat-interval":"500ms","election-timeout":"5s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["https://10.23.1.2:2380"],"advertise-client-urls":["https://10.23.1.2:2379"],"listen-client-urls":["https://10.23.1.2:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"k3s-server-1-defb73e8=https://10.23.1.3:2380,k3s-server-2-9382e07d=https://10.23.1.2:2380","initial-cluster-state":"existing","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":false,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":""}
Jul 01 10:30:36 k3s-server-2 k3s[918]: {"level":"info","ts":"2021-07-01T10:30:36.264+0200","caller":"embed/etcd.go:127","msg":"configuring client listeners","listen-client-urls":["https://10.23.1.2:2379","https://127.0.0.1:2379"]}
Jul 01 10:30:36 k3s-server-2 k3s[918]: {"level":"info","ts":"2021-07-01T10:30:36.263+0200","caller":"embed/etcd.go:468","msg":"starting with peer TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
Jul 01 10:30:36 k3s-server-2 k3s[918]: {"level":"info","ts":"2021-07-01T10:30:36.263+0200","caller":"embed/etcd.go:117","msg":"configuring peer listeners","listen-peer-urls":["https://10.23.1.2:2380"]}
Jul 01 10:30:36 k3s-server-2 k3s[918]: time="2021-07-01T10:30:36.261459924+02:00" level=info msg="Starting etcd for cluster [k3s-server-1-defb73e8=https://10.23.1.3:2380 k3s-server-2-9382e07d=https://10.23.1.2:2380]"
Jul 01 10:30:36 k3s-server-2 k3s[918]: time="2021-07-01T10:30:36.255429283+02:00" level=info msg="Adding https://10.23.1.2:2380 to etcd cluster [k3s-server-1-defb73e8=https://10.23.1.3:2380]"

This part seems to be the issue:

"msg":"failed to get cluster response","address":"https://10.23.1.3:2380/members","error":"Get \"https://10.23.1.3:2380/members\": EOF"}

Using telnet 10.23.1.3 2380 and telnet 10.23.1.3 2379, it looks like a TCP connection can be established.

$ telnet 10.23.1.3 2380
Trying 10.23.1.3...
Connected to 10.23.1.3.
Escape character is '^]'.
^CConnection closed by foreign host.

Would you happen to know if this issue is caused by some sort of misconfiguration? (I know this might not be related to the initial problem; feel free to close the original issue.)

curl output:

$ curl https://10.23.1.2:2380/members
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
$ curl --insecure https://10.23.1.2:2380/members
curl: (56) OpenSSL SSL_read: error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate, errno 0
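(The bad certificate alert is expected here: etcd is running with client-cert-auth = true, per the "starting with peer TLS" log line above, so the peer endpoint rejects any client that does not present a certificate signed by the peer CA. A sketch of the same request using the certificate paths from that log line:)

$ sudo curl --cacert /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt \
    --cert /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt \
    --key /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key \
    https://10.23.1.2:2380/members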

(Please excuse the IP address mismatch; I reconfigured the network between tests, which is why the addresses don't match.)

k3s-server-1 logs when etcd on k3s-server-2 is attempting to connect:

Jul 01 11:37:37 k3s-server-1 k3s[949]: {"level":"warn","ts":"2021-07-01T11:37:37.996+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"546780fe37fbd460","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 01 11:37:37 k3s-server-1 k3s[949]: {"level":"warn","ts":"2021-07-01T11:37:37.996+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"546780fe37fbd460","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 01 11:37:36 k3s-server-1 k3s[949]: {"level":"warn","ts":"2021-07-01T11:37:36.466+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"546780fe37fbd460","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 01 11:37:36 k3s-server-1 k3s[949]: {"level":"warn","ts":"2021-07-01T11:37:36.465+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"546780fe37fbd460","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 01 11:37:33 k3s-server-1 k3s[949]: {"level":"warn","ts":"2021-07-01T11:37:33.992+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"546780fe37fbd460","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 01 11:37:33 k3s-server-1 k3s[949]: {"level":"warn","ts":"2021-07-01T11:37:33.992+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"546780fe37fbd460","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 01 11:37:31 k3s-server-1 k3s[949]: {"level":"warn","ts":"2021-07-01T11:37:31.466+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"546780fe37fbd460","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 01 11:37:31 k3s-server-1 k3s[949]: {"level":"warn","ts":"2021-07-01T11:37:31.465+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"546780fe37fbd460","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 01 11:37:29 k3s-server-1 k3s[949]: {"level":"warn","ts":"2021-07-01T11:37:29.987+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"546780fe37fbd460","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 01 11:37:29 k3s-server-1 k3s[949]: {"level":"warn","ts":"2021-07-01T11:37:29.987+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"546780fe37fbd460","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 01 11:37:26 k3s-server-1 k3s[949]: {"level":"warn","ts":"2021-07-01T11:37:26.465+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"546780fe37fbd460","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 01 11:37:26 k3s-server-1 k3s[949]: {"level":"warn","ts":"2021-07-01T11:37:26.464+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"546780fe37fbd460","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 01 11:37:25 k3s-server-1 k3s[949]: {"level":"warn","ts":"2021-07-01T11:37:25.983+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"546780fe37fbd460","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 01 11:37:25 k3s-server-1 k3s[949]: {"level":"warn","ts":"2021-07-01T11:37:25.982+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"546780fe37fbd460","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 01 11:37:23 k3s-server-1 k3s[949]: time="2021-07-01T11:37:23.124815277+02:00" level=warning msg="Learner  stalled at RaftAppliedIndex=0 for 4m15.124808401s"
Jul 01 11:37:23 k3s-server-1 k3s[949]: {"level":"warn","ts":"2021-07-01T11:37:23.124+0200","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-94dbe396-7145-4fcd-ad83-563f29f094d8/127.0.0.1:2379","attempt":0,"error":"rpc error: code = FailedPrecondition desc = etcdserver: can only promote a learner member which is in sync with leader"}
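(To inspect the stalled learner directly, a separately installed etcdctl can be pointed at the embedded etcd using the client certificate paths from the kube-apiserver flags above; etcd 3.4 includes an isLearner field in the member listing. A sketch:)

$ sudo ETCDCTL_API=3 etcdctl \
    --cacert /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt \
    --cert /var/lib/rancher/k3s/server/tls/etcd/client.crt \
    --key /var/lib/rancher/k3s/server/tls/etcd/client.key \
    --endpoints https://127.0.0.1:2379 \
    member list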
brandond commented 3 years ago

Can you attach the complete logs from both nodes?

mway-niels commented 3 years ago

Hello @brandond,

I've attached the log files (journalctl -r -u k3s).

k3s-server-1:

-- Logs begin at Thu 2021-07-01 12:28:12 CEST, end at Fri 2021-07-02 09:00:21 CEST. --
Jul 02 09:00:21 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:21.214+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:21 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:21.214+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:17 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:17.462+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:17 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:17.462+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:17 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:17.210+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:17 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:17.210+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:13 k3s-server-1 k3s[11184]: time="2021-07-02T09:00:13.939814963+02:00" level=warning msg="Learner  stalled at RaftAppliedIndex=0 for 3m15.939805876s"
Jul 02 09:00:13 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:13.939+0200","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-ff3770da-96e7-4a2c-91ce-6cdb55b224a9/127.0.0.1:2379","attempt":0,"error":"rpc error: code = FailedPrecondition desc = etcdserver: can only promote a learner member which is in sync with leader"}
Jul 02 09:00:13 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:13.206+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:13 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:13.206+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:12 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:12.460+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:12 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:12.460+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:09 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:09.202+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:09 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:09.202+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:07 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:07.458+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:07 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:07.458+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:05 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:05.197+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:05 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:05.197+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:02 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:02.456+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:02 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:02.455+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:01 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:01.193+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 09:00:01 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T09:00:01.193+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:58 k3s-server-1 k3s[11184]: time="2021-07-02T08:59:58.933292337+02:00" level=warning msg="Learner  stalled at RaftAppliedIndex=0 for 3m0.933284399s"
Jul 02 08:59:58 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:58.933+0200","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-ff3770da-96e7-4a2c-91ce-6cdb55b224a9/127.0.0.1:2379","attempt":0,"error":"rpc error: code = FailedPrecondition desc = etcdserver: can only promote a learner member which is in sync with leader"}
Jul 02 08:59:57 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:57.455+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:57 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:57.455+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:57 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:57.189+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:57 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:57.189+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:53 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:53.184+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:53 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:53.184+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:52 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:52.454+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:52 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:52.454+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:49 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:49.179+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:49 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:49.179+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:47 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:47.452+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:47 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:47.452+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:45 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:45.175+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:45 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:45.175+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:43 k3s-server-1 k3s[11184]: time="2021-07-02T08:59:43.933384525+02:00" level=warning msg="Learner  stalled at RaftAppliedIndex=0 for 2m45.933377885s"
Jul 02 08:59:43 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:43.933+0200","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-ff3770da-96e7-4a2c-91ce-6cdb55b224a9/127.0.0.1:2379","attempt":0,"error":"rpc error: code = FailedPrecondition desc = etcdserver: can only promote a learner member which is in sync with leader"}
Jul 02 08:59:42 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:42.452+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:42 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:42.452+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:41 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:41.171+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:41 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:41.171+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:37 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:37.451+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:37 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:37.451+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:37 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:37.165+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:37 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:37.165+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:33 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:33.160+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:33 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:33.160+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:32 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:32.450+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:32 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:32.439+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:29 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:29.156+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:29 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:29.156+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:28 k3s-server-1 k3s[11184]: time="2021-07-02T08:59:28.933795348+02:00" level=warning msg="Learner  stalled at RaftAppliedIndex=0 for 2m30.933774301s"
Jul 02 08:59:28 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:28.933+0200","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-ff3770da-96e7-4a2c-91ce-6cdb55b224a9/127.0.0.1:2379","attempt":0,"error":"rpc error: code = FailedPrecondition desc = etcdserver: can only promote a learner member which is in sync with leader"}
Jul 02 08:59:27 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:27.435+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:27 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:27.435+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:25 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:25.152+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:25 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:25.152+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:22 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:22.435+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:22 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:22.434+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:21 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:21.147+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:21 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:21.147+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:17 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:17.431+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:17 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:17.431+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:17 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:17.143+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:17 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:17.143+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:13 k3s-server-1 k3s[11184]: time="2021-07-02T08:59:13.933611868+02:00" level=warning msg="Learner  stalled at RaftAppliedIndex=0 for 2m15.933603816s"
Jul 02 08:59:13 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:13.933+0200","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-ff3770da-96e7-4a2c-91ce-6cdb55b224a9/127.0.0.1:2379","attempt":0,"error":"rpc error: code = FailedPrecondition desc = etcdserver: can only promote a learner member which is in sync with leader"}
Jul 02 08:59:13 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:13.138+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:13 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:13.138+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:12 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:12.430+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:12 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:12.430+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:09 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:09.133+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:09 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:09.132+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:07 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:07.429+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:07 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:07.426+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:05 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:05.128+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:05 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:05.127+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:02 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:02.425+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:02 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:02.425+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:01 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:01.123+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:59:01 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:59:01.122+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:58:58 k3s-server-1 k3s[11184]: time="2021-07-02T08:58:58.933742857+02:00" level=warning msg="Learner  stalled at RaftAppliedIndex=0 for 2m0.933736234s"
Jul 02 08:58:58 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:58:58.933+0200","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-ff3770da-96e7-4a2c-91ce-6cdb55b224a9/127.0.0.1:2379","attempt":0,"error":"rpc error: code = FailedPrecondition desc = etcdserver: can only promote a learner member which is in sync with leader"}
Jul 02 08:58:57 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:58:57.425+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:58:57 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:58:57.425+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:58:57 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:58:57.118+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:58:57 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:58:57.118+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:58:53 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:58:53.113+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:58:53 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:58:53.113+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:58:52 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:58:52.425+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:58:52 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:58:52.424+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:58:49 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:58:49.109+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:58:49 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:58:49.109+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:58:47 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:58:47.424+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:58:47 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:58:47.423+0200","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4444bc45bc5e52f9","rtt":"0s","error":"dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:58:45 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:58:45.105+0200","caller":"etcdserver/cluster_util.go:168","msg":"failed to get version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:58:45 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:58:45.105+0200","caller":"etcdserver/cluster_util.go:315","msg":"failed to reach the peer URL","address":"https://10.23.1.3:2380/version","remote-member-id":"4444bc45bc5e52f9","error":"Get \"https://10.23.1.3:2380/version\": dial tcp 10.23.1.3:2380: connect: connection refused"}
Jul 02 08:58:43 k3s-server-1 k3s[11184]: time="2021-07-02T08:58:43.938562798+02:00" level=warning msg="Learner  stalled at RaftAppliedIndex=0 for 1m45.938551538s"
Jul 02 08:58:43 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:58:43.938+0200","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-ff3770da-96e7-4a2c-91ce-6cdb55b224a9/127.0.0.1:2379","attempt":0,"error":"rpc error: code = FailedPrecondition desc = etcdserver: can only promote a learner member which is in sync with leader"}
[… the same warnings repeat, newest first, back to 08:56:48, identical except for timestamps: the rafthttp prober pair (ROUND_TRIPPER_RAFT_MESSAGE / ROUND_TRIPPER_SNAPSHOT) every ~5 s, the cluster_util "failed to get version" / "failed to reach the peer URL" pair every ~4 s, all dialing https://10.23.1.3:2380 and getting "connection refused"; the learner-promotion retry ("can only promote a learner member which is in sync with leader") recurs every 15 s starting 08:56:58, with "Learner stalled at RaftAppliedIndex=0" counting up from 15.9 s (08:57:13) to 1m45.9 s (08:58:43) …]
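To make sense of the loop above: the learner never advances past RaftAppliedIndex=0 because k3s-server-1 cannot open a raft connection to the peer URL the new member registered, https://10.23.1.3:2380 (see the "added member" events from 08:56:47 below), so etcd keeps refusing the promotion. The registered peer URLs can be double-checked against the local endpoint; a sketch, assuming etcdctl is installed separately (k3s does not bundle it) and the default k3s certificate paths:

# list etcd members via the local client endpoint, using k3s's etcd client certs
ETCDCTL_API=3 etcdctl member list -w table \
  --endpoints https://127.0.0.1:2379 \
  --cacert /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt \
  --cert /var/lib/rancher/k3s/server/tls/etcd/client.crt \
  --key /var/lib/rancher/k3s/server/tls/etcd/client.key

The second server should show up as a learner with peer address https://10.23.1.3:2380, matching the warnings above.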
Jul 02 08:56:47 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:56:47.431+0200","caller":"embed/config_logging.go:270","msg":"rejected connection","remote-addr":"10.23.1.3:53078","server-name":"","ip-addresses":["127.0.0.1","192.168.178.3","192.168.178.3","10.43.0.1"],"dns-names":["localhost"],"error":"tls: \"10.23.1.3\" does not match any of DNSNames [\"localhost\"] (lookup 10.23.1.3: Name does not resolve)"}
Jul 02 08:56:47 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:56:47.391+0200","caller":"rafthttp/stream.go:406","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1ad553bddb402590","remote-peer-id":"4444bc45bc5e52f9"}
Jul 02 08:56:47 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:56:47.390+0200","caller":"rafthttp/stream.go:406","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1ad553bddb402590","remote-peer-id":"4444bc45bc5e52f9"}
Jul 02 08:56:47 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:56:47.390+0200","caller":"etcdserver/server.go:1967","msg":"applied a configuration change through raft","local-member-id":"1ad553bddb402590","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"4444bc45bc5e52f9"}
Jul 02 08:56:47 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:56:47.390+0200","caller":"rafthttp/transport.go:327","msg":"added remote peer","local-member-id":"1ad553bddb402590","remote-peer-id":"4444bc45bc5e52f9","remote-peer-urls":["https://10.23.1.3:2380"]}
Jul 02 08:56:47 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:56:47.390+0200","caller":"rafthttp/peer.go:134","msg":"started remote peer","remote-peer-id":"4444bc45bc5e52f9"}
Jul 02 08:56:47 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:56:47.389+0200","caller":"rafthttp/stream.go:166","msg":"started stream writer with remote peer","local-member-id":"1ad553bddb402590","remote-peer-id":"4444bc45bc5e52f9"}
Jul 02 08:56:47 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:56:47.389+0200","caller":"rafthttp/stream.go:166","msg":"started stream writer with remote peer","local-member-id":"1ad553bddb402590","remote-peer-id":"4444bc45bc5e52f9"}
Jul 02 08:56:47 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:56:47.388+0200","caller":"rafthttp/pipeline.go:71","msg":"started HTTP pipelining with remote peer","local-member-id":"1ad553bddb402590","remote-peer-id":"4444bc45bc5e52f9"}
Jul 02 08:56:47 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:56:47.388+0200","caller":"rafthttp/peer.go:128","msg":"starting remote peer","remote-peer-id":"4444bc45bc5e52f9"}
Jul 02 08:56:47 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:56:47.388+0200","caller":"membership/cluster.go:392","msg":"added member","cluster-id":"58482255ea06577","local-member-id":"1ad553bddb402590","added-peer-id":"4444bc45bc5e52f9","added-peer-peer-urls":["https://10.23.1.3:2380"]}
Jul 02 08:56:47 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:56:47.388+0200","caller":"raft/raft.go:1530","msg":"1ad553bddb402590 switched to configuration voters=(1933543689917834640) learners=(4919263700694487801)"}
Jul 02 08:56:47 k3s-server-1 k3s[11184]: time="2021-07-02T08:56:47.264004682+02:00" level=info msg="Cluster-Http-Server 2021/07/02 08:56:47 http: TLS handshake error from 10.23.1.7:22254: remote error: tls: bad certificate"
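The two TLS lines at 08:56:47 are the telling ones: the connection arriving from 10.23.1.3 (the joining server's private address) presents a certificate whose IP SANs are only 127.0.0.1, 192.168.178.3 (its public address) and 10.43.0.1, so the peer handshake is rejected. The SANs can be inspected on the joining server; a sketch, assuming the etcd peer certificate sits at its default k3s location:

# print the Subject Alternative Names of the etcd peer certificate (default k3s path)
openssl x509 -noout -text \
  -in /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt | grep -A1 'Subject Alternative Name'

If 10.23.1.3 does not appear in the output, the peer certificate was issued for the public interface only, which is exactly the mis-advertised address this issue is about.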
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.571456   11184 iptables.go:160] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.555408   11184 iptables.go:160] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.539447   11184 iptables.go:160] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.504201   11184 iptables.go:160] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.503542   11184 iptables.go:160] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.481417   11184 iptables.go:172] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.480132   11184 iptables.go:160] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.478830   11184 iptables.go:172] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.477451   11184 iptables.go:172] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.476086   11184 iptables.go:172] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.474637   11184 iptables.go:172] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.474600   11184 iptables.go:148] Some iptables rules are missing; deleting and recreating rules
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.473110   11184 iptables.go:172] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.473067   11184 iptables.go:148] Some iptables rules are missing; deleting and recreating rules
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.469478   11184 vxlan_network.go:59] watching for new subnet leases
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.469389   11184 flannel.go:82] Running backend.
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.469183   11184 flannel.go:78] Wrote subnet file to /run/flannel/subnet.env
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.416190   11184 vxlan.go:123] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.416067   11184 kube.go:123] Node controller sync successful
Jul 02 08:54:56 k3s-server-1 k3s[11184]: I0702 08:54:56.362052   11184 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Jul 02 08:54:56 k3s-server-1 k3s[11184]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Jul 02 08:54:56 k3s-server-1 k3s[11184]: E0702 08:54:56.362031   11184 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Jul 02 08:54:56 k3s-server-1 k3s[11184]: W0702 08:54:56.361930   11184 handler_proxy.go:102] no RequestInfo found in the context
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.879668   11184 network_policy_controller.go:145] Starting network policy controller full sync goroutine
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.804122   11184 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwghv\" (UniqueName: \"kubernetes.io/projected/79be3e24-9629-4f67-8ce2-8c02c69c9f18-kube-api-access-rwghv\") pod \"coredns-7448499f4d-z4ccb\" (UID: \"79be3e24-9629-4f67-8ce2-8c02c69c9f18\") "
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.803998   11184 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79be3e24-9629-4f67-8ce2-8c02c69c9f18-config-volume\") pod \"coredns-7448499f4d-z4ccb\" (UID: \"79be3e24-9629-4f67-8ce2-8c02c69c9f18\") "
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.803862   11184 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7g6m\" (UniqueName: \"kubernetes.io/projected/6f0c09c3-5275-4ef0-ac00-7f0ab706cf76-kube-api-access-t7g6m\") pod \"local-path-provisioner-5ff76fc89d-kvnr4\" (UID: \"6f0c09c3-5275-4ef0-ac00-7f0ab706cf76\") "
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.803573   11184 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f0c09c3-5275-4ef0-ac00-7f0ab706cf76-config-volume\") pod \"local-path-provisioner-5ff76fc89d-kvnr4\" (UID: \"6f0c09c3-5275-4ef0-ac00-7f0ab706cf76\") "
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.803524   11184 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l44j\" (UniqueName: \"kubernetes.io/projected/103c28c9-7e85-4575-bae9-20b9f7c05f65-kube-api-access-7l44j\") pod \"metrics-server-86cbb8457f-vtj56\" (UID: \"103c28c9-7e85-4575-bae9-20b9f7c05f65\") "
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.803403   11184 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/103c28c9-7e85-4575-bae9-20b9f7c05f65-tmp-dir\") pod \"metrics-server-86cbb8457f-vtj56\" (UID: \"103c28c9-7e85-4575-bae9-20b9f7c05f65\") "
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.676643   11184 topology_manager.go:187] "Topology Admit Handler"
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.676465   11184 topology_manager.go:187] "Topology Admit Handler"
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.675863   11184 topology_manager.go:187] "Topology Admit Handler"
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.672175   11184 controller.go:611] quota admission added evaluator for: events.events.k8s.io
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.628917   11184 shared_informer.go:247] Caches are synced for garbage collector
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.626214   11184 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.625669   11184 shared_informer.go:247] Caches are synced for garbage collector
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.620014   11184 event.go:291] "Event occurred" object="kube-system/local-path-provisioner-5ff76fc89d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: local-path-provisioner-5ff76fc89d-kvnr4"
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.619981   11184 event.go:291] "Event occurred" object="kube-system/metrics-server-86cbb8457f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-86cbb8457f-vtj56"
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.619897   11184 event.go:291] "Event occurred" object="kube-system/coredns-7448499f4d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-7448499f4d-z4ccb"
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.584420   11184 network_policy_controller.go:138] Starting network policy controller
Jul 02 08:54:55 k3s-server-1 systemd[1]: Started Lightweight Kubernetes.
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.476148   11184 event.go:291] "Event occurred" object="kube-system/local-path-provisioner" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set local-path-provisioner-5ff76fc89d to 1"
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.476132   11184 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-86cbb8457f to 1"
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.476090   11184 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-7448499f4d to 1"
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.474345   11184 event.go:291] "Event occurred" object="k3s-server-1" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.473804   11184 node_controller.go:454] Successfully initialized node k3s-server-1 with cloud provider
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.459911   11184 controller.go:611] quota admission added evaluator for: replicasets.apps
Jul 02 08:54:55 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:55.436938458+02:00" level=info msg="labels have been set successfully on node: k3s-server-1"
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.435302   11184 node_controller.go:390] Initializing node k3s-server-1 with cloud provider
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.415754   11184 kube.go:299] Starting kube subnet manager
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.415690   11184 kube.go:116] Waiting 10m0s for node controller to sync
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.410878   11184 flannel.go:105] Using interface with name eth0 and address 192.168.178.2
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.409973   11184 flannel.go:92] Determining IP address of default interface
Jul 02 08:54:55 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:55.407083216+02:00" level=info msg="Node CIDR assigned for: k3s-server-1"
Jul 02 08:54:55 k3s-server-1 k3s[11184]: E0702 08:54:55.255278   11184 node_controller.go:212] error syncing 'k3s-server-1': failed to get node modifiers from cloud provider: failed to find kubelet node IP from cloud provider, requeuing
Jul 02 08:54:55 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:55.255139624+02:00" level=info msg="Couldn't find node hostname annotation or label on node k3s-server-1"
Jul 02 08:54:55 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:55.254956664+02:00" level=info msg="Couldn't find node internal ip annotation or label on node k3s-server-1"
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.254642   11184 node_controller.go:390] Initializing node k3s-server-1 with cloud provider
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.169685   11184 shared_informer.go:247] Caches are synced for attach detach
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.112600   11184 shared_informer.go:247] Caches are synced for resource quota
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.110470   11184 shared_informer.go:247] Caches are synced for resource quota
Jul 02 08:54:55 k3s-server-1 k3s[11184]: E0702 08:54:55.005858   11184 node_controller.go:212] error syncing 'k3s-server-1': failed to get node modifiers from cloud provider: failed to find kubelet node IP from cloud provider, requeuing
Jul 02 08:54:55 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:55.005774678+02:00" level=info msg="Couldn't find node hostname annotation or label on node k3s-server-1"
Jul 02 08:54:55 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:55.005490021+02:00" level=info msg="Couldn't find node internal ip annotation or label on node k3s-server-1"
Jul 02 08:54:55 k3s-server-1 k3s[11184]: I0702 08:54:55.005388   11184 node_controller.go:390] Initializing node k3s-server-1 with cloud provider
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.993754   11184 shared_informer.go:247] Caches are synced for endpoint_slice
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.987194   11184 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.984146   11184 event.go:291] "Event occurred" object="k3s-server-1" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node k3s-server-1 event: Registered Node k3s-server-1 in Controller"
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.983878   11184 taint_manager.go:187] "Starting NoExecuteTaintManager"
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.983454   11184 node_lifecycle_controller.go:1214] Controller detected that zone  is now in state Normal.
Jul 02 08:54:54 k3s-server-1 k3s[11184]: W0702 08:54:54.983418   11184 node_lifecycle_controller.go:1013] Missing timestamp for Node k3s-server-1. Assuming now as a timestamp.
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.983362   11184 node_lifecycle_controller.go:1398] Initializing eviction metric for zone:
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.983290   11184 shared_informer.go:247] Caches are synced for taint
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.976030   11184 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.975156   11184 kuberuntime_manager.go:1044] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.956052   11184 shared_informer.go:247] Caches are synced for persistent volume
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.945209   11184 range_allocator.go:373] Set node k3s-server-1 PodCIDR to [10.42.0.0/24]
Jul 02 08:54:54 k3s-server-1 k3s[11184]: E0702 08:54:54.942780   11184 node_controller.go:212] error syncing 'k3s-server-1': failed to get node modifiers from cloud provider: failed to find kubelet node IP from cloud provider, requeuing
Jul 02 08:54:54 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:54.942407632+02:00" level=info msg="Couldn't find node hostname annotation or label on node k3s-server-1"
Jul 02 08:54:54 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:54.941942593+02:00" level=info msg="Couldn't find node internal ip annotation or label on node k3s-server-1"
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.941338   11184 node_controller.go:390] Initializing node k3s-server-1 with cloud provider
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.933070   11184 shared_informer.go:247] Caches are synced for ephemeral
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.930303   11184 shared_informer.go:247] Caches are synced for expand
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.928610   11184 shared_informer.go:240] Waiting for caches to sync for garbage collector
Jul 02 08:54:54 k3s-server-1 k3s[11184]: E0702 08:54:54.924209   11184 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Jul 02 08:54:54 k3s-server-1 k3s[11184]: E0702 08:54:54.922708   11184 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.918041   11184 shared_informer.go:247] Caches are synced for cidrallocator
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.918036   11184 shared_informer.go:240] Waiting for caches to sync for cidrallocator
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.918031   11184 range_allocator.go:172] Starting range CIDR allocator
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.917995   11184 shared_informer.go:247] Caches are synced for node
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.915522   11184 shared_informer.go:247] Caches are synced for deployment
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.914469   11184 shared_informer.go:247] Caches are synced for stateful set
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.905814   11184 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.898610   11184 shared_informer.go:247] Caches are synced for ReplicaSet
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.898128   11184 shared_informer.go:247] Caches are synced for PVC protection
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.893274   11184 shared_informer.go:247] Caches are synced for TTL after finished
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.892914   11184 shared_informer.go:247] Caches are synced for ReplicationController
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.892156   11184 shared_informer.go:247] Caches are synced for endpoint
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.890757   11184 shared_informer.go:247] Caches are synced for HPA
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.871570   11184 shared_informer.go:247] Caches are synced for crt configmap
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.870434   11184 shared_informer.go:247] Caches are synced for TTL
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.866125   11184 disruption.go:371] Sending events to api server.
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.865875   11184 shared_informer.go:247] Caches are synced for disruption
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.862731   11184 shared_informer.go:247] Caches are synced for service account
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.859726   11184 shared_informer.go:247] Caches are synced for PV protection
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.849278   11184 shared_informer.go:247] Caches are synced for certificate-csrapproving
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.848579   11184 shared_informer.go:247] Caches are synced for namespace
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.848007   11184 shared_informer.go:247] Caches are synced for daemon sets
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.845616   11184 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.845216   11184 shared_informer.go:247] Caches are synced for job
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.844492   11184 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.843681   11184 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.843588   11184 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.842756   11184 shared_informer.go:247] Caches are synced for cronjob
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.833577   11184 shared_informer.go:247] Caches are synced for GC
Jul 02 08:54:54 k3s-server-1 k3s[11184]: W0702 08:54:54.829400   11184 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k3s-server-1" does not exist
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.809951   11184 shared_informer.go:240] Waiting for caches to sync for resource quota
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.797624   11184 shared_informer.go:240] Waiting for caches to sync for PVC protection
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.797240   11184 pvc_protection_controller.go:110] "Starting PVC protection controller"
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.794217   11184 controllermanager.go:574] Started "pvc-protection"
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.643684   11184 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.643666   11184 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.643644   11184 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.643616   11184 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.643590   11184 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.643582   11184 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.643549   11184 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.643542   11184 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.643514   11184 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.643506   11184 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.643478   11184 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.643467   11184 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
Jul 02 08:54:54 k3s-server-1 k3s[11184]: W0702 08:54:54.643407   11184 controllermanager.go:553] "tokencleaner" is disabled
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.643362   11184 controllermanager.go:574] Started "csrsigning"
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.594781   11184 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.594561   11184 replica_set.go:182] Starting replicaset controller
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.594150   11184 controllermanager.go:574] Started "replicaset"
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.443890   11184 shared_informer.go:240] Waiting for caches to sync for daemon sets
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.443881   11184 daemon_controller.go:285] Starting daemon sets controller
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.443814   11184 controllermanager.go:574] Started "daemonset"
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.293236   11184 shared_informer.go:240] Waiting for caches to sync for TTL after finished
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.293227   11184 ttlafterfinished_controller.go:109] Starting TTL after finished controller
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.293120   11184 controllermanager.go:574] Started "ttl-after-finished"
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.142260   11184 shared_informer.go:240] Waiting for caches to sync for job
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.142248   11184 job_controller.go:150] Starting job controller
Jul 02 08:54:54 k3s-server-1 k3s[11184]: I0702 08:54:54.142150   11184 controllermanager.go:574] Started "job"
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.993494   11184 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.993256   11184 endpointslice_controller.go:256] Starting endpoint slice controller
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.992365   11184 controllermanager.go:574] Started "endpointslice"
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.847054   11184 cleaner.go:82] Starting CSR cleaner controller
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.840244   11184 controllermanager.go:574] Started "csrcleaner"
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.790711   11184 shared_informer.go:240] Waiting for caches to sync for HPA
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.790701   11184 horizontal.go:169] Starting HPA controller
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.790629   11184 controllermanager.go:574] Started "horizontalpodautoscaling"
Jul 02 08:54:53 k3s-server-1 k3s[11184]: E0702 08:54:53.528444   11184 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.512541   11184 resource_quota_monitor.go:304] QuotaMonitor running
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.512513   11184 shared_informer.go:240] Waiting for caches to sync for resource quota
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.512484   11184 resource_quota_controller.go:273] Starting resource quota controller
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.512110   11184 controllermanager.go:574] Started "resourcequota"
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.512069   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.512016   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511972   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511874   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511847   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511819   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511762   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511731   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511687   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511643   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511604   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511576   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.extensions
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511537   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511493   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511459   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511430   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511394   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511367   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511332   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511255   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts
Jul 02 08:54:53 k3s-server-1 k3s[11184]: W0702 08:54:53.511168   11184 shared_informer.go:494] resyncPeriod 16h19m26.833191915s is smaller than resyncCheckPeriod 22h39m26.212902295s and the informer has already started. Changing it to 22h39m26.212902295s
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511129   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for addons.k3s.cattle.io
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.511075   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for helmchartconfigs.helm.cattle.io
Jul 02 08:54:53 k3s-server-1 k3s[11184]: W0702 08:54:53.510705   11184 shared_informer.go:494] resyncPeriod 14h30m45.422650811s is smaller than resyncCheckPeriod 22h39m26.212902295s and the informer has already started. Changing it to 22h39m26.212902295s
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.510670   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.510623   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.510585   11184 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps
Jul 02 08:54:53 k3s-server-1 k3s[11184]: E0702 08:54:53.510473   11184 resource_quota_controller.go:162] initial discovery check failure, continuing and counting on future sync update: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Jul 02 08:54:53 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:53.393533992+02:00" level=info msg="Waiting for node k3s-server-1 CIDR not assigned yet"
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.192728   11184 shared_informer.go:240] Waiting for caches to sync for ReplicationController
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.192524   11184 replica_set.go:182] Starting replicationcontroller controller
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.192052   11184 controllermanager.go:574] Started "replicationcontroller"
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.091457   11184 shared_informer.go:240] Waiting for caches to sync for endpoint
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.091413   11184 endpoints_controller.go:189] Starting endpoint controller
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.091073   11184 controllermanager.go:574] Started "endpoint"
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.068828   11184 shared_informer.go:240] Waiting for caches to sync for attach detach
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.068819   11184 attach_detach_controller.go:328] Starting attach detach controller
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.068679   11184 controllermanager.go:574] Started "attachdetach"
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.060521   11184 shared_informer.go:240] Waiting for caches to sync for service account
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.060355   11184 serviceaccounts_controller.go:117] Starting service account controller
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.059929   11184 controllermanager.go:574] Started "serviceaccount"
Jul 02 08:54:53 k3s-server-1 k3s[11184]: W0702 08:54:53.049097   11184 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.041665   11184 shared_informer.go:240] Waiting for caches to sync for cronjob
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.041652   11184 cronjob_controllerv2.go:125] Starting cronjob controller v2
Jul 02 08:54:53 k3s-server-1 k3s[11184]: W0702 08:54:53.041507   11184 controllermanager.go:553] "route" is disabled
Jul 02 08:54:53 k3s-server-1 k3s[11184]: W0702 08:54:53.041500   11184 controllermanager.go:553] "bootstrapsigner" is disabled
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.041455   11184 controllermanager.go:574] Started "cronjob"
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.014059   11184 shared_informer.go:240] Waiting for caches to sync for stateful set
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.013775   11184 stateful_set.go:146] Starting stateful set controller
Jul 02 08:54:53 k3s-server-1 k3s[11184]: I0702 08:54:53.013208   11184 controllermanager.go:574] Started "statefulset"
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.993891   11184 graph_builder.go:289] GraphBuilder running
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.993870   11184 shared_informer.go:240] Waiting for caches to sync for garbage collector
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.993856   11184 garbagecollector.go:142] Starting garbage collector controller
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.993579   11184 controllermanager.go:574] Started "garbagecollector"
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.969339   11184 shared_informer.go:240] Waiting for caches to sync for crt configmap
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.969033   11184 publisher.go:102] Starting root CA certificate configmap publisher
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.967986   11184 controllermanager.go:574] Started "root-ca-cert-publisher"
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.951554   11184 shared_informer.go:240] Waiting for caches to sync for persistent volume
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.951500   11184 pv_controller_base.go:308] Starting persistent volume controller
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.950746   11184 controllermanager.go:574] Started "persistentvolume-binder"
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.930814   11184 shared_informer.go:240] Waiting for caches to sync for ephemeral
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.930799   11184 controller.go:170] Starting ephemeral volume controller
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.930562   11184 controllermanager.go:574] Started "ephemeral-volume"
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.917922   11184 shared_informer.go:240] Waiting for caches to sync for node
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.917909   11184 node_ipam_controller.go:154] Starting ipam controller
Jul 02 08:54:52 k3s-server-1 k3s[11184]: W0702 08:54:52.917854   11184 controllermanager.go:553] "service" is disabled
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.917836   11184 controllermanager.go:574] Started "nodeipam"
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.917762   11184 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.917713   11184 range_allocator.go:110] No Service CIDR provided. Skipping filtering out service addresses.
Jul 02 08:54:52 k3s-server-1 k3s[11184]: I0702 08:54:52.916443   11184 range_allocator.go:82] Sending events to api server.
Jul 02 08:54:51 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:51.594299295+02:00" level=info msg="Updated coredns node hosts entry [10.23.1.2 k3s-server-1]"
Jul 02 08:54:51 k3s-server-1 k3s[11184]: E0702 08:54:51.588649   11184 node_controller.go:212] error syncing 'k3s-server-1': failed to get node modifiers from cloud provider: failed to find kubelet node IP from cloud provider, requeuing
Jul 02 08:54:51 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:51.588503908+02:00" level=info msg="Couldn't find node hostname annotation or label on node k3s-server-1"
Jul 02 08:54:51 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:51.588324898+02:00" level=info msg="Couldn't find node internal ip annotation or label on node k3s-server-1"
Jul 02 08:54:51 k3s-server-1 k3s[11184]: I0702 08:54:51.588048   11184 node_controller.go:390] Initializing node k3s-server-1 with cloud provider
Jul 02 08:54:51 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:51.388106523+02:00" level=info msg="Waiting for node k3s-server-1 CIDR not assigned yet"
Jul 02 08:54:50 k3s-server-1 k3s[11184]: E0702 08:54:50.798711   11184 node_controller.go:212] error syncing 'k3s-server-1': failed to get node modifiers from cloud provider: failed to find kubelet node IP from cloud provider, requeuing
Jul 02 08:54:50 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:50.798668883+02:00" level=info msg="Couldn't find node hostname annotation or label on node k3s-server-1"
Jul 02 08:54:50 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:50.798643016+02:00" level=info msg="Couldn't find node internal ip annotation or label on node k3s-server-1"
Jul 02 08:54:50 k3s-server-1 k3s[11184]: I0702 08:54:50.798559   11184 node_controller.go:390] Initializing node k3s-server-1 with cloud provider
Jul 02 08:54:49 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:49.371175459+02:00" level=info msg="Waiting for node k3s-server-1 CIDR not assigned yet"
Jul 02 08:54:48 k3s-server-1 k3s[11184]: E0702 08:54:48.238073   11184 node_controller.go:212] error syncing 'k3s-server-1': failed to get node modifiers from cloud provider: failed to find kubelet node IP from cloud provider, requeuing
Jul 02 08:54:48 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:48.237849678+02:00" level=info msg="Couldn't find node hostname annotation or label on node k3s-server-1"
Jul 02 08:54:48 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:48.237636450+02:00" level=info msg="Couldn't find node internal ip annotation or label on node k3s-server-1"
Jul 02 08:54:48 k3s-server-1 k3s[11184]: I0702 08:54:48.236918   11184 node_controller.go:390] Initializing node k3s-server-1 with cloud provider
Jul 02 08:54:47 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:47.361763106+02:00" level=info msg="Waiting for node k3s-server-1 CIDR not assigned yet"
Jul 02 08:54:46 k3s-server-1 k3s[11184]: E0702 08:54:46.956322   11184 node_controller.go:212] error syncing 'k3s-server-1': failed to get node modifiers from cloud provider: failed to find kubelet node IP from cloud provider, requeuing
Jul 02 08:54:46 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:46.956279254+02:00" level=info msg="Couldn't find node hostname annotation or label on node k3s-server-1"
Jul 02 08:54:46 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:46.956253618+02:00" level=info msg="Couldn't find node internal ip annotation or label on node k3s-server-1"
Jul 02 08:54:46 k3s-server-1 k3s[11184]: I0702 08:54:46.956161   11184 node_controller.go:390] Initializing node k3s-server-1 with cloud provider
Jul 02 08:54:46 k3s-server-1 k3s[11184]: E0702 08:54:46.315510   11184 node_controller.go:212] error syncing 'k3s-server-1': failed to get node modifiers from cloud provider: failed to find kubelet node IP from cloud provider, requeuing
Jul 02 08:54:46 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:46.315437636+02:00" level=info msg="Couldn't find node hostname annotation or label on node k3s-server-1"
Jul 02 08:54:46 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:46.315393779+02:00" level=info msg="Couldn't find node internal ip annotation or label on node k3s-server-1"
Jul 02 08:54:46 k3s-server-1 k3s[11184]: I0702 08:54:46.315280   11184 node_controller.go:390] Initializing node k3s-server-1 with cloud provider
Jul 02 08:54:45 k3s-server-1 k3s[11184]: E0702 08:54:45.994129   11184 node_controller.go:212] error syncing 'k3s-server-1': failed to get node modifiers from cloud provider: failed to find kubelet node IP from cloud provider, requeuing
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.994083862+02:00" level=info msg="Couldn't find node hostname annotation or label on node k3s-server-1"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.994053280+02:00" level=info msg="Couldn't find node internal ip annotation or label on node k3s-server-1"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: I0702 08:54:45.993973   11184 node_controller.go:390] Initializing node k3s-server-1 with cloud provider
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.880172625+02:00" level=info msg="Handling backend connection request [k3s-server-1]"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.877209576+02:00" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.876990995+02:00" level=info msg="Proxy done" err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.876585995+02:00" level=info msg="Connecting to proxy" url="wss://192.168.178.2:6443/v1-k3s/connect"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.876507918+02:00" level=info msg="Stopped tunnel to 127.0.0.1:6443"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: E0702 08:54:45.833470   11184 node_controller.go:212] error syncing 'k3s-server-1': failed to get node modifiers from cloud provider: failed to find kubelet node IP from cloud provider, requeuing
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.833388071+02:00" level=info msg="Couldn't find node hostname annotation or label on node k3s-server-1"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.833327445+02:00" level=info msg="Couldn't find node internal ip annotation or label on node k3s-server-1"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: I0702 08:54:45.833187   11184 node_controller.go:390] Initializing node k3s-server-1 with cloud provider
Jul 02 08:54:45 k3s-server-1 k3s[11184]: E0702 08:54:45.752508   11184 node_controller.go:212] error syncing 'k3s-server-1': failed to get node modifiers from cloud provider: failed to find kubelet node IP from cloud provider, requeuing
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.752425045+02:00" level=info msg="Couldn't find node hostname annotation or label on node k3s-server-1"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.752385517+02:00" level=info msg="Couldn't find node internal ip annotation or label on node k3s-server-1"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: I0702 08:54:45.752272   11184 node_controller.go:390] Initializing node k3s-server-1 with cloud provider
Jul 02 08:54:45 k3s-server-1 k3s[11184]: E0702 08:54:45.711105   11184 node_controller.go:212] error syncing 'k3s-server-1': failed to get node modifiers from cloud provider: failed to find kubelet node IP from cloud provider, requeuing
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.711066932+02:00" level=info msg="Couldn't find node hostname annotation or label on node k3s-server-1"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.711045831+02:00" level=info msg="Couldn't find node internal ip annotation or label on node k3s-server-1"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: I0702 08:54:45.710946   11184 node_controller.go:390] Initializing node k3s-server-1 with cloud provider
Jul 02 08:54:45 k3s-server-1 k3s[11184]: E0702 08:54:45.690618   11184 node_controller.go:212] error syncing 'k3s-server-1': failed to get node modifiers from cloud provider: failed to find kubelet node IP from cloud provider, requeuing
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.690567115+02:00" level=info msg="Couldn't find node hostname annotation or label on node k3s-server-1"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.690539507+02:00" level=info msg="Couldn't find node internal ip annotation or label on node k3s-server-1"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: I0702 08:54:45.690457   11184 node_controller.go:390] Initializing node k3s-server-1 with cloud provider
Jul 02 08:54:45 k3s-server-1 k3s[11184]: E0702 08:54:45.679905   11184 node_controller.go:212] error syncing 'k3s-server-1': failed to get node modifiers from cloud provider: failed to find kubelet node IP from cloud provider, requeuing
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.679760620+02:00" level=info msg="Couldn't find node hostname annotation or label on node k3s-server-1"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.679581539+02:00" level=info msg="Couldn't find node internal ip annotation or label on node k3s-server-1"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: I0702 08:54:45.679317   11184 node_controller.go:390] Initializing node k3s-server-1 with cloud provider
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.677103515+02:00" level=info msg="Couldn't find node hostname annotation or label on node k3s-server-1"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.677055944+02:00" level=info msg="Couldn't find node internal ip annotation or label on node k3s-server-1"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: E0702 08:54:45.673387   11184 node_controller.go:212] error syncing 'k3s-server-1': failed to get node modifiers from cloud provider: failed to find kubelet node IP from cloud provider, requeuing
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.673195221+02:00" level=info msg="Couldn't find node hostname annotation or label on node k3s-server-1"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.672955601+02:00" level=info msg="Couldn't find node internal ip annotation or label on node k3s-server-1"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: I0702 08:54:45.672537   11184 node_controller.go:390] Initializing node k3s-server-1 with cloud provider
Jul 02 08:54:45 k3s-server-1 k3s[11184]: I0702 08:54:45.575070   11184 controllermanager.go:285] Started "cloud-node-lifecycle"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: I0702 08:54:45.574988   11184 node_lifecycle_controller.go:76] Sending events to api server
Jul 02 08:54:45 k3s-server-1 k3s[11184]: I0702 08:54:45.572185   11184 node_controller.go:154] Waiting for informer caches to sync
Jul 02 08:54:45 k3s-server-1 k3s[11184]: I0702 08:54:45.571896   11184 controllermanager.go:285] Started "cloud-node"
Jul 02 08:54:45 k3s-server-1 k3s[11184]: I0702 08:54:45.571727   11184 node_controller.go:115] Sending events to api server.
Jul 02 08:54:45 k3s-server-1 k3s[11184]: E0702 08:54:45.567827   11184 controllermanager.go:418] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Jul 02 08:54:45 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:45.350781014+02:00" level=info msg="Waiting for node k3s-server-1 CIDR not assigned yet"
Jul 02 08:54:44 k3s-server-1 k3s[11184]: I0702 08:54:44.047454   11184 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Jul 02 08:54:44 k3s-server-1 k3s[11184]: E0702 08:54:44.047439   11184 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Jul 02 08:54:44 k3s-server-1 k3s[11184]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Jul 02 08:54:44 k3s-server-1 k3s[11184]: W0702 08:54:44.047365   11184 handler_proxy.go:102] no RequestInfo found in the context
Jul 02 08:54:43 k3s-server-1 k3s[11184]: I0702 08:54:43.801773   11184 shared_informer.go:247] Caches are synced for service config
Jul 02 08:54:43 k3s-server-1 k3s[11184]: I0702 08:54:43.801695   11184 shared_informer.go:247] Caches are synced for endpoint slice config
Jul 02 08:54:43 k3s-server-1 k3s[11184]: I0702 08:54:43.701394   11184 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
Jul 02 08:54:43 k3s-server-1 k3s[11184]: I0702 08:54:43.701380   11184 config.go:224] Starting endpoint slice config controller
Jul 02 08:54:43 k3s-server-1 k3s[11184]: I0702 08:54:43.701342   11184 shared_informer.go:240] Waiting for caches to sync for service config
Jul 02 08:54:43 k3s-server-1 k3s[11184]: I0702 08:54:43.701288   11184 config.go:315] Starting service config controller
Jul 02 08:54:43 k3s-server-1 k3s[11184]: I0702 08:54:43.682312   11184 server.go:643] Version: v1.21.2+k3s1
Jul 02 08:54:43 k3s-server-1 k3s[11184]: W0702 08:54:43.681579   11184 server_others.go:519] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
Jul 02 08:54:43 k3s-server-1 k3s[11184]: I0702 08:54:43.681535   11184 server_others.go:220] creating dualStackProxier for iptables.
Jul 02 08:54:43 k3s-server-1 k3s[11184]: I0702 08:54:43.681502   11184 server_others.go:213] Using iptables Proxier.
Jul 02 08:54:43 k3s-server-1 k3s[11184]: I0702 08:54:43.681432   11184 server_others.go:207] kube-proxy running in dual-stack mode, IPv4-primary
Jul 02 08:54:43 k3s-server-1 k3s[11184]: I0702 08:54:43.676543   11184 server_others.go:141] Detected node IP 10.23.1.2
Jul 02 08:54:43 k3s-server-1 k3s[11184]: I0702 08:54:43.676488   11184 node.go:172] Successfully retrieved node IP: 10.23.1.2
Jul 02 08:54:43 k3s-server-1 k3s[11184]: I0702 08:54:43.399050   11184 reconciler.go:157] "Reconciler: start to sync state"
Jul 02 08:54:43 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:43.340553328+02:00" level=info msg="Waiting for node k3s-server-1 CIDR not assigned yet"
Jul 02 08:54:43 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:43.332409764+02:00" level=info msg="Active TLS secret k3s-serving (ver=338) (count 11): map[listener.cattle.io/cn-10.23.1.2:10.23.1.2 listener.cattle.io/cn-10.23.1.7:10.23.1.7 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.178.4:192.168.178.4 listener.cattle.io/cn-192.168.178.2:192.168.178.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=B72DFB953725D569AA882A37C9EA3B71B92B8463]"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:42.988554151+02:00" level=info msg="Starting /v1, Kind=Secret controller"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:42.988381791+02:00" level=info msg="Starting /v1, Kind=Endpoints controller"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:42.988364442+02:00" level=info msg="Starting /v1, Kind=Pod controller"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:42.988258101+02:00" level=info msg="Starting /v1, Kind=Service controller"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.885633   11184 node_ipam_controller.go:91] Sending events to api server.
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.883160   11184 shared_informer.go:240] Waiting for caches to sync for taint
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.883025   11184 node_lifecycle_controller.go:539] Starting node controller
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.882676   11184 controllermanager.go:574] Started "nodelifecycle"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.882493   11184 node_lifecycle_controller.go:505] Controller will reconcile labels.
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.882271   11184 taint_manager.go:163] "Sending events to api server"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.881789   11184 node_lifecycle_controller.go:377] Sending events to api server.
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.870144   11184 shared_informer.go:240] Waiting for caches to sync for TTL
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.869972   11184 ttl_controller.go:121] Starting TTL controller
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.869522   11184 controllermanager.go:574] Started "ttl"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.844722   11184 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.844504   11184 certificate_controller.go:118] Starting certificate controller "csrapproving"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.843985   11184 controllermanager.go:574] Started "csrapproving"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.815007   11184 shared_informer.go:240] Waiting for caches to sync for deployment
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.814665   11184 deployment_controller.go:153] "Starting controller" controller="deployment"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.814015   11184 controllermanager.go:574] Started "deployment"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.784516   11184 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.784361   11184 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.783940   11184 controllermanager.go:574] Started "endpointslicemirroring"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.758562   11184 shared_informer.go:240] Waiting for caches to sync for PV protection
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.758227   11184 pv_protection_controller.go:83] Starting PV protection controller
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.757656   11184 controllermanager.go:574] Started "pv-protection"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.730143   11184 shared_informer.go:240] Waiting for caches to sync for expand
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.730134   11184 expand_controller.go:327] Starting expand controller
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.729968   11184 controllermanager.go:574] Started "persistentvolume-expander"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.704884   11184 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.704876   11184 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.704743   11184 controllermanager.go:574] Started "clusterrole-aggregation"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.665336   11184 shared_informer.go:240] Waiting for caches to sync for disruption
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.665172   11184 disruption.go:363] Starting disruption controller
Jul 02 08:54:42 k3s-server-1 k3s[11184]: W0702 08:54:42.664848   11184 controllermanager.go:553] "cloud-node-lifecycle" is disabled
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.664496   11184 controllermanager.go:574] Started "disruption"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.643169   11184 event.go:291] "Event occurred" object="kube-system/cloud-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="k3s-server-1_dd46cb66-5a31-4420-b0cf-b69d80ec76a1 became leader"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.640254   11184 leaderelection.go:253] successfully acquired lease kube-system/cloud-controller-manager
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.586441   11184 leaderelection.go:243] attempting to acquire leader lease kube-system/cloud-controller-manager...
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.585123   11184 controllermanager.go:142] Version: v1.21.2+k3s1
Jul 02 08:54:42 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:42.579589688+02:00" level=info msg="Running cloud-controller-manager with args --profiling=false"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.548247   11184 shared_informer.go:240] Waiting for caches to sync for namespace
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.548113   11184 namespace_controller.go:200] Starting namespace controller
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.547836   11184 controllermanager.go:574] Started "namespace"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: E0702 08:54:42.547415   11184 namespaced_resources_deleter.go:161] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server could not find the requested resource
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.503265   11184 shared_informer.go:247] Caches are synced for tokens
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.432459   11184 shared_informer.go:240] Waiting for caches to sync for GC
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.432288   11184 gc_controller.go:89] Starting GC controller
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.431760   11184 controllermanager.go:574] Started "podgc"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:42.387767182+02:00" level=info msg="Starting batch/v1, Kind=Job controller"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:42.387751483+02:00" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:42.387690191+02:00" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.383863   11184 shared_informer.go:240] Waiting for caches to sync for tokens
Jul 02 08:54:42 k3s-server-1 k3s[11184]: E0702 08:54:42.106368   11184 kubelet.go:1870] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.106132   11184 kubelet.go:1846] "Starting kubelet main sync loop"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.105932   11184 status_manager.go:157] "Starting to sync pod status with apiserver"
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.105540   11184 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv6
Jul 02 08:54:42 k3s-server-1 k3s[11184]: I0702 08:54:42.034038   11184 apiserver.go:52] "Watching apiserver"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.928232   11184 controller.go:611] quota admission added evaluator for: deployments.apps
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.872292   11184 controller.go:611] quota admission added evaluator for: serviceaccounts
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.849216   11184 controller.go:611] quota admission added evaluator for: addons.k3s.cattle.io
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.773358427+02:00" level=warning msg="Unable to fetch coredns config map: configmap \"coredns\" not found"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.766236   11184 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.685952658+02:00" level=warning msg="Unable to fetch coredns config map: configmap \"coredns\" not found"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.685724950+02:00" level=info msg="Cluster dns configmap has been set successfully"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.684098   11184 leaderelection.go:253] successfully acquired lease kube-system/k3s
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.630555684+02:00" level=info msg="Starting /v1, Kind=Node controller"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.629561361+02:00" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.628980   11184 leaderelection.go:243] attempting to acquire leader lease kube-system/k3s...
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.413953   11184 plugin_manager.go:114] "Starting Kubelet Plugin Manager"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.385005   11184 manager.go:600] "Failed to retrieve checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.381480   11184 request.go:668] Waited for 1.055259463s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:6444/apis/extensions/v1beta1?timeout=32s
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.351687   11184 policy_none.go:44] "None policy: Start"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.346278   11184 state_mem.go:36] "Initialized new in-memory state store"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.346107   11184 cpu_manager.go:200] "Reconciling" reconcilePeriod="10s"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.345821   11184 cpu_manager.go:199] "Starting CPU manager" policy="none"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.320192340+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.319978347+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.319717870+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.319474507+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.319101   11184 kubelet_node_status.go:74] "Successfully registered node" node="k3s-server-1"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.317440142+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.317165386+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.316957488+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.316728461+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.316510579+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.316296386+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.316039645+02:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.315745263+02:00" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-crd-9.18.2.tgz"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.315254556+02:00" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-9.18.2.tgz"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:41.306900610+02:00" level=info msg="Done waiting for CRD helmchartconfigs.helm.cattle.io to become available"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: E0702 08:54:41.291545   11184 kubelet.go:2291] "Error getting node" err="node \"k3s-server-1\" not found"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.187684   11184 kubelet_node_status.go:71] "Attempting to register node" node="k3s-server-1"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: E0702 08:54:41.174281   11184 kubelet.go:2291] "Error getting node" err="node \"k3s-server-1\" not found"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: E0702 08:54:41.095027   11184 kubelet.go:1306] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: E0702 08:54:41.094678   11184 cri_stats_provider.go:369] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.075534   11184 desired_state_of_world_populator.go:141] "Desired state populator starts to run"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.073944   11184 volume_manager.go:271] "Starting Kubelet Volume Manager"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.072042   11184 server.go:405] "Adding debug handlers to kubelet server"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.071160   11184 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.065386   11184 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.048909   11184 server.go:1191] "Started kubelet"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: W0702 08:54:41.048122   11184 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.047143   11184 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="containerd" version="v1.4.4-k3s2" apiVersion="v1alpha2"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.018209   11184 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.018068   11184 kubelet.go:283] "Adding apiserver pod source"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.017942   11184 kubelet.go:272] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.017797   11184 kubelet.go:404] "Attempting to sync node with API server"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.017445   11184 container_manager_linux.go:332] "Creating device plugin manager" devicePluginEnabled=true
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.017339   11184 container_manager_linux.go:327] "Initializing Topology Manager" policy="none" scope="container"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.017213   11184 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.017024   11184 container_manager_linux.go:296] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.016814   11184 container_manager_linux.go:291] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 02 08:54:41 k3s-server-1 k3s[11184]: I0702 08:54:41.016295   11184 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jul 02 08:54:40 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:40.872984329+02:00" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Jul 02 08:54:40 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:40.643896450+02:00" level=info msg="Waiting for CRD helmchartconfigs.helm.cattle.io to become available"
Jul 02 08:54:40 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:40.643844328+02:00" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available"
Jul 02 08:54:40 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:40.133316571+02:00" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available"
Jul 02 08:54:40 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:40.132547444+02:00" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available"
Jul 02 08:54:40 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:40.037113022+02:00" level=info msg="Waiting for node k3s-server-1: nodes \"k3s-server-1\" not found"
Jul 02 08:54:39 k3s-server-1 k3s[11184]: I0702 08:54:39.676452   11184 leaderelection.go:253] successfully acquired lease kube-system/kube-scheduler
Jul 02 08:54:39 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:39.627965009+02:00" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available"
Jul 02 08:54:39 k3s-server-1 k3s[11184]: I0702 08:54:39.620628   11184 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-scheduler...
Jul 02 08:54:39 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:39.569331621+02:00" level=info msg="Creating CRD helmchartconfigs.helm.cattle.io"
Jul 02 08:54:39 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:39.558357899+02:00" level=info msg="Creating CRD helmcharts.helm.cattle.io"
Jul 02 08:54:39 k3s-server-1 k3s[11184]: I0702 08:54:39.557885   11184 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="k3s-server-1_319047d6-a434-4d5e-b6c9-d09f350aca0a became leader"
Jul 02 08:54:39 k3s-server-1 k3s[11184]: I0702 08:54:39.555614   11184 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager
Jul 02 08:54:39 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:39.547068475+02:00" level=info msg="Creating CRD addons.k3s.cattle.io"
Jul 02 08:54:39 k3s-server-1 k3s[11184]: I0702 08:54:39.542080   11184 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
Jul 02 08:54:39 k3s-server-1 k3s[11184]: I0702 08:54:39.520510   11184 deprecated_insecure_serving.go:51] Serving healthz insecurely on 127.0.0.1:10251
Jul 02 08:54:39 k3s-server-1 k3s[11184]: W0702 08:54:39.520498   11184 authentication.go:47] Authentication is disabled
Jul 02 08:54:39 k3s-server-1 k3s[11184]: W0702 08:54:39.520457   11184 authorization.go:47] Authorization is disabled
Jul 02 08:54:39 k3s-server-1 k3s[11184]: I0702 08:54:39.497634   11184 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager...
Jul 02 08:54:39 k3s-server-1 k3s[11184]: I0702 08:54:39.495859   11184 deprecated_insecure_serving.go:53] Serving insecurely on 127.0.0.1:10252
Jul 02 08:54:39 k3s-server-1 k3s[11184]: I0702 08:54:39.495351   11184 controllermanager.go:175] Version: v1.21.2+k3s1
Jul 02 08:54:39 k3s-server-1 k3s[11184]: Flag --address has been deprecated, see --bind-address instead.
Jul 02 08:54:39 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:39.491188828+02:00" level=info msg="k3s is up and running"
Jul 02 08:54:39 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:39.491118873+02:00" level=info msg="Kube API server is now running"
Jul 02 08:54:39 k3s-server-1 k3s[11184]: E0702 08:54:39.226938   11184 node.go:161] Failed to retrieve node info: nodes "k3s-server-1" not found
Jul 02 08:54:39 k3s-server-1 k3s[11184]: I0702 08:54:39.188146   11184 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
Jul 02 08:54:39 k3s-server-1 k3s[11184]: I0702 08:54:39.183560   11184 controller.go:611] quota admission added evaluator for: endpoints
Jul 02 08:54:39 k3s-server-1 k3s[11184]: W0702 08:54:39.181973   11184 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.178.2]
Jul 02 08:54:39 k3s-server-1 k3s[11184]: I0702 08:54:39.052886   11184 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
Jul 02 08:54:39 k3s-server-1 k3s[11184]: I0702 08:54:39.011418   11184 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
Jul 02 08:54:38 k3s-server-1 k3s[11184]: I0702 08:54:38.446647   11184 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
Jul 02 08:54:38 k3s-server-1 k3s[11184]: I0702 08:54:38.446330   11184 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
Jul 02 08:54:38 k3s-server-1 k3s[11184]: I0702 08:54:38.439980   11184 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
Jul 02 08:54:38 k3s-server-1 k3s[11184]: I0702 08:54:38.410505   11184 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Jul 02 08:54:38 k3s-server-1 k3s[11184]: I0702 08:54:38.408851   11184 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Jul 02 08:54:37 k3s-server-1 k3s[11184]: E0702 08:54:37.587554   11184 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.178.2, ResourceVersion: 0, AdditionalErrorMsg:
Jul 02 08:54:37 k3s-server-1 k3s[11184]: E0702 08:54:37.539657   11184 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.43.0.1"}: failed to allocated ip:10.43.0.1 with error:cannot allocate resources of type serviceipallocations at this time
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.535334   11184 shared_informer.go:247] Caches are synced for crd-autoregister
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.526340   11184 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.517506   11184 shared_informer.go:247] Caches are synced for node_authorizer
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.512924   11184 cache.go:39] Caches are synced for AvailableConditionController controller
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.512678   11184 cache.go:39] Caches are synced for autoregister controller
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.511661   11184 apf_controller.go:299] Running API Priority and Fairness config worker
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.510525   11184 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
Jul 02 08:54:37 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:37.481105077+02:00" level=info msg="Waiting for cloudcontroller rbac role to be created"
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.474415   11184 controller.go:611] quota admission added evaluator for: namespaces
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.443298   11184 crd_finalizer.go:266] Starting CRDFinalizer
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.443179   11184 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.443057   11184 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.442908   11184 establishing_controller.go:76] Starting EstablishingController
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.442751   11184 naming_controller.go:291] Starting NamingConditionController
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.442442   11184 controller.go:86] Starting OpenAPI controller
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.435192   11184 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.434836   11184 crdregistration_controller.go:111] Starting crd-autoregister controller
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.426695   11184 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.426324   11184 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.426142   11184 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.425804   11184 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.425690   11184 apiservice_controller.go:97] Starting APIServiceRegistrationController
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.425521   11184 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.425249   11184 customresource_discovery_controller.go:209] Starting DiscoveryController
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.412790   11184 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.412679   11184 available_controller.go:475] Starting AvailableConditionController
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.412552   11184 cache.go:32] Waiting for caches to sync for autoregister controller
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.412407   11184 autoregister_controller.go:141] Starting autoregister controller
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.411898   11184 controller.go:83] Starting OpenAPI AggregationController
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.411497   11184 apf_controller.go:294] Starting API Priority and Fairness config controller
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.410826   11184 tlsconfig.go:240] Starting DynamicServingCertificateController
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.410674   11184 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.410547   11184 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.410382   11184 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.410197   11184 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
Jul 02 08:54:37 k3s-server-1 k3s[11184]: I0702 08:54:37.409399   11184 secure_serving.go:197] Serving securely on 127.0.0.1:6444
Jul 02 08:54:37 k3s-server-1 k3s[11184]: E0702 08:54:37.126360   11184 node.go:161] Failed to retrieve node info: nodes "k3s-server-1" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Jul 02 08:54:36 k3s-server-1 k3s[11184]: E0702 08:54:36.039490   11184 node.go:161] Failed to retrieve node info: nodes "k3s-server-1" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Jul 02 08:54:36 k3s-server-1 k3s[11184]: I0702 08:54:36.002366   11184 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
Jul 02 08:54:35 k3s-server-1 k3s[11184]: W0702 08:54:35.929271   11184 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Jul 02 08:54:35 k3s-server-1 k3s[11184]: I0702 08:54:35.920780   11184 server.go:436] "Kubelet version" kubeletVersion="v1.21.2+k3s1"
Jul 02 08:54:35 k3s-server-1 k3s[11184]: Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Jul 02 08:54:35 k3s-server-1 k3s[11184]: Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Jul 02 08:54:35 k3s-server-1 k3s[11184]: Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Jul 02 08:54:35 k3s-server-1 k3s[11184]: Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Jul 02 08:54:35 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:35.865150486+02:00" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k3s-server-1 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Jul 02 08:54:35 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:35.863292733+02:00" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/57d64d4b123cea8e276484f00ab3dfa7178a00a35368aa6b43df3e3bd8ce032d/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3s-server-1 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-ip=10.23.1.2 --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Jul 02 08:54:35 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:35.861595547+02:00" level=info msg="Handling backend connection request [k3s-server-1]"
Jul 02 08:54:35 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:35.854901612+02:00" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
Jul 02 08:54:35 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:35.823078995+02:00" level=info msg="Containerd is now running"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.793374147+02:00" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:54:34.793+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {/run/k3s/containerd/containerd.sock   0 }. Err :connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\". Reconnecting..."}
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.793152700+02:00" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.789614766+02:00" level=info msg="Set sysctl 'net/ipv4/conf/all/forwarding' to 1"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.789580811+02:00" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.789541637+02:00" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.789505364+02:00" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_max' to 131072"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.789392708+02:00" level=info msg="Set sysctl 'net/ipv4/conf/default/forwarding' to 1"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.744970532+02:00" level=info msg="Module br_netfilter was already loaded"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.718831917+02:00" level=info msg="Module overlay was already loaded"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.589238110+02:00" level=info msg="certificate CN=system:node:k3s-server-1,O=system:nodes signed by CN=k3s-client-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:54:34 +0000 UTC"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.586695208+02:00" level=info msg="certificate CN=k3s-server-1 signed by CN=k3s-server-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:54:34 +0000 UTC"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.578433702+02:00" level=info msg="Cluster-Http-Server 2021/07/02 08:54:34 http: TLS handshake error from 127.0.0.1:50960: remote error: tls: bad certificate"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.574860756+02:00" level=info msg="Cluster-Http-Server 2021/07/02 08:54:34 http: TLS handshake error from 127.0.0.1:50948: remote error: tls: bad certificate"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.355173318+02:00" level=info msg="Waiting for API server to become available"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.353371556+02:00" level=info msg="Run: k3s kubectl"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.353332028+02:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.352166530+02:00" level=info msg="To join node to cluster: k3s agent -s https://192.168.178.2:6443 -t ${NODE_TOKEN}"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.352103514+02:00" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.350338627+02:00" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:34.349636623+02:00" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --port=10251 --profiling=false --secure-port=0"
Jul 02 08:54:34 k3s-server-1 k3s[11184]: I0702 08:54:34.324229   11184 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Jul 02 08:54:34 k3s-server-1 k3s[11184]: I0702 08:54:34.324189   11184 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Jul 02 08:54:34 k3s-server-1 k3s[11184]: W0702 08:54:34.308936   11184 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
Jul 02 08:54:34 k3s-server-1 k3s[11184]: W0702 08:54:34.308648   11184 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
Jul 02 08:54:34 k3s-server-1 k3s[11184]: W0702 08:54:34.298439   11184 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
Jul 02 08:54:34 k3s-server-1 k3s[11184]: W0702 08:54:34.293603   11184 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Jul 02 08:54:34 k3s-server-1 k3s[11184]: W0702 08:54:34.282745   11184 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Jul 02 08:54:34 k3s-server-1 k3s[11184]: W0702 08:54:34.275923   11184 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Jul 02 08:54:34 k3s-server-1 k3s[11184]: W0702 08:54:34.257609   11184 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Jul 02 08:54:33 k3s-server-1 k3s[11184]: I0702 08:54:33.645834   11184 rest.go:130] the default service ipfamily for this cluster is: IPv4
Jul 02 08:54:33 k3s-server-1 k3s[11184]: I0702 08:54:33.553767   11184 instance.go:283] Using reconciler: lease
Jul 02 08:54:33 k3s-server-1 k3s[11184]: I0702 08:54:33.517458   11184 shared_informer.go:240] Waiting for caches to sync for node_authorizer
Jul 02 08:54:33 k3s-server-1 k3s[11184]: I0702 08:54:33.517125   11184 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Jul 02 08:54:33 k3s-server-1 k3s[11184]: I0702 08:54:33.517091   11184 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Jul 02 08:54:33 k3s-server-1 k3s[11184]: I0702 08:54:33.515835   11184 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Jul 02 08:54:33 k3s-server-1 k3s[11184]: I0702 08:54:33.515796   11184 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Jul 02 08:54:32 k3s-server-1 k3s[11184]: I0702 08:54:32.961901   11184 server.go:195] Version: v1.21.2+k3s1
Jul 02 08:54:32 k3s-server-1 k3s[11184]: I0702 08:54:32.961492   11184 server.go:656] external host was not specified, using 192.168.178.2
Jul 02 08:54:32 k3s-server-1 k3s[11184]: Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
Jul 02 08:54:32 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:32.960284715+02:00" level=info msg="Saving current etcd snapshot set to k3s-etcd-snapshots ConfigMap"
Jul 02 08:54:32 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:32.959827215+02:00" level=info msg="etcd data store connection OK"
Jul 02 08:54:32 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:32.948+0200","caller":"embed/serve.go:191","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
Jul 02 08:54:32 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:32.947+0200","caller":"embed/serve.go:191","msg":"serving client traffic securely","address":"10.23.1.2:2379"}
Jul 02 08:54:32 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:32.945+0200","caller":"etcdserver/server.go:2039","msg":"published local member to cluster through raft","local-member-id":"1ad553bddb402590","local-member-attributes":"{Name:k3s-server-1-03f2f2e6 ClientURLs:[https://10.23.1.2:2379]}","request-path":"/0/members/1ad553bddb402590/attributes","cluster-id":"58482255ea06577","publish-timeout":"15s"}
Jul 02 08:54:32 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:32.945+0200","caller":"etcdserver/server.go:2562","msg":"cluster version is updated","cluster-version":"3.4"}
Jul 02 08:54:32 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:32.945+0200","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}
Jul 02 08:54:32 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:32.945+0200","caller":"membership/cluster.go:558","msg":"set initial cluster version","cluster-id":"58482255ea06577","local-member-id":"1ad553bddb402590","cluster-version":"3.4"}
Jul 02 08:54:32 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:32.943+0200","caller":"etcdserver/server.go:2530","msg":"setting up initial cluster version","cluster-version":"3.4"}
Jul 02 08:54:32 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:32.943+0200","caller":"raft/node.go:325","msg":"raft.node: 1ad553bddb402590 elected leader 1ad553bddb402590 at term 2"}
Jul 02 08:54:32 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:32.943+0200","caller":"raft/raft.go:765","msg":"1ad553bddb402590 became leader at term 2"}
Jul 02 08:54:32 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:32.943+0200","caller":"raft/raft.go:824","msg":"1ad553bddb402590 received MsgVoteResp from 1ad553bddb402590 at term 2"}
Jul 02 08:54:32 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:32.943+0200","caller":"raft/raft.go:713","msg":"1ad553bddb402590 became candidate at term 2"}
Jul 02 08:54:32 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:32.942+0200","caller":"raft/raft.go:923","msg":"1ad553bddb402590 is starting a new election at term 1"}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:28.953244350+02:00" level=info msg="Running kube-apiserver --advertise-address=192.168.178.2 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.953+0200","caller":"embed/etcd.go:781","msg":"serving metrics","address":"http://127.0.0.1:2381"}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.952+0200","caller":"embed/etcd.go:244","msg":"now serving peer/client/metrics","local-member-id":"1ad553bddb402590","initial-advertise-peer-urls":["https://10.23.1.2:2380"],"listen-peer-urls":["https://10.23.1.2:2380"],"advertise-client-urls":["https://10.23.1.2:2379"],"listen-client-urls":["https://10.23.1.2:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.952+0200","caller":"embed/etcd.go:579","msg":"serving peer traffic","address":"10.23.1.2:2380"}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.952+0200","caller":"embed/etcd.go:711","msg":"starting with client TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/server-client.key, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.951+0200","caller":"membership/cluster.go:392","msg":"added member","cluster-id":"58482255ea06577","local-member-id":"1ad553bddb402590","added-peer-id":"1ad553bddb402590","added-peer-peer-urls":["https://10.23.1.2:2380"]}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.951+0200","caller":"raft/raft.go:1530","msg":"1ad553bddb402590 switched to configuration voters=(1933543689917834640)"}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.950+0200","caller":"etcdserver/server.go:669","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"1ad553bddb402590","forward-ticks":9,"forward-duration":"4.5s","election-ticks":10,"election-timeout":"5s"}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.949+0200","caller":"etcdserver/server.go:803","msg":"starting etcd server","local-member-id":"1ad553bddb402590","local-server-version":"3.4.13","cluster-version":"to_be_decided"}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.948+0200","caller":"etcdserver/quota.go:98","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:54:28.946+0200","caller":"auth/store.go:1366","msg":"simple token is not cryptographically signed"}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.942+0200","caller":"raft/raft.go:1530","msg":"1ad553bddb402590 switched to configuration voters=(1933543689917834640)"}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.942+0200","caller":"raft/raft.go:700","msg":"1ad553bddb402590 became follower at term 1"}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.942+0200","caller":"raft/raft.go:383","msg":"newRaft 1ad553bddb402590 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.942+0200","caller":"raft/raft.go:700","msg":"1ad553bddb402590 became follower at term 0"}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.941+0200","caller":"raft/raft.go:1530","msg":"1ad553bddb402590 switched to configuration voters=()"}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.941+0200","caller":"etcdserver/raft.go:486","msg":"starting local member","local-member-id":"1ad553bddb402590","cluster-id":"58482255ea06577"}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.935+0200","caller":"etcdserver/backend.go:80","msg":"opened backend db","path":"/var/lib/rancher/k3s/server/db/etcd/member/snap/db","took":"6.424706ms"}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:28.934486600+02:00" level=info msg="Active TLS secret  (ver=) (count 11): map[listener.cattle.io/cn-10.23.1.2:10.23.1.2 listener.cattle.io/cn-10.23.1.7:10.23.1.7 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.178.4:192.168.178.4 listener.cattle.io/cn-192.168.178.2:192.168.178.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=B72DFB953725D569AA882A37C9EA3B71B92B8463]"
Jul 02 08:54:28 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:28.925441870+02:00" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:54:28 +0000 UTC"
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.927+0200","caller":"embed/etcd.go:302","msg":"starting an etcd server","etcd-version":"3.4.13","git-sha":"Not provided (use ./build instead of go build)","go-version":"go1.16.4","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"k3s-server-1-03f2f2e6","data-dir":"/var/lib/rancher/k3s/server/db/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/rancher/k3s/server/db/etcd/member","force-new-cluster":false,"heartbeat-interval":"500ms","election-timeout":"5s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://10.23.1.2:2380"],"listen-peer-urls":["https://10.23.1.2:2380"],"advertise-client-urls":["https://10.23.1.2:2379"],"listen-client-urls":["https://10.23.1.2:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"k3s-server-1-03f2f2e6=https://10.23.1.2:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":false,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":""}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.926+0200","caller":"embed/etcd.go:127","msg":"configuring client listeners","listen-client-urls":["https://10.23.1.2:2379","https://127.0.0.1:2379"]}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.925+0200","caller":"embed/etcd.go:468","msg":"starting with peer TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"info","ts":"2021-07-02T08:54:28.925+0200","caller":"embed/etcd.go:117","msg":"configuring peer listeners","listen-peer-urls":["https://10.23.1.2:2380"]}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: {"level":"warn","ts":"2021-07-02T08:54:28.924+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:54:28 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:28.686639869+02:00" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:54:28 +0000 UTC"
Jul 02 08:54:28 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:28.685602114+02:00" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:54:28 +0000 UTC"
Jul 02 08:54:28 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:28.685095725+02:00" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:54:28 +0000 UTC"
Jul 02 08:54:28 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:28.684198890+02:00" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:54:28 +0000 UTC"
Jul 02 08:54:28 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:28.683286639+02:00" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:54:28 +0000 UTC"
Jul 02 08:54:28 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:28.682277367+02:00" level=info msg="certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:54:28 +0000 UTC"
Jul 02 08:54:28 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:28.681727551+02:00" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:54:28 +0000 UTC"
Jul 02 08:54:28 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:28.681232030+02:00" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:54:28 +0000 UTC"
Jul 02 08:54:28 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:28.680691647+02:00" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:54:28 +0000 UTC"
Jul 02 08:54:28 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:28.680128687+02:00" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:54:28 +0000 UTC"
Jul 02 08:54:28 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:28.679534275+02:00" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:54:28 +0000 UTC"
Jul 02 08:54:28 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:28.678750061+02:00" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:54:28 +0000 UTC"
Jul 02 08:54:28 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:28.666707980+02:00" level=info msg="Managed etcd cluster initializing"
Jul 02 08:54:28 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:28.651529128+02:00" level=info msg="Starting k3s v1.21.2+k3s1 (5a67e8dc)"
Jul 02 08:54:25 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:25+02:00" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/57d64d4b123cea8e276484f00ab3dfa7178a00a35368aa6b43df3e3bd8ce032d"
Jul 02 08:54:25 k3s-server-1 k3s[11184]: time="2021-07-02T08:54:25+02:00" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock"
Jul 02 08:54:25 k3s-server-1 sh[11181]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Jul 02 08:54:25 k3s-server-1 sh[11180]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Jul 02 08:54:25 k3s-server-1 systemd[1]: Starting Lightweight Kubernetes...
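
In the startup sequence above, etcd on k3s-server-1 advertises the private address for both peer and client traffic (initial-advertise-peer-urls https://10.23.1.2:2380, advertise-client-urls https://10.23.1.2:2379), while kube-apiserver falls back to the public address ("external host was not specified, using 192.168.178.2") and the printed join command uses that public address as well. To verify what the running etcd member actually advertises, the member list can be queried directly. A minimal sketch, assuming etcdctl v3 is installed separately (k3s does not bundle it); the certificate paths are the same ones k3s passes to kube-apiserver in the logs above:

$ ETCDCTL_API=3 etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt \
    --cert=/var/lib/rancher/k3s/server/tls/etcd/client.crt \
    --key=/var/lib/rancher/k3s/server/tls/etcd/client.key \
    member list -w table

The PEER ADDRS and CLIENT ADDRS columns should match the URLs logged above; if they show a public IP instead, servers joining over the private network will not be able to reach the existing member.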

k3s-server-2

-- Logs begin at Thu 2021-07-01 12:28:54 CEST, end at Fri 2021-07-02 08:59:22 CEST. --
Jul 02 08:59:22 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:22.315+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:59:22 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:22.144+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:59:20 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:20.127+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:59:19 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:19.864+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:59:18 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:18.457+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:59:18 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:18.349+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:59:17 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:17.456+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:59:17 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:17.349+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:59:12 k3s-server-2 k3s[11116]: time="2021-07-02T08:59:12.454124315+02:00" level=info msg="Failed to test data store connection: context deadline exceeded"
Jul 02 08:59:12 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:12.453+0200","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"passthrough:///https://127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
Jul 02 08:59:12 k3s-server-2 k3s[11116]: time="2021-07-02T08:59:12.352380657+02:00" level=error msg="Failed to check local etcd status for learner management: context deadline exceeded"
Jul 02 08:59:12 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:12.351+0200","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"passthrough:///https://127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
Jul 02 08:59:11 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:11.997+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:59:10 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:10.821+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:59:07 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:07.771+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:59:07 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:07.314+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:59:04 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:04.935+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:59:04 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:04.838+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:59:03 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:03.455+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:59:03 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:03.352+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:59:02 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:02.453+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:59:02 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:59:02.349+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:58:57 k3s-server-2 k3s[11116]: time="2021-07-02T08:58:57.452096714+02:00" level=info msg="Failed to test data store connection: context deadline exceeded"
Jul 02 08:58:57 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:58:57.451+0200","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"passthrough:///https://127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
Jul 02 08:58:57 k3s-server-2 k3s[11116]: time="2021-07-02T08:58:57.349616873+02:00" level=error msg="Failed to check local etcd status for learner management: context deadline exceeded"
Jul 02 08:58:57 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:58:57.348+0200","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"passthrough:///https://127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
Jul 02 08:58:57 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:58:57.214+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:58:57 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:58:57.163+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:58:52 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:58:52.695+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:58:52 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:58:52.453+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:58:50 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:58:50.233+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:58:49 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:58:49.828+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:58:48 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:58:48.452+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:58:48 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:58:48.352+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:58:47 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:58:47.451+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:58:47 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:58:47.351+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:58:42 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:58:42.847+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:58:42 k3s-server-2 k3s[11116]: time="2021-07-02T08:58:42.449823981+02:00" level=info msg="Failed to test data store connection: context deadline exceeded"
Jul 02 08:58:42 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:58:42.449+0200","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"passthrough:///https://127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
Jul 02 08:58:42 k3s-server-2 k3s[11116]: time="2021-07-02T08:58:42.348895379+02:00" level=error msg="Failed to check local etcd status for learner management: context deadline exceeded"
Jul 02 08:58:42 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:58:42.348+0200","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"passthrough:///https://127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
[... the same "Failed to test data store connection" / "Failed to check local etcd status for learner management" retry cycle and gRPC "connection refused" reconnect warnings repeat every few seconds from 08:56:48 through 08:58:41; duplicate lines omitted ...]
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.435182635+02:00" level=info msg="Running kube-apiserver --advertise-address=192.168.178.3 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Jul 02 08:56:47 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:56:47.434+0200","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379   0 }. Err :connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\". Reconnecting..."}
Jul 02 08:56:47 k3s-server-2 k3s[11116]: {"level":"info","ts":"2021-07-02T08:56:47.433+0200","caller":"embed/etcd.go:367","msg":"closed etcd server","name":"k3s-server-2-49d51217","data-dir":"/var/lib/rancher/k3s/server/db/etcd","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["https://10.23.1.3:2379"]}
Jul 02 08:56:47 k3s-server-2 k3s[11116]: {"level":"info","ts":"2021-07-02T08:56:47.433+0200","caller":"embed/etcd.go:363","msg":"closing etcd server","name":"k3s-server-2-49d51217","data-dir":"/var/lib/rancher/k3s/server/db/etcd","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["https://10.23.1.3:2379"]}
Jul 02 08:56:47 k3s-server-2 k3s[11116]: {"level":"warn","ts":"2021-07-02T08:56:47.432+0200","caller":"etcdserver/cluster_util.go:76","msg":"failed to get cluster response","address":"https://10.23.1.2:2380/members","error":"Get \"https://10.23.1.2:2380/members\": EOF"}
Jul 02 08:56:47 k3s-server-2 k3s[11116]: {"level":"info","ts":"2021-07-02T08:56:47.411+0200","caller":"etcdserver/backend.go:80","msg":"opened backend db","path":"/var/lib/rancher/k3s/server/db/etcd/member/snap/db","took":"13.32849ms"}
Jul 02 08:56:47 k3s-server-2 k3s[11116]: {"level":"info","ts":"2021-07-02T08:56:47.397+0200","caller":"embed/etcd.go:302","msg":"starting an etcd server","etcd-version":"3.4.13","git-sha":"Not provided (use ./build instead of go build)","go-version":"go1.16.4","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"k3s-server-2-49d51217","data-dir":"/var/lib/rancher/k3s/server/db/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/rancher/k3s/server/db/etcd/member","force-new-cluster":false,"heartbeat-interval":"500ms","election-timeout":"5s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["https://10.23.1.3:2380"],"advertise-client-urls":["https://10.23.1.3:2379"],"listen-client-urls":["https://10.23.1.3:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"k3s-server-1-03f2f2e6=https://10.23.1.2:2380,k3s-server-2-49d51217=https://10.23.1.3:2380","initial-cluster-state":"existing","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":false,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":""}
Jul 02 08:56:47 k3s-server-2 k3s[11116]: {"level":"info","ts":"2021-07-02T08:56:47.396+0200","caller":"embed/etcd.go:127","msg":"configuring client listeners","listen-client-urls":["https://10.23.1.3:2379","https://127.0.0.1:2379"]}
Jul 02 08:56:47 k3s-server-2 k3s[11116]: {"level":"info","ts":"2021-07-02T08:56:47.395+0200","caller":"embed/etcd.go:468","msg":"starting with peer TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
Jul 02 08:56:47 k3s-server-2 k3s[11116]: {"level":"info","ts":"2021-07-02T08:56:47.395+0200","caller":"embed/etcd.go:117","msg":"configuring peer listeners","listen-peer-urls":["https://10.23.1.3:2380"]}
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.391527373+02:00" level=info msg="Starting etcd for cluster [k3s-server-1-03f2f2e6=https://10.23.1.2:2380 k3s-server-2-49d51217=https://10.23.1.3:2380]"
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.384409843+02:00" level=info msg="Adding https://10.23.1.3:2380 to etcd cluster [k3s-server-1-03f2f2e6=https://10.23.1.2:2380]"
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.349494223+02:00" level=info msg="Active TLS secret  (ver=) (count 8): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.178.3:192.168.178.3 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=F74AC8A1C87C5BCFD3CA53DC4D60086A44474F78]"
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.348920936+02:00" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:56:47 +0000 UTC"
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.345328540+02:00" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:56:47 +0000 UTC"
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.344231699+02:00" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:56:47 +0000 UTC"
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.343111708+02:00" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:56:47 +0000 UTC"
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.341986210+02:00" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:56:47 +0000 UTC"
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.340628117+02:00" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:56:47 +0000 UTC"
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.339080557+02:00" level=info msg="certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:56:47 +0000 UTC"
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.337480350+02:00" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:56:47 +0000 UTC"
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.336017890+02:00" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:56:47 +0000 UTC"
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.334518526+02:00" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:56:47 +0000 UTC"
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.332880259+02:00" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:56:47 +0000 UTC"
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.331363639+02:00" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:56:47 +0000 UTC"
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.329599366+02:00" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1625208868: notBefore=2021-07-02 06:54:28 +0000 UTC notAfter=2022-07-02 06:56:47 +0000 UTC"
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.311169886+02:00" level=info msg="Managed etcd cluster not yet initialized"
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.311010226+02:00" level=warning msg="Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation."
Jul 02 08:56:47 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:47.081377624+02:00" level=info msg="Starting k3s v1.21.2+k3s1 (5a67e8dc)"
Jul 02 08:56:43 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:43+02:00" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/57d64d4b123cea8e276484f00ab3dfa7178a00a35368aa6b43df3e3bd8ce032d"
Jul 02 08:56:43 k3s-server-2 k3s[11116]: time="2021-07-02T08:56:43+02:00" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock"
Jul 02 08:56:43 k3s-server-2 sh[11113]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Jul 02 08:56:43 k3s-server-2 sh[11112]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Jul 02 08:56:43 k3s-server-2 systemd[1]: Starting Lightweight Kubernetes...
  

k3s-server-1.txt k3s-server-2.txt

mway-niels commented 3 years ago

In an attempt to isolate the issue, I created two new instances. Both are in the same network, and no firewall restrictions are applied to either server. The setup is as follows:

Server   Public IP       Private IP
temp-1   192.168.178.2   10.0.0.2
temp-2   192.168.178.3   10.0.0.3

I ran two separate tests, rebuilding the nodes in between to make sure no old configuration was accidentally reused.

  1. Using the private network
     Install K3s on temp-1:
     curl -sfL https://get.k3s.io | sh -s - server --tls-san 10.0.0.2 --node-ip 10.0.0.2 --node-external-ip 192.168.178.2 --cluster-init --no-deploy traefik
     Install K3s on temp-2:
     curl -sfL https://get.k3s.io | sh -s - server --server https://10.0.0.2:6443 --node-ip 10.0.0.3 --node-external-ip 192.168.178.3

  2. "Default" installation using public IPs
     Install K3s on temp-1:
     curl -sfL https://get.k3s.io | sh -s - server --cluster-init --no-deploy traefik
     Install K3s on temp-2:
     curl -sfL https://get.k3s.io | sh -s - server --server https://192.168.178.2:6443

Method 2 worked as expected; method 1 threw the same errors as before (all log files for the respective configurations are attached below).

$ k3s kubectl get nodes
NAME     STATUS   ROLES                       AGE     VERSION
temp-1   Ready    control-plane,etcd,master   5m16s   v1.21.2+k3s1
temp-2   Ready    control-plane,etcd,master   116s    v1.21.2+k3s1

(when using method 2)

Interestingly enough, the error Cluster-Http-Server 2021/07/02 12:16:49 http: TLS handshake error from XXX.XXX.XXX.XXX:XXXXX: remote error: tls: bad certificate appeared when joining the second server with both methods, which leads me to believe the issue is not caused by an invalid firewall configuration.
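
To see which names and addresses the first server's certificate actually presents, one option is to read it off the wire with openssl (a sketch, assuming openssl is installed; the IP and supervisor port come from the temp-1 example above):

echo | openssl s_client -connect 10.0.0.2:6443 2>/dev/null \
    | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'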

Logs: Method 1: temp-1 (private) temp-2 (private)

Method 2: temp-1 (public) temp-2 (public)

mway-niels commented 3 years ago

Looking at this log segment specifically:

Jul 02 12:00:27 temp-1 k3s[922]: {"level":"warn","ts":"2021-07-02T12:00:27.712+0200","caller":"embed/config_logging.go:270","msg":"rejected connection","remote-addr":"10.0.0.3:34756","server-name":"","ip-addresses":["127.0.0.1","192.168.178.3","192.168.178.3","10.43.0.1"],"dns-names":["localhost"],"error":"tls: \"10.0.0.3\" does not match any of DNSNames [\"localhost\"] (lookup 10.0.0.3: Name does not resolve)"}
Jul 02 12:00:27 temp-1 k3s[922]: {"level":"info","ts":"2021-07-02T12:00:27.679+0200","caller":"rafthttp/stream.go:406","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fde9dd315b6d0b2","remote-peer-id":"10c2a3e8acfc74c4"}
Jul 02 12:00:27 temp-1 k3s[922]: {"level":"info","ts":"2021-07-02T12:00:27.680+0200","caller":"rafthttp/stream.go:406","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fde9dd315b6d0b2","remote-peer-id":"10c2a3e8acfc74c4"}
Jul 02 12:00:27 temp-1 k3s[922]: {"level":"info","ts":"2021-07-02T12:00:27.680+0200","caller":"etcdserver/server.go:1967","msg":"applied a configuration change through raft","local-member-id":"fde9dd315b6d0b2","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"10c2a3e8acfc74c4"}
Jul 02 12:00:27 temp-1 k3s[922]: {"level":"info","ts":"2021-07-02T12:00:27.679+0200","caller":"rafthttp/transport.go:327","msg":"added remote peer","local-member-id":"fde9dd315b6d0b2","remote-peer-id":"10c2a3e8acfc74c4","remote-peer-urls":["https://10.0.0.3:2380"]}
Jul 02 12:00:27 temp-1 k3s[922]: {"level":"info","ts":"2021-07-02T12:00:27.679+0200","caller":"rafthttp/peer.go:134","msg":"started remote peer","remote-peer-id":"10c2a3e8acfc74c4"}
Jul 02 12:00:27 temp-1 k3s[922]: {"level":"info","ts":"2021-07-02T12:00:27.677+0200","caller":"rafthttp/stream.go:166","msg":"started stream writer with remote peer","local-member-id":"fde9dd315b6d0b2","remote-peer-id":"10c2a3e8acfc74c4"}
Jul 02 12:00:27 temp-1 k3s[922]: {"level":"info","ts":"2021-07-02T12:00:27.676+0200","caller":"rafthttp/stream.go:166","msg":"started stream writer with remote peer","local-member-id":"fde9dd315b6d0b2","remote-peer-id":"10c2a3e8acfc74c4"}
Jul 02 12:00:27 temp-1 k3s[922]: {"level":"info","ts":"2021-07-02T12:00:27.673+0200","caller":"rafthttp/pipeline.go:71","msg":"started HTTP pipelining with remote peer","local-member-id":"fde9dd315b6d0b2","remote-peer-id":"10c2a3e8acfc74c4"}
Jul 02 12:00:27 temp-1 k3s[922]: {"level":"info","ts":"2021-07-02T12:00:27.673+0200","caller":"rafthttp/peer.go:128","msg":"starting remote peer","remote-peer-id":"10c2a3e8acfc74c4"}
Jul 02 12:00:27 temp-1 k3s[922]: {"level":"info","ts":"2021-07-02T12:00:27.673+0200","caller":"membership/cluster.go:392","msg":"added member","cluster-id":"d159fca3fb051972","local-member-id":"fde9dd315b6d0b2","added-peer-id":"10c2a3e8acfc74c4","added-peer-peer-urls":["https://10.0.0.3:2380"]}
Jul 02 12:00:27 temp-1 k3s[922]: {"level":"info","ts":"2021-07-02T12:00:27.672+0200","caller":"raft/raft.go:1530","msg":"fde9dd315b6d0b2 switched to configuration voters=(1143524885326647474) learners=(1207707869818680516)"}
Jul 02 12:00:27 temp-1 k3s[922]: time="2021-07-02T12:00:27.611824350+02:00" level=info msg="Cluster-Http-Server 2021/07/02 12:00:27 http: TLS handshake error from 10.0.0.3:51216: remote error: tls: bad certificate"

It looks like this error is the root cause: tls: \"10.0.0.3\" does not match any of DNSNames [\"localhost\"] (lookup 10.0.0.3: Name does not resolve).
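
The SANs baked into the etcd peer certificate can also be checked directly on disk; a minimal sketch, using the peer certificate path that appears in the "starting with peer TLS" log line above (adjust if your data dir differs):

openssl x509 -noout -text \
    -in /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt \
    | grep -A1 'Subject Alternative Name'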

brandond commented 3 years ago

Yeah, you are expected to use hostnames, not addresses. I am honestly not sure if we have tested or support an environment set up solely with raw IPv4 addresses and no functioning name resolution.
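
If no DNS server is available, one low-tech workaround (a sketch, not something verified in this thread) is to give every node a resolvable name through static /etc/hosts entries on each machine, using the private IPs from the temp-1/temp-2 example:

10.0.0.2  temp-1
10.0.0.3  temp-2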

mway-niels commented 3 years ago

I will close this issue since the original problem has been resolved and try to research the correct setup for hostname resolution.

jawabuu commented 3 years ago

@mway-niels Did you resolve this?

mway-niels commented 3 years ago

The hostname resolution issue? Unfortunately not. I've resorted to using kubeadm for bootstrapping a cluster instead.

jawabuu commented 3 years ago

@mway-niels are you on k3s slack?

jawabuu commented 3 years ago

@mway-niels Your issue will be solved if you add all of the nodes' IPs to the --tls-san flag.
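
For the temp-1/temp-2 setup above, that would look roughly like the following on the first server (a sketch only; the IPs come from the earlier table):

curl -sfL https://get.k3s.io | sh -s - server \
    --cluster-init \
    --tls-san 10.0.0.2 --tls-san 192.168.178.2 \
    --tls-san 10.0.0.3 --tls-san 192.168.178.3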

jawabuu commented 3 years ago

Ultimately this boils down to understanding, and having control over, which interface etcd uses to communicate across nodes. As it stands, most users expect node-ip or node-external-ip to be used depending on which flag they set; however, the interface actually used may be another one altogether.
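
One way to check which addresses the embedded etcd is actually listening on, on any given node, is to look at the sockets bound to etcd's default client and peer ports (a sketch; ss is part of iproute2, and root is needed to see the owning process):

sudo ss -tlnp | grep -E ':(2379|2380)'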

jawabuu commented 3 years ago

@mway-niels Follow here https://rancher-users.slack.com/archives/C3ASABBD1/p1628417813308600

mway-niels commented 2 years ago

I was able to resolve the issue by using the node-ip, node-external-ip, and advertise-address flags together. Adding advertise-address seems to be required if you want to communicate over the private network, since the default appears to be node-external-ip when both node-ip and node-external-ip are set. I'm using the following script to initialize all nodes:

#!/bin/bash
export K3S_TOKEN="XXXXX";

TYPE="initial-server"; # OPTIONS: initial-server,additional-server,worker
PRIVATE_INET_INTERFACE="XXX";
PUBLIC_INET_INTERFACE="XXX";
CP_LB_PRIVATE_IP="XXX.XXX.XXX.XXX";
CP_LB_PUBLIC_IP="XXX.XXX.XXX.XXX";

NODE_PRIVATE_IP=$(ifconfig ${PRIVATE_INET_INTERFACE} | grep 'inet ' | awk '{print $2}');
NODE_PUBLIC_IP=$(ifconfig ${PUBLIC_INET_INTERFACE} | grep 'inet ' | awk '{print $2}');

# SANs for the serving certificate: the control-plane load balancer plus this node's own addresses.
TLS_PARAMS="--tls-san ${CP_LB_PRIVATE_IP} --tls-san ${CP_LB_PUBLIC_IP} --tls-san ${NODE_PRIVATE_IP} --tls-san ${NODE_PUBLIC_IP}";
NODE_IPS_PARAMS="--node-ip ${NODE_PRIVATE_IP} --node-external-ip ${NODE_PUBLIC_IP}";
# Advertise the private IP so control-plane traffic stays on the private network.
ADVERTISE_ADDRESS_PARAMS="--advertise-address ${NODE_PRIVATE_IP}";
# No quotes around the value: quotes embedded in a variable are passed literally to k3s when the
# variable is expanded unquoted.
KUBELET_PARAMS="--kubelet-arg=cloud-provider=external";
GLOBAL_PARAMS="${KUBELET_PARAMS} ${NODE_IPS_PARAMS}";
SERVER_PARAMS="${ADVERTISE_ADDRESS_PARAMS} --flannel-iface ${PRIVATE_INET_INTERFACE} --disable-cloud-controller --disable traefik,servicelb,local-storage";

echo 'Using the following parameters:';
echo "TYPE: $TYPE";
echo "PRIVATE_INET_INTERFACE: $PRIVATE_INET_INTERFACE";
echo "PUBLIC_INET_INTERFACE: $PUBLIC_INET_INTERFACE";
echo "CP_LB_PRIVATE_IP: $CP_LB_PRIVATE_IP";
echo "CP_LB_PUBLIC_IP: $CP_LB_PUBLIC_IP";
echo "NODE_PRIVATE_IP: $NODE_PRIVATE_IP";
echo "NODE_PUBLIC_IP: $NODE_PUBLIC_IP";
echo "TLS_PARAMS: $TLS_PARAMS";
echo "NODE_IPS_PARAMS: $NODE_IPS_PARAMS";
echo "ADVERTISE_ADDRESS_PARAMS: $ADVERTISE_ADDRESS_PARAMS";
echo "GLOBAL_PARAMS: $GLOBAL_PARAMS";
echo "SERVER_PARAMS: $SERVER_PARAMS";

if [ "${TYPE}" = 'initial-server' ]; then
    echo "Will initialize the cluster.";

    curl -sfL https://get.k3s.io | sh -s - server \
        --cluster-init \
        ${GLOBAL_PARAMS} \
        ${TLS_PARAMS} \
        ${SERVER_PARAMS};
elif [ "${TYPE}" = 'additional-server' ]; then
    echo "Will add an additional server.";

    curl -sfL https://get.k3s.io | sh -s - server \
        --server https://${CP_LB_PRIVATE_IP}:6443 \
        ${GLOBAL_PARAMS} \
        ${SERVER_PARAMS};
elif [ "${TYPE}" = 'worker' ]; then
    echo "Will add a worker.";

    curl -sfL https://get.k3s.io | sh -s - agent \
        --server https://${CP_LB_PRIVATE_IP}:6443 \
        ${GLOBAL_PARAMS};
fi