
"E1215 stream read failure" err="rpc error: code = Canceled [...]" info message logs every second from "/usr/local/bin/k0s controller" #1355

Closed viktormohl closed 2 years ago

viktormohl commented 2 years ago

Description

Hi, I have successfully installed K8s with an HA control plane using k0sctl and HAProxy on multiple VMs (Vagrant & libvirt). Unfortunately, I noticed that the following info messages are written to /var/log/syslog every second. They are logged by k0scontroller.service.

Dec 15 19:47:37 debian11 k0s[2933]: time="2021-12-15 19:47:37" level=info msg="E1215 19:47:37.483426    3100 server.go:390] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:47:37 debian11 k0s[2933]: time="2021-12-15 19:47:37" level=info msg="E1215 19:47:37.484865    3100 server.go:761] \"could not get frontend client\" err=\"can't find connID 24 in the frontends[84a73998-9498-4c54-bf54-a9f72a2abb28]\" serverID=\"2426dc303c1e58ac8773a2c7903b701511057835ccf714efd26aa44fd911442b\" agentID=\"84a73998-9498-4c54-bf54-a9f72a2abb28\" connectionID=24" component=konnectivity
Dec 15 19:47:38 debian11 k0s[2933]: time="2021-12-15 19:47:38" level=info msg="E1215 19:47:38.196781    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
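
For reference, the same lines can be followed directly from the unit's journal (a convenience command on my side, not something k0s requires):

# follow only the konnectivity lines of the k0scontroller unit
journalctl -u k0scontroller -f | grep konnectivity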

Questions:

Is this a misconfiguration of mine or a bug?
Is it critical?
If not critical, is it possible to redirect the log output to a separate log file, so that /var/log/syslog is not spammed full?
Is it possible to use a VIP (Virtual IP) instead of HAProxy? Is there a manual for this?

Setup

bootstrap.sh

#!/bin/bash

# Enable ssh password authentication
echo "Enable ssh password authentication"
sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
sed -i 's/.*PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
systemctl reload sshd

# Set Root password
echo "Set root password"
echo -e "admin\nadmin" | passwd root >/dev/null 2>&1

Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :
ENV['VAGRANT_NO_PARALLEL'] = 'yes'
Vagrant.configure(2) do |config|
  config.vm.provision "shell", path: "bootstrap.sh"
  NodeCount = 6
  (1..NodeCount).each do |i|
    config.vm.define "debian11-vm#{i}" do |node|
      node.vm.box               = "generic/debian11"
      node.vm.box_check_update  = false
      node.vm.box_version       = "3.6.0" # https://app.vagrantup.com/generic/boxes/debian11
      node.vm.hostname          = "debian11-vm#{i}.develop.local"
      # comment out for libvirt
      #node.vm.network "private_network", ip: "172.16.16.10#{i}"
      node.vm.provider :virtualbox do |v|
        v.name    = "debian11-vm#{i}"
        v.memory  = 4096
        v.cpus    = 1
      end
      node.vm.provider :libvirt do |v|
        v.nested  = true
        v.memory  = 4096
        v.cpus    = 1
      end
    end
  end
end

Provide VMs

# start VMs
vagrant up --provider libvirt

# find out IP addresses
virsh net-dhcp-leases vagrant-libvirt

 Expiry Time           MAC address         Protocol   IP address           Hostname       Client ID or DUID
----------------------------------------------------------------------------------------------------------------------------------------------------
 2021-12-15 21:17:19   52:54:00:14:72:f0   ipv4       192.168.121.157/24   debian11-vm2   ff:00:14:72:f0:00:01:00:01:29:3b:99:b1:52:54:00:12:34:56
 2021-12-15 21:17:48   52:54:00:6a:36:07   ipv4       192.168.121.90/24    debian11-vm3   ff:00:6a:36:07:00:01:00:01:29:3b:99:b1:52:54:00:12:34:56
 2021-12-15 21:18:44   52:54:00:77:5c:5c   ipv4       192.168.121.164/24   debian11-vm5   ff:00:77:5c:5c:00:01:00:01:29:3b:99:b1:52:54:00:12:34:56
 2021-12-15 21:18:15   52:54:00:a5:ae:00   ipv4       192.168.121.250/24   debian11-vm4   ff:00:a5:ae:00:00:01:00:01:29:3b:99:b1:52:54:00:12:34:56
 2021-12-15 21:16:52   52:54:00:de:51:de   ipv4       192.168.121.28/24    debian11-vm1   ff:00:de:51:de:00:01:00:01:29:3b:99:b1:52:54:00:12:34:56
 2021-12-15 21:19:13   52:54:00:fd:6e:69   ipv4       192.168.121.55/24    debian11-vm6   ff:00:fd:6e:69:00:01:00:01:29:3b:99:b1:52:54:00:12:34:56

Copy SSH-Keys

# connect to first controller node
ssh root@192.168.121.28 # password admin

# system-info
root@debian11-vm1:~# uname -a
Linux debian11-vm1 5.10.0-9-amd64 #1 SMP Debian 5.10.70-1 (2021-09-30) x86_64 GNU/Linux

root@debian11-vm1:~# hostnamectl
   Static hostname: debian11-vm1
         Icon name: computer-vm
           Chassis: vm
        Machine ID: b3a17fef12e0452caa00e5db08e30739
           Boot ID: 7a1eef3b2b034326813bf83629007c52
    Virtualization: kvm
  Operating System: Debian GNU/Linux 11 (bullseye)
            Kernel: Linux 5.10.0-9-amd64
      Architecture: x86-64

# generate ssh keys - with no passphrase
ssh-keygen -t rsa -b 2048 -f ~/.ssh/id-rsa-k0s

# copy to every node
# control plane
ssh-copy-id -i ~/.ssh/id-rsa-k0s root@192.168.121.28
ssh-copy-id -i ~/.ssh/id-rsa-k0s root@192.168.121.55
ssh-copy-id -i ~/.ssh/id-rsa-k0s root@192.168.121.90

# worker
ssh-copy-id -i ~/.ssh/id-rsa-k0s root@192.168.121.157
ssh-copy-id -i ~/.ssh/id-rsa-k0s root@192.168.121.164

# load balancer (haproxy) does not need an ssh key
# ssh-copy-id -i ~/.ssh/id-rsa-k0s root@192.168.121.250

# login test
ssh -i ~/.ssh/id-rsa-k0s root@192.168.121.28 # install k0sctl on it

Configure Loadbalancer

HAProxy Configuration - example-configuration-haproxy

# connect to load balancer instance
ssh -i ~/.ssh/id-rsa-k0s root@192.168.121.250

# install 
apt update && apt install -y haproxy

# configure
vi /etc/haproxy/haproxy.cfg
frontend kubeAPI
    bind :6443
    mode tcp
    default_backend kubeAPI_backend
frontend konnectivity
    bind :8132
    mode tcp
    default_backend konnectivity_backend
frontend controllerJoinAPI
    bind :9443
    mode tcp
    default_backend controllerJoinAPI_backend

backend kubeAPI_backend
    mode tcp
    server k0s-controller1 192.168.121.28:6443 check check-ssl verify none
    server k0s-controller2 192.168.121.55:6443 check check-ssl verify none
    server k0s-controller3 192.168.121.90:6443 check check-ssl verify none
backend konnectivity_backend
    mode tcp
    server k0s-controller1 192.168.121.28:8132 check check-ssl verify none
    server k0s-controller2 192.168.121.55:8132 check check-ssl verify none
    server k0s-controller3 192.168.121.90:8132 check check-ssl verify none
backend controllerJoinAPI_backend
    mode tcp
    server k0s-controller1 192.168.121.28:9443 check check-ssl verify none
    server k0s-controller2 192.168.121.55:9443 check check-ssl verify none
    server k0s-controller3 192.168.121.90:9443 check check-ssl verify none

listen stats
   bind *:9000
   mode http
   stats enable
   stats uri /

Restart HAProxy to apply the configuration changes.

# restart haproxy
systemctl restart haproxy

systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
     Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2021-12-14 06:28:30 UTC; 7s ago
       Docs: man:haproxy(1)
[...]

# verify configuration
apt install net-tools

netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:9000            0.0.0.0:*               LISTEN      2935/haproxy        
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      2935/haproxy        
tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN      455/memcached       
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1113/sshd: /usr/sbi 
tcp        0      0 0.0.0.0:25              0.0.0.0:*               LISTEN      1607/master         
tcp        0      0 0.0.0.0:9443            0.0.0.0:*               LISTEN      2935/haproxy        
tcp        0      0 0.0.0.0:8132            0.0.0.0:*               LISTEN      2935/haproxy        
tcp6       0      0 :::22                   :::*                    LISTEN      1113/sshd: /usr/sbi 
tcp6       0      0 :::25                   :::*                    LISTEN      1607/master 
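
Once the controllers have been deployed (sections below), a quick optional sanity check of the kubeAPI frontend (using the load balancer IP from above) is a plain HTTPS request; this is an extra check, not part of the k0s docs:

# expect either the kube-apiserver version JSON or a 401/403 reply -
# both prove that HAProxy forwards the TLS connection to a live backend
curl -k https://192.168.121.250:6443/version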

Install k0sctl tool

You can run k0sctl on any system that Go supports. Pre-compiled k0sctl binaries are available on the k0sctl releases page.

k0sctl is a single binary; instructions for downloading and installing it are available in the k0sctl GitHub repository.

# login into control node 1
ssh root@192.168.121.28 # password admin

# download
wget https://github.com/k0sproject/k0sctl/releases/download/v0.11.4/k0sctl-linux-x64 -O k0sctl

# make it executable
chmod +x k0sctl 

# install
mv k0sctl /usr/local/bin/

# check installation
which k0sctl
/usr/local/bin/k0sctl

k0sctl version
version: v0.11.4
commit: 3b2e58b

Cluster configuration

apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: 192.168.121.28
      user: root
      port: 22
      keyPath: /root/.ssh/id-rsa-k0s
    role: controller
  - ssh:
      address: 192.168.121.55
      user: root
      port: 22
      keyPath: /root/.ssh/id-rsa-k0s
    role: controller
  - ssh:
      address: 192.168.121.90
      user: root
      port: 22
      keyPath: /root/.ssh/id-rsa-k0s
    role: controller
  - ssh:
      address: 192.168.121.157
      user: root
      port: 22
      keyPath: /root/.ssh/id-rsa-k0s
    role: worker
  - ssh:
      address: 192.168.121.164
      user: root
      port: 22
      keyPath: /root/.ssh/id-rsa-k0s
    role: worker
  k0s:
    version: 1.22.4+k0s.2
    config:
        spec:
          api:
            externalAddress: 192.168.121.250 # ip address of load balancer
            sans:
            - 192.168.121.250

Deploy Cluster

k0sctl apply --config k0sctl.yaml

⠀⣿⣿⡇⠀⠀⢀⣴⣾⣿⠟⠁⢸⣿⣿⣿⣿⣿⣿⣿⡿⠛⠁⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀█████████ █████████ ███
⠀⣿⣿⡇⣠⣶⣿⡿⠋⠀⠀⠀⢸⣿⡇⠀⠀⠀⣠⠀⠀⢀⣠⡆⢸⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀███          ███    ███
⠀⣿⣿⣿⣿⣟⠋⠀⠀⠀⠀⠀⢸⣿⡇⠀⢰⣾⣿⠀⠀⣿⣿⡇⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀███          ███    ███
⠀⣿⣿⡏⠻⣿⣷⣤⡀⠀⠀⠀⠸⠛⠁⠀⠸⠋⠁⠀⠀⣿⣿⡇⠈⠉⠉⠉⠉⠉⠉⠉⠉⢹⣿⣿⠀███          ███    ███
⠀⣿⣿⡇⠀⠀⠙⢿⣿⣦⣀⠀⠀⠀⣠⣶⣶⣶⣶⣶⣶⣿⣿⡇⢰⣶⣶⣶⣶⣶⣶⣶⣶⣾⣿⣿⠀█████████    ███    ██████████

k0sctl v0.11.4 Copyright 2021, k0sctl authors.
Anonymized telemetry of usage will be sent to the authors.
By continuing to use k0sctl you agree to these terms:
https://k0sproject.io/licenses/eula
INFO ==> Running phase: Connect to hosts 
INFO [ssh] 192.168.121.28:22: connected           
INFO [ssh] 192.168.121.90:22: connected           
INFO [ssh] 192.168.121.164:22: connected          
INFO [ssh] 192.168.121.157:22: connected          
INFO [ssh] 192.168.121.55:22: connected           
INFO ==> Running phase: Detect host operating systems 
INFO [ssh] 192.168.121.28:22: is running Debian GNU/Linux 11 (bullseye) 
INFO [ssh] 192.168.121.55:22: is running Debian GNU/Linux 11 (bullseye) 
INFO [ssh] 192.168.121.90:22: is running Debian GNU/Linux 11 (bullseye) 
INFO [ssh] 192.168.121.157:22: is running Debian GNU/Linux 11 (bullseye) 
INFO [ssh] 192.168.121.164:22: is running Debian GNU/Linux 11 (bullseye) 
INFO ==> Running phase: Prepare hosts    
INFO [ssh] 192.168.121.164:22: installing packages (iptables) 
INFO [ssh] 192.168.121.157:22: installing packages (iptables) 
INFO ==> Running phase: Gather host facts 
INFO [ssh] 192.168.121.28:22: using debian11-vm1 as hostname 
INFO [ssh] 192.168.121.55:22: using debian11-vm6 as hostname 
INFO [ssh] 192.168.121.90:22: using debian11-vm3 as hostname 
INFO [ssh] 192.168.121.164:22: using debian11-vm5 as hostname 
INFO [ssh] 192.168.121.157:22: using debian11-vm2 as hostname 
INFO [ssh] 192.168.121.28:22: discovered eth0 as private interface 
INFO [ssh] 192.168.121.55:22: discovered eth0 as private interface 
INFO [ssh] 192.168.121.90:22: discovered eth0 as private interface 
INFO [ssh] 192.168.121.164:22: discovered eth0 as private interface 
INFO [ssh] 192.168.121.157:22: discovered eth0 as private interface 
INFO ==> Running phase: Validate hosts   
INFO ==> Running phase: Gather k0s facts 
INFO ==> Running phase: Validate facts   
INFO ==> Running phase: Download k0s on hosts 
INFO [ssh] 192.168.121.28:22: downloading k0s 1.22.4+k0s.2 
INFO [ssh] 192.168.121.55:22: downloading k0s 1.22.4+k0s.2 
INFO [ssh] 192.168.121.90:22: downloading k0s 1.22.4+k0s.2 
INFO [ssh] 192.168.121.157:22: downloading k0s 1.22.4+k0s.2 
INFO [ssh] 192.168.121.164:22: downloading k0s 1.22.4+k0s.2 
INFO ==> Running phase: Configure k0s    
INFO [ssh] 192.168.121.28:22: validating configuration 
INFO [ssh] 192.168.121.90:22: validating configuration 
INFO [ssh] 192.168.121.55:22: validating configuration 
INFO [ssh] 192.168.121.90:22: configuration was changed 
INFO [ssh] 192.168.121.28:22: configuration was changed 
INFO [ssh] 192.168.121.55:22: configuration was changed 
INFO ==> Running phase: Initialize the k0s cluster 
INFO [ssh] 192.168.121.28:22: installing k0s controller 
INFO [ssh] 192.168.121.28:22: waiting for the k0s service to start 
INFO [ssh] 192.168.121.28:22: waiting for kubernetes api to respond 
INFO ==> Running phase: Install controllers 
INFO [ssh] 192.168.121.28:22: generating token    
INFO [ssh] 192.168.121.55:22: writing join token  
INFO [ssh] 192.168.121.55:22: installing k0s controller 
INFO [ssh] 192.168.121.55:22: starting service    
INFO [ssh] 192.168.121.55:22: waiting for the k0s service to start 
INFO [ssh] 192.168.121.55:22: waiting for kubernetes api to respond 
INFO [ssh] 192.168.121.28:22: generating token    
INFO [ssh] 192.168.121.90:22: writing join token  
INFO [ssh] 192.168.121.90:22: installing k0s controller 
INFO [ssh] 192.168.121.90:22: starting service    
INFO [ssh] 192.168.121.90:22: waiting for the k0s service to start 
INFO [ssh] 192.168.121.90:22: waiting for kubernetes api to respond 
INFO ==> Running phase: Install workers  
INFO [ssh] 192.168.121.157:22: validating api connection to https://192.168.121.250:6443 
INFO [ssh] 192.168.121.164:22: validating api connection to https://192.168.121.250:6443 
INFO [ssh] 192.168.121.28:22: generating token    
INFO [ssh] 192.168.121.157:22: writing join token 
INFO [ssh] 192.168.121.164:22: writing join token 
INFO [ssh] 192.168.121.157:22: installing k0s worker 
INFO [ssh] 192.168.121.164:22: installing k0s worker 
INFO [ssh] 192.168.121.164:22: starting service   
INFO [ssh] 192.168.121.157:22: starting service   
INFO [ssh] 192.168.121.164:22: waiting for node to become ready 
INFO [ssh] 192.168.121.157:22: waiting for node to become ready 
INFO ==> Running phase: Disconnect from hosts 
INFO ==> Finished in 3m11s               
INFO k0s cluster version 1.22.4+k0s.2 is now installed 
INFO Tip: To access the cluster you can now fetch the admin kubeconfig using: 
INFO      k0sctl kubeconfig                                    

Get Kube config file

# get kube config
mkdir ~/.kube
k0sctl kubeconfig --config k0sctl.yaml > ~/.kube/config
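
If kubectl is installed on the machine where the kubeconfig was fetched (an assumption, it is not installed by the steps above), the load-balanced endpoint can be exercised directly, since the generated kubeconfig points at the externalAddress:

# talks to https://192.168.121.250:6443 through HAProxy
kubectl --kubeconfig ~/.kube/config get nodes -o wide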

Node, Pod and SVC

# kubectl is bundled with k0s on controller and worker nodes
k0s kubectl get node,pod,svc -A -o wide
NAME                STATUS   ROLES    AGE     VERSION       INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
node/debian11-vm2   Ready    <none>   9m52s   v1.22.4+k0s   192.168.121.157   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-9-amd64   containerd://1.5.8
node/debian11-vm5   Ready    <none>   9m51s   v1.22.4+k0s   192.168.121.164   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-9-amd64   containerd://1.5.8

NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE     IP                NODE           NOMINATED NODE   READINESS GATES
kube-system   pod/coredns-77b4ff5f78-8cvdd          1/1     Running   0          9m43s   10.244.0.2        debian11-vm2   <none>           <none>
kube-system   pod/coredns-77b4ff5f78-bstf5          1/1     Running   0          10m     10.244.0.4        debian11-vm2   <none>           <none>
kube-system   pod/konnectivity-agent-qjwx6          1/1     Running   0          9m11s   10.244.0.5        debian11-vm2   <none>           <none>
kube-system   pod/konnectivity-agent-whrv5          1/1     Running   0          9m11s   10.244.1.2        debian11-vm5   <none>           <none>
kube-system   pod/kube-proxy-nl98j                  1/1     Running   0          9m51s   192.168.121.164   debian11-vm5   <none>           <none>
kube-system   pod/kube-proxy-wm6wg                  1/1     Running   0          9m52s   192.168.121.157   debian11-vm2   <none>           <none>
kube-system   pod/kube-router-ss2s8                 1/1     Running   0          9m51s   192.168.121.164   debian11-vm5   <none>           <none>
kube-system   pod/kube-router-zk4jm                 1/1     Running   0          9m52s   192.168.121.157   debian11-vm2   <none>           <none>
kube-system   pod/metrics-server-5b898fd875-rfmfn   1/1     Running   0          9m51s   10.244.0.3        debian11-vm2   <none>           <none>

NAMESPACE     NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
default       service/kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP                  10m     <none>
kube-system   service/kube-dns         ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   10m     k8s-app=kube-dns
kube-system   service/metrics-server   ClusterIP   10.111.241.206   <none>        443/TCP                  9m51s   k8s-app=metrics-server

k0scontroller.service

# check k0s status
k0s status
Version: v1.22.4+k0s.2
Process ID: 2933
Role: controller
Workloads: false

systemctl status k0scontroller.service
● k0scontroller.service - k0s - Zero Friction Kubernetes
     Loaded: loaded (/etc/systemd/system/k0scontroller.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2021-12-15 19:38:57 UTC; 7min ago
       Docs: https://docs.k0sproject.io
   Main PID: 2933 (k0s)
      Tasks: 65
     Memory: 724.0M
        CPU: 1min 3.736s
     CGroup: /system.slice/k0scontroller.service
             ├─2933 /usr/local/bin/k0s controller --config=/etc/k0s/k0s.yaml
             ├─2954 /var/lib/k0s/bin/etcd --peer-trusted-ca-file=/var/lib/k0s/pki/etcd/ca.crt --peer-cert-file=/var/lib/k0s/pki/etcd/peer.crt --advertise-client-urls=https://127.0.0.1:2379 --enable-pprof=false --listen-peer-urls=https://192.168.121.28:>
             ├─2962 /var/lib/k0s/bin/kube-apiserver --requestheader-username-headers=X-Remote-User --profiling=false --requestheader-group-headers=X-Remote-Group --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM>
             ├─2987 /usr/local/bin/k0s api --config=/etc/k0s/k0s.yaml --data-dir=/var/lib/k0s
             ├─3023 /var/lib/k0s/bin/kube-scheduler --authorization-kubeconfig=/var/lib/k0s/pki/scheduler.conf --kubeconfig=/var/lib/k0s/pki/scheduler.conf --v=1 --bind-address=127.0.0.1 --leader-elect=true --profiling=false --authentication-kubeconfig>
             ├─3024 /var/lib/k0s/bin/kube-controller-manager --controllers=*,bootstrapsigner,tokencleaner --leader-elect=true --kubeconfig=/var/lib/k0s/pki/ccm.conf --cluster-signing-key-file=/var/lib/k0s/pki/ca.key --requestheader-client-ca-file=/var/>
             └─3100 /var/lib/k0s/bin/konnectivity-server --uds-name=/run/k0s/konnectivity-server/konnectivity-server.sock --server-count=3 --server-port=0 --stderrthreshold=1 --enable-profiling=false --agent-service-account=konnectivity-agent --authent>

Dec 15 19:46:02 debian11-vm1 k0s[2933]: time="2021-12-15 19:46:02" level=info msg="E1215 19:46:02.894202    3100 server.go:390] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:46:02 debian11-vm1 k0s[2933]: time="2021-12-15 19:46:02" level=info msg="E1215 19:46:02.894975    3100 server.go:761] \"could not get frontend client\" err=\"can't find connID 20 in the frontends[84a73998-9498-4c54-bf54-a9f72a2abb28]\" server>
Dec 15 19:46:03 debian11-vm1 k0s[2933]: time="2021-12-15 19:46:03" level=info msg="E1215 19:46:03.228136    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:46:04 debian11-vm1 k0s[2933]: time="2021-12-15 19:46:04" level=info msg="E1215 19:46:04.506097    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:46:06 debian11-vm1 k0s[2933]: time="2021-12-15 19:46:06" level=info msg="E1215 19:46:06.485477    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:46:07 debian11-vm1 k0s[2933]: time="2021-12-15 19:46:07" level=info msg="current cfg matches existing, not gonna do anything" component=coredns
Dec 15 19:46:07 debian11-vm1 k0s[2933]: time="2021-12-15 19:46:07" level=info msg="E1215 19:46:07.701654    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:46:09 debian11-vm1 k0s[2933]: time="2021-12-15 19:46:09" level=info msg="E1215 19:46:09.772420    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:46:10 debian11-vm1 k0s[2933]: time="2021-12-15 19:46:10" level=info msg="E1215 19:46:10.917620    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:46:13 debian11-vm1 k0s[2933]: time="2021-12-15 19:46:13" level=info msg="E1215 19:46:13.054185    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity

tail -f /var/log/syslog 
Dec 15 19:47:37 debian11 k0s[2933]: time="2021-12-15 19:47:37" level=info msg="current cfg matches existing, not gonna do anything" component=coredns
Dec 15 19:47:37 debian11 k0s[2933]: time="2021-12-15 19:47:37" level=info msg="E1215 19:47:37.483426    3100 server.go:390] \"Stream read from frontend failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:47:37 debian11 k0s[2933]: time="2021-12-15 19:47:37" level=info msg="E1215 19:47:37.484865    3100 server.go:761] \"could not get frontend client\" err=\"can't find connID 24 in the frontends[84a73998-9498-4c54-bf54-a9f72a2abb28]\" serverID=\"2426dc303c1e58ac8773a2c7903b701511057835ccf714efd26aa44fd911442b\" agentID=\"84a73998-9498-4c54-bf54-a9f72a2abb28\" connectionID=24" component=konnectivity
Dec 15 19:47:38 debian11 k0s[2933]: time="2021-12-15 19:47:38" level=info msg="E1215 19:47:38.196781    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:47:40 debian11 k0s[2933]: time="2021-12-15 19:47:40" level=info msg="E1215 19:47:40.335966    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:47:41 debian11 k0s[2933]: time="2021-12-15 19:47:41" level=info msg="E1215 19:47:41.448213    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:47:41 debian11 k0s[2933]: time="2021-12-15 19:47:41" level=info msg="I1215 19:47:41.580630    2962 cacher.go:799] cacher (*coordination.Lease): 3 objects queued in incoming channel." component=kube-apiserver
Dec 15 19:47:41 debian11 k0s[2933]: time="2021-12-15 19:47:41" level=info msg="I1215 19:47:41.581015    2962 cacher.go:799] cacher (*coordination.Lease): 4 objects queued in incoming channel." component=kube-apiserver
Dec 15 19:47:43 debian11 k0s[2933]: time="2021-12-15 19:47:43" level=info msg="E1215 19:47:43.554407    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:47:44 debian11 k0s[2933]: time="2021-12-15 19:47:44" level=info msg="E1215 19:47:44.683001    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:47:46 debian11 k0s[2933]: time="2021-12-15 19:47:46" level=info msg="E1215 19:47:46.796998    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:47:47 debian11 k0s[2933]: time="2021-12-15 19:47:47" level=info msg="current cfg matches existing, not gonna do anything" component=coredns
Dec 15 19:47:47 debian11 k0s[2933]: time="2021-12-15 19:47:47" level=info msg="E1215 19:47:47.860500    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:47:50 debian11 k0s[2933]: time="2021-12-15 19:47:50" level=info msg="E1215 19:47:50.005857    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:47:51 debian11 k0s[2933]: time="2021-12-15 19:47:51" level=info msg="E1215 19:47:51.126529    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:47:53 debian11 k0s[2933]: time="2021-12-15 19:47:53" level=info msg="E1215 19:47:53.191035    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:47:54 debian11 k0s[2933]: time="2021-12-15 19:47:54" level=info msg="E1215 19:47:54.302069    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity
Dec 15 19:47:56 debian11 k0s[2933]: time="2021-12-15 19:47:56" level=info msg="E1215 19:47:56.422853    3100 server.go:669] \"stream read failure\" err=\"rpc error: code = Canceled desc = context canceled\"" component=konnectivity

Uninstall K8s

# wipe all nodes - destroy k8s
k0sctl reset --config k0sctl.yaml

k0sctl reset --config k0sctl.yaml
k0sctl v0.11.4 Copyright 2021, k0sctl authors.
Anonymized telemetry of usage will be sent to the authors.
By continuing to use k0sctl you agree to these terms:
https://k0sproject.io/licenses/eula
? Going to reset all of the hosts, which will destroy all configuration and data, Are you sure? Yes
INFO ==> Running phase: Connect to hosts 
INFO [ssh] 192.168.121.55:22: connected           
INFO [ssh] 192.168.121.28:22: connected           
INFO [ssh] 192.168.121.90:22: connected           
INFO [ssh] 192.168.121.164:22: connected          
INFO [ssh] 192.168.121.157:22: connected          
INFO ==> Running phase: Detect host operating systems 
INFO [ssh] 192.168.121.28:22: is running Debian GNU/Linux 11 (bullseye) 
INFO [ssh] 192.168.121.55:22: is running Debian GNU/Linux 11 (bullseye) 
INFO [ssh] 192.168.121.90:22: is running Debian GNU/Linux 11 (bullseye) 
INFO [ssh] 192.168.121.157:22: is running Debian GNU/Linux 11 (bullseye) 
INFO [ssh] 192.168.121.164:22: is running Debian GNU/Linux 11 (bullseye) 
INFO ==> Running phase: Prepare hosts    
INFO ==> Running phase: Gather k0s facts 
INFO [ssh] 192.168.121.55:22: is running k0s controller version 1.22.4+k0s.2 
INFO [ssh] 192.168.121.28:22: is running k0s controller version 1.22.4+k0s.2 
INFO [ssh] 192.168.121.90:22: is running k0s controller version 1.22.4+k0s.2 
INFO [ssh] 192.168.121.164:22: is running k0s worker version 1.22.4+k0s.2 
INFO [ssh] 192.168.121.28:22: checking if worker  has joined 
INFO [ssh] 192.168.121.157:22: is running k0s worker version 1.22.4+k0s.2 
INFO [ssh] 192.168.121.28:22: checking if worker  has joined 
INFO ==> Running phase: Reset hosts      
INFO [ssh] 192.168.121.28:22: cleaning up service environment 
INFO [ssh] 192.168.121.55:22: cleaning up service environment 
INFO [ssh] 192.168.121.90:22: cleaning up service environment 
INFO [ssh] 192.168.121.157:22: cleaning up service environment 
INFO [ssh] 192.168.121.164:22: cleaning up service environment 
INFO [ssh] 192.168.121.157:22: stopping k0s       
INFO [ssh] 192.168.121.55:22: stopping k0s        
INFO [ssh] 192.168.121.90:22: stopping k0s        
INFO [ssh] 192.168.121.164:22: stopping k0s       
INFO [ssh] 192.168.121.28:22: stopping k0s        
INFO [ssh] 192.168.121.157:22: waiting for k0s to stop 
INFO [ssh] 192.168.121.164:22: waiting for k0s to stop 
INFO [ssh] 192.168.121.157:22: running k0s reset  
INFO [ssh] 192.168.121.164:22: running k0s reset  
INFO [ssh] 192.168.121.55:22: waiting for k0s to stop 
INFO [ssh] 192.168.121.55:22: running k0s reset   
INFO [ssh] 192.168.121.90:22: waiting for k0s to stop 
INFO [ssh] 192.168.121.90:22: running k0s reset   
INFO [ssh] 192.168.121.28:22: waiting for k0s to stop 
INFO [ssh] 192.168.121.28:22: running k0s reset   
INFO ==> Running phase: Disconnect from hosts 
INFO ==> Finished in 15s                 

# remove all VMs
vagrant destroy -f

Best regards

mikhail-sakhnov commented 2 years ago

@viktormohl hello! Thanks for the great description.

Do you see any issues with how the cluster works?

is this a misconfiguration of mine or a bug?

To my understanding, this is just noisy logging from one of the upstream components (konnectivity).

is it critical?

It should not be critical unless you are observing issues with the cluster. If so, we need to debug it further by checking your k0s.yaml and other configuration.

if not critical, is it possible to redirect the log output to a separate log file, so that the /var/log/syslog is not spammed full?

Do you use k0s install? If so, you can tune the systemd unit file accordingly.
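
For illustration, a minimal sketch of that approach (the drop-in file name and the log path are assumptions, not k0s defaults): a systemd drop-in can send the unit's stdout/stderr to a dedicated file instead of syslog.

# create a drop-in for the k0scontroller unit (systemctl edit k0scontroller opens one for you)
mkdir -p /etc/systemd/system/k0scontroller.service.d
cat > /etc/systemd/system/k0scontroller.service.d/logging.conf <<'EOF'
[Service]
# append: targets require systemd >= 240; Debian 11 ships 247
StandardOutput=append:/var/log/k0s-controller.log
StandardError=append:/var/log/k0s-controller.log
EOF

systemctl daemon-reload
systemctl restart k0scontroller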

is it possible to use VIP (Virtual IP) instead of HAProxy? Is there a manual for this?

Generally speaking, nothing prevents you from using any load balancer instead of HAProxy, but we can't cover all of them. We would appreciate contributions on this; if you want, we can sync up in the Lens Slack and I can help you.
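
As a rough sketch of the VIP idea (not an official k0s recipe; the VIP 192.168.121.200, the interface and the priorities are assumptions): keepalived on each controller can float a virtual IP, which would then be used as spec.api.externalAddress instead of the HAProxy address.

# /etc/keepalived/keepalived.conf on each controller
vrrp_instance k0s_api {
    state BACKUP              # all nodes start as BACKUP; the highest priority wins the election
    interface eth0
    virtual_router_id 51
    priority 100              # give each controller a different priority
    advert_int 1
    virtual_ipaddress {
        192.168.121.200/24    # hypothetical VIP
    }
}

Note that a bare VIP only provides failover to whichever controller currently holds the address; the HAProxy setup above additionally spreads connections across all three controllers.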

github-actions[bot] commented 2 years ago

The issue is marked as stale since no activity has been recorded in 30 days

nemmeviu commented 2 years ago

+1