Closed: hacker-h closed this issue 4 years ago.
This is probably related to https://github.com/f0cal/google-coral/issues/55 and https://github.com/f0cal/google-coral/issues/56. However, the error message of the k3s server should be clarified.
I'm having this same problem on 1.0.0 and 0.10.2. I had to backtrack to 0.9.1 to get past this. For anyone who hits this, the command is:
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v0.9.1 sh -
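For anyone who also needs to pin an agent to the same version, a minimal sketch of both invocations (the K3S_URL and K3S_TOKEN values below are placeholders for your own server address and node token):
# server, pinned to v0.9.1
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v0.9.1 sh -
# agent, pinned to the same version and pointed at an existing server
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v0.9.1 K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -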
I do not have a Google Coral; I am on a Jetson TX2 Nano. I was running a Nano image from August (I lost the version details) and thought I would try the latest after running into this bug, so I am on Nano JetPack v4.2.2 now:
Linux jetson 4.9.140-tegra #1 SMP PREEMPT Sat Oct 19 15:54:06 PDT 2019 aarch64 aarch64 aarch64 GNU/Linux
Distributor ID: Ubuntu
Description: Ubuntu 18.04.3 LTS
Release: 18.04
Codename: bionic
Also experiencing this error in a Nano environment; 0.9.1 is working fine.
Are there any hints as to where this problem was introduced, so we can adopt the GA version?
Of particular note is:
Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
and:
Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
Would you mind sharing the output of lsmod and df -h?
Sorry, those errors are probably normal. The output of lsmod might help though; it sounds like some kind of kernel issue.
I have the same issue, but with Unraid OS 6.8. We are probably missing some kernel modules; if you tell me what I'm missing, I will try to build them.
Module Size Used by
wireguard 208896 0
ip6_udp_tunnel 16384 1 wireguard
udp_tunnel 16384 1 wireguard
ccp 69632 0
xt_nat 16384 10
veth 20480 0
dm_mod 110592 1
dax 20480 1 dm_mod
xt_CHECKSUM 16384 1
ipt_REJECT 16384 7
ip6table_mangle 16384 1
ip6table_nat 16384 1
nf_nat_ipv6 16384 1 ip6table_nat
iptable_mangle 16384 1
ip6table_filter 16384 1
ip6_tables 24576 3 ip6table_filter,ip6table_nat,ip6table_mangle
vhost_net 24576 2
tun 36864 6 vhost_net
vhost 32768 1 vhost_net
tap 20480 1 vhost_net
ipt_MASQUERADE 16384 6
iptable_filter 16384 1
iptable_nat 16384 1
nf_nat_ipv4 16384 2 ipt_MASQUERADE,iptable_nat
nf_nat 24576 3 nf_nat_ipv6,nf_nat_ipv4,xt_nat
ip_tables 24576 3 iptable_filter,iptable_nat,iptable_mangle
xfs 663552 27
nfsd 90112 11
lockd 73728 1 nfsd
grace 16384 1 lockd
sunrpc 204800 15 nfsd,lockd
md_mod 49152 27
ipmi_devintf 16384 0
bonding 110592 0
igb 172032 0
i2c_algo_bit 16384 1 igb
sb_edac 24576 0
x86_pkg_temp_thermal 16384 0
intel_powerclamp 16384 0
coretemp 16384 0
kvm_intel 196608 52
kvm 380928 1 kvm_intel
crct10dif_pclmul 16384 0
crc32_pclmul 16384 0
crc32c_intel 24576 0
ghash_clmulni_intel 16384 0
pcbc 16384 0
aesni_intel 200704 0
aes_x86_64 20480 1 aesni_intel
crypto_simd 16384 1 aesni_intel
cryptd 20480 3 crypto_simd,ghash_clmulni_intel,aesni_intel
glue_helper 16384 1 aesni_intel
isci 98304 0
libsas 69632 1 isci
ipmi_ssif 24576 0
i2c_i801 24576 0
i2c_core 40960 4 i2c_algo_bit,igb,i2c_i801,ipmi_ssif
intel_cstate 16384 0
intel_uncore 102400 0
intel_rapl_perf 16384 0
ahci 40960 1
scsi_transport_sas 32768 2 isci,libsas
aacraid 110592 28
libahci 28672 1 ahci
wmi 20480 0
pcc_cpufreq 16384 0
ipmi_si 53248 0
button 16384 0
rootfs 24G 1.2G 23G 5% /
tmpfs 32M 608K 32M 2% /run
devtmpfs 24G 0 24G 0% /dev
tmpfs 24G 0 24G 0% /dev/shm
cgroup_root 8.0M 0 8.0M 0% /sys/fs/cgroup
tmpfs 128M 2.8M 126M 3% /var/log
/dev/sda1 15G 715M 14G 5% /boot
/dev/loop0 9.0M 9.0M 0 100% /lib/modules
/dev/loop1 5.9M 5.9M 0 100% /lib/firmware
tmpfs 1.0M 0 1.0M 0% /mnt/disks
xxxx here personal disks xxxxx
/dev/sdad1 699G 338G 360G 49% /mnt/cache
/dev/loop2 20G 6.1G 13G 33% /var/lib/docker
shm 64M 0 64M 0% /var/lib/docker/containers/25b88a33e9f487743dc4f2ac6e3cefbee362b16308a3d241f915354ada96e374/mounts/shm
/dev/loop3 1.0G 18M 904M 2% /etc/libvirt
tmpfs 24G 8.0K 24G 1% /var/lib/kubelet/pods/acb69046-9770-4c60-93d5-beb874cb8f5e/volumes/kubernetes.io~secret/cattle-credentials
tmpfs 24G 12K 24G 1% /var/lib/kubelet/pods/acb69046-9770-4c60-93d5-beb874cb8f5e/volumes/kubernetes.io~secret/cattle-token-q9xzj
@segator did you try running the Kubernetes preflight checks, e.g. with kubeadm, to get more information?
No, sorry, I hadn't tried it; here is the report. Anyway, the errors mentioned should not have any effect: Unraid OS doesn't have systemctl, as it is an ultralight OS that runs from a USB pen drive/RAM. The preflight also flags the Docker btrfs graph driver as an error, but this shouldn't be a problem; I just tried another (non-Unraid) node with btrfs and it works. I also want to mention that I tried k3d and hit exactly the same issue.
kubeadm init
W1214 21:57:35.321484 32465 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1214 21:57:35.321543 32465 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[WARNING Firewalld]: no supported init system detected, skipping checking for services
[WARNING Service-Docker]: no supported init system detected, skipping checking for services
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.19.88-Unraid
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled (as module)
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
CONFIG_OVERLAY_FS: enabled
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled (as module)
DOCKER_VERSION: 19.03.5
DOCKER_GRAPH_DRIVER: btrfs
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING Service-Kubelet]: no supported init system detected, skipping checking for services
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR SystemVerification]: unsupported graph driver: btrfs
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
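For anyone who only wants these diagnostics without performing a full kubeadm init, a hedged sketch (assuming a kubeadm version that supports running individual phases, as the v1.17 used above does) is to run just the preflight phase and treat the btrfs check as non-fatal:
# run only the preflight checks; SystemVerification is the check that flags the btrfs graph driver above
kubeadm init phase preflight --ignore-preflight-errors=SystemVerification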
I just noticed my error is a little bit different:
ipset v7.1: Kernel error received: Invalid argument
One thing: this does not happen on 0.9.1, so missing kernel modules does not seem like the right explanation at the moment.
I got this working by loading ip_set, xt_conntrack and some ip_vs modules. There are still some errors, but everything seems to be working on Unraid now :) It doesn't work on Docker, though; Docker seems to hit some failures going through /proc, but those are not related to this issue.
$ k3s -v
k3s version v1.17.0+k3s.1 (0f644650)
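For anyone else on a stripped-down kernel, a rough sketch of loading the modules mentioned above (module names are taken from this thread; which of them exist as loadable modules versus built-ins varies per kernel, so treat this as a starting point rather than a definitive list):
# netfilter/ipvs pieces that k3s, kube-proxy and the network policy controller expect
modprobe ip_set
modprobe xt_conntrack
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
# confirm they are loaded
lsmod | grep -E 'ip_set|xt_conntrack|ip_vs'
On a systemd distribution these would typically be persisted via /etc/modules-load.d/; on Unraid they would normally go into the flash drive's boot-time go script.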
I am running into this problem as well; however, I am using the k3s agent. The master runs fine (different hardware).
INFO[2020-01-07T05:31:05.082368999Z] Starting k3s agent v1.0.1 (e94a3c60)
INFO[2020-01-07T05:31:05.083576112Z] module overlay was already loaded
WARN[2020-01-07T05:31:05.113123581Z] failed to start nf_conntrack module
WARN[2020-01-07T05:31:05.142718687Z] failed to start br_netfilter module
WARN[2020-01-07T05:31:05.143606876Z] failed to write value 1 at /proc/sys/net/ipv6/conf/all/forwarding: open /proc/sys/net/ipv6/conf/all/forwarding: no such file or directory
WARN[2020-01-07T05:31:05.144115137Z] failed to write value 1 at /proc/sys/net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory
WARN[2020-01-07T05:31:05.144604616Z] failed to write value 1 at /proc/sys/net/bridge/bridge-nf-call-ip6tables: open /proc/sys/net/bridge/bridge-nf-call-ip6tables: no such file or directory
INFO[2020-01-07T05:31:05.146224571Z] Running load balancer 127.0.0.1:34309 -> [192.168.1.80:6443]
INFO[2020-01-07T05:31:06.682951837Z] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log
INFO[2020-01-07T05:31:06.683324510Z] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
INFO[2020-01-07T05:31:06.690110701Z] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused"
INFO[2020-01-07T05:31:07.693102497Z] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused"
INFO[2020-01-07T05:31:08.694949604Z] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused"
INFO[2020-01-07T05:31:09.932591988Z] Connecting to proxy url="wss://192.168.1.80:6443/v1-k3s/connect"
INFO[2020-01-07T05:31:10.066665951Z] Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/1bb4c288d71e7cc82ab5b8bf4b45cdceeef71d00f5bf24fefe52d1cfc2944d3b/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=pynq0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/systemd/user.slice/user-1000.slice --node-labels= --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/systemd/user.slice/user-1000.slice --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key
INFO[2020-01-07T05:31:10.095630515Z] Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=pynq0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables
W0107 05:31:10.097487 2853 server.go:208] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I0107 05:31:10.194755 2853 server.go:406] Version: v1.16.3-k3s.2
W0107 05:31:10.255805 2853 proxier.go:597] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
I0107 05:31:10.351165 2853 flannel.go:91] Determining IP address of default interface
I0107 05:31:10.365027 2853 flannel.go:104] Using interface with name eth0 and address 192.168.1.248
E0107 05:31:10.380651 2853 machine.go:288] failed to get cache information for node 0: open /sys/devices/system/cpu/cpu0/cache: no such file or directory
INFO[2020-01-07T05:31:10.385836462Z] addresses labels has already been set successfully on node: pynq0
W0107 05:31:10.362271 2853 proxier.go:597] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
I0107 05:31:10.401012 2853 server.go:637] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
I0107 05:31:10.429811 2853 container_manager_linux.go:272] container manager verified user specified cgroup-root exists: []
I0107 05:31:10.426268 2853 kube.go:117] Waiting 10m0s for node controller to sync
I0107 05:31:10.426544 2853 kube.go:300] Starting kube subnet manager
I0107 05:31:10.431007 2853 container_manager_linux.go:277] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/user.slice/user-1000.slice SystemCgroupsName: KubeletCgroupsName:/systemd/user.slice/user-1000.slice ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I0107 05:31:10.475979 2853 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
I0107 05:31:10.480498 2853 container_manager_linux.go:312] Creating device plugin manager: true
I0107 05:31:10.481199 2853 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{kubelet.sock /var/lib/rancher/k3s/agent/kubelet/device-plugins/ map[] {0 0} <nil> {{} [0 0 0]} 0x213dc84 0x585bfb0 0x213e3ac map[] map[] map[] map[] map[] 0x6a99ac0 [] 0x585bfb0}
I0107 05:31:10.481942 2853 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0107 05:31:10.482927 2853 state_mem.go:84] [cpumanager] updated default cpuset: ""
I0107 05:31:10.483770 2853 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
I0107 05:31:10.484269 2853 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{{0 0} 0x585bfb0 10000000000 0x6acbb80 <nil> <nil> <nil> <nil> map[]}
I0107 05:31:10.485238 2853 kubelet.go:312] Watching apiserver
W0107 05:31:10.618741 2853 proxier.go:597] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0107 05:31:10.861013 2853 proxier.go:597] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
I0107 05:31:11.023912 2853 kuberuntime_manager.go:207] Container runtime containerd initialized, version: v1.3.0-k3s.5, apiVersion: v1alpha2
I0107 05:31:11.103158 2853 server.go:1066] Started kubelet
W0107 05:31:11.027097 2853 proxier.go:597] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
I0107 05:31:11.150660 2853 server.go:145] Starting to listen on 0.0.0.0:10250
I0107 05:31:11.223421 2853 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
I0107 05:31:11.224310 2853 status_manager.go:156] Starting to sync pod status with apiserver
I0107 05:31:11.224908 2853 kubelet.go:1822] Starting kubelet main sync loop.
I0107 05:31:11.226723 2853 kubelet.go:1839] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0107 05:31:11.252404 2853 volume_manager.go:249] Starting Kubelet Volume Manager
I0107 05:31:11.281797 2853 desired_state_of_world_populator.go:131] Desired state populator starts to run
I0107 05:31:11.313165 2853 server.go:354] Adding debug handlers to kubelet server.
I0107 05:31:11.353774 2853 kubelet.go:1839] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
E0107 05:31:11.389388 2853 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
E0107 05:31:11.393934 2853 kubelet.go:1302] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
E0107 05:31:11.417456 2853 kubelet_network_linux.go:53] Failed to ensure that nat chain KUBE-MARK-DROP exists: error creating chain "KUBE-MARK-DROP": exit status 3: modprobe: FATAL: Module ip_tables not found in directory /lib/modules/4.19.0-xilinx-v2019.1
iptables v1.8.3 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
I0107 05:31:11.489317 2853 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
E0107 05:31:11.503719 2853 kubelet.go:2267] node "pynq0" not found
I0107 05:31:11.621639 2853 kube.go:124] Node controller sync successful
I0107 05:31:11.671046 2853 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
I0107 05:31:11.652173 2853 kubelet.go:1839] skipping pod synchronization - container runtime status check may not have completed yet
E0107 05:31:11.843713 2853 kubelet.go:2267] node "pynq0" not found
I0107 05:31:11.902177 2853 kubelet_node_status.go:72] Attempting to register node pynq0
I0107 05:31:11.992234 2853 kuberuntime_manager.go:961] updating runtime config through cri with podcidr 10.42.1.0/24
FATA[2020-01-07T05:31:12.035386958Z] ipset v7.1: Cannot open session to kernel.
You don't have iptables in the kernel. Which OS is this?
@segator I am using the PetaLinux kernel, running on a Pynq-Z1 (Zynq SoC).
@JorisBolsens you need to ensure the kernel modules required by Kubernetes are loaded; that is your problem: Module ip_tables not found in directory /lib/modules/4.19.0-xilinx-v2019.1, plus ipset and others.
I think this OS is too lightweight to be able to run Kubernetes. Check the Kubernetes preflight output to see which modules you need.
@segator thanks, I will look at adding those.
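A hedged way to check up front whether a custom kernel (PetaLinux, Unraid, and similar) even has the relevant options is to grep its config, assuming the kernel exposes it via /proc/config.gz or a /boot/config-* file:
# look for the netfilter/ipset/ipvs options that kube-proxy and the network policy controller rely on
zcat /proc/config.gz 2>/dev/null | grep -E 'CONFIG_IP_SET|CONFIG_NETFILTER_XT_MATCH_CONNTRACK|CONFIG_IP_VS|CONFIG_IP_NF_IPTABLES'
grep -E 'CONFIG_IP_SET|CONFIG_NETFILTER_XT_MATCH_CONNTRACK|CONFIG_IP_VS|CONFIG_IP_NF_IPTABLES' /boot/config-$(uname -r) 2>/dev/null
Options reported as =y are built in, =m must be loaded with modprobe, and anything missing or "is not set" has to be built into the kernel first.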
Got a workaround with the server start option --disable-network-policy. ipset is used by the NetworkPolicy function, and in most scenarios it can be disabled.
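A sketch of that workaround, either invoking the server directly or going through the install script (INSTALL_K3S_EXEC simply passes extra arguments to the generated service, so take the exact invocation as an example rather than the only way to do it):
# direct invocation
k3s server --disable-network-policy
# or via the installer
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable-network-policy" sh -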
Resolved in v1.17.2-alpha3+k3s1.
Version: k3s version v0.10.0 (f9888ca3)
Describe the bug: the k3s server crashes on startup with "ipset v7.1: Cannot open session to kernel."
To Reproduce: run the install.sh script and then run
k3s server
Expected behavior: k3s should start and not crash.
Actual behavior: k3s crashes.
Additional context: The device is a Coral Dev Board (aarch64) with 1 GB RAM and 4x 1.5 GHz cores. OS: Mendel GNU/Linux 3 (Chef). Linux kernel: Linux xenial-shrimp 4.9.51-imx. Terminal output:
syslog:
I tried replacing ipset and iptables with different versions; however, this results in the same error.
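Swapping the userspace ipset and iptables binaries will not help if the kernel side is missing; a quick hedged check of whether the running kernel can service ipset at all:
# ip_set must be available as a module or built-in for any userspace ipset version to open a kernel session
modprobe ip_set && ipset list
lsmod | grep ip_set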