k3s-io / k3s

Lightweight Kubernetes
https://k3s.io
Apache License 2.0

failed to open file hugetlb.64kB.limit_in_bytes on armbian #474

Closed: plutoid closed this issue 3 years ago

plutoid commented 5 years ago

Describe the bug
When I start the service with `k3s start` on Armbian, it fails with an error that it cannot open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes. On the OS filesystem, /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64KB.limit_in_bytes does exist; the only difference is that the real file name uses KB, not kB. I'm not sure which side is wrong here.
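
Not part of the original report, but a quick way to double-check which casing the kernel exposes on a given node; a minimal sketch in Go, assuming the standard cgroup v1 hugetlb controller mount at /sys/fs/cgroup/hugetlb:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// List the hugetlb controller's files to confirm which page-size
	// casing (64kB vs 64KB) the kernel exposes. Assumes the usual
	// cgroup v1 mount point for the hugetlb controller.
	entries, err := os.ReadDir("/sys/fs/cgroup/hugetlb")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		if strings.HasPrefix(e.Name(), "hugetlb.") {
			fmt.Println(e.Name()) // e.g. hugetlb.64KB.limit_in_bytes
		}
	}
}
```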

To Reproduce
Steps to reproduce the behavior: start with `k3s server`.

Expected behavior
Hoping someone can check whether this is an issue in k3s or in Kubernetes.

Screenshots

```
F0510 04:55:28.343119   17099 kubelet.go:1327] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
goroutine 4272 [running]:
github.com/rancher/k3s/vendor/k8s.io/klog.stacks(0x4000303100, 0x4002346500, 0x1ce, 0x4d0)
        /go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:828 +0xac
github.com/rancher/k3s/vendor/k8s.io/klog.(*loggingT).output(0x5790d60, 0x4000000003, 0x40012279d0, 0x54f5212, 0xa, 0x51a, 0x0)
        /go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:779 +0x2d8
github.com/rancher/k3s/vendor/k8s.io/klog.(*loggingT).printf(0x5790d60, 0x4000000003, 0x2dc8077, 0x23, 0x4003f09d08, 0x1, 0x1)
        /go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:678 +0x114
github.com/rancher/k3s/vendor/k8s.io/klog.Fatalf(...)
        /go/src/github.com/rancher/k3s/vendor/k8s.io/klog/klog.go:1207
github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).initializeRuntimeDependentModules(0x4001a30d00)
        /go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1306 +0x27c
sync.(*Once).Do(0x4001a31410, 0x4003d9de58)
        /usr/local/go/src/sync/once.go:44 +0xc4
github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).updateRuntimeUp(0x4001a30d00)
        /go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:2119 +0x330
github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0x4001b1a180)
        /go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x50
github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001b1a180, 0x12a05f200, 0x0, 0x1, 0x40000b21e0)
        /go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xb8
github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x4001b1a180, 0x12a05f200, 0x40000b21e0)
        /go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x48
created by github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).Run
        /go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1350 +0x10c
```

Additional context
OS: Armbian
Hardware: Amlogic S905D
Kernel: Linux aml 5.0.2-aml-s905 #5.77 SMP PREEMPT Mon Apr 1 17:41:33 MSK 2019 aarch64 GNU/Linux
Image file: Armbian 5.77 for S905: https://yadi.sk/d/pHxaRAs-tZiei

k3s version: v0.5.0 / v0.5.0-rc4 ..

A similar report: https://github.com/kubernetes/kubernetes/issues/77169
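
For context, the upstream thread traces the mismatch to how the page-size string in the cgroup file name is built. A minimal sketch of the suspected behavior, assuming libcontainer's then-current GetHugePageSize logic, which formatted sizes via go-units with a unit list containing lowercase "kB", while the kernel names its hugetlb cgroup files with uppercase "KB":

```go
package main

import (
	"fmt"

	units "github.com/docker/go-units"
)

func main() {
	// Unit list as used by libcontainer's GetHugePageSize() at the
	// time (note the lowercase "kB").
	sizeList := []string{"B", "kB", "MB", "GB", "TB", "PB"}

	// 64 KiB huge pages: the default huge page size on arm64 kernels
	// built with 64K base pages.
	const pageSize = 64 * 1024

	name := units.CustomSize("%g%s", float64(pageSize), 1024.0, sizeList)

	fmt.Printf("kubelet writes:  hugetlb.%s.limit_in_bytes\n", name) // hugetlb.64kB.limit_in_bytes
	fmt.Println("kernel exposes:  hugetlb.64KB.limit_in_bytes")
}
```

For 2 MB huge pages both spellings agree on "2MB", which would explain why x86_64 nodes were unaffected and only arm64's 64 KiB page size hit this.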

garyschulte commented 5 years ago

Getting the same issue with k3s-arm64 v0.6.1 on Armbian 5.88.

MarkSchmitt commented 5 years ago

With k3s v0.9.0-rc2 it seems to be fixed; according to upstream, the fix was backported into k8s 1.15 :) I was able to successfully join an x86_64 k3s cluster (running k8s 1.14) from my Rock64 arm64 box running kernel 5.2.7 and spawn a busybox pod on it. I haven't done more testing yet.

brandond commented 3 years ago

Closing due to age.