applike-ss closed this issue 2 years ago
Hi @applike-ss, thanks for opening this issue! Now those logs look super weird O.o
TRAC[0001] NodeWaitForLogMessage: Node 'k3d-k3s-default-server-0' waiting for log message 'k3s is up and running' since '2021-11-30 11:31:51 +0000 UTC'
INFO[0003] Deleting cluster 'k3s-default'
Where the hell does this jump from "creating" to "deleting" come from? I cannot see a way this could happen without any error/fatal log message :thinking: Are there any log lines missing from your output?
I'm on the exact same Kernel, Docker Client/Server, k3d/K3s versions on Ubuntu (PopOS), without any problems :thinking:
@iwilltry42 I submitted the complete log, nothing's stripped out or anything. Yeah, I had a "not starting up" issue with k3d version 4 as well, and since I wanted to try some stuff out on a temporary k8s cluster, I thought I'd try the update and check if that works. Now that it's not working (and with me getting more devops work recently), it would be nice to get it working somehow.
Is there anything I can try out, or any information I could forward to you?
So I'm really lost here. There's no indication of failure, k3d doesn't log anything about its intention to roll back, there's no `k3d cluster delete` command issued, and there's no final log line saying that the cluster was fully removed.
Something's really fishy here.
Could be connected to the custom kernel you're using, but even then... where are the logs? :man_shrugging:
Can you maybe try again with `--no-rollback` appended to the cluster create command, then check `docker ps -a` for the server-0 container and get the logs from it?
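For reference, a minimal sketch of that sequence (assuming the default cluster name, so the server container is `k3d-k3s-default-server-0`):

```bash
# create the cluster, but keep failed containers around instead of rolling back
k3d cluster create --no-rollback

# list all containers (including exited ones) to find the server node
docker ps -a --filter "name=k3d-k3s-default-server-0"

# fetch the logs from that container, even if it has exited
docker logs k3d-k3s-default-server-0
```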
I just checked again and my output redirection was broken. I updated the logs in the first post. SORRY!!!
Ah now that makes some more sense :grin:
Then please do `k3d cluster create --trace --no-rollback`, and once it has finished (and failed), fetch the logs from the failed container with `docker logs k3d-k3s-default-server-0` and paste them here for inspection :+1:
These are the logs:
time="2021-12-03T09:32:13.999417438Z" level=info msg="Starting k3s v1.21.5+k3s2 (724ef700)"
time="2021-12-03T09:32:14.025676132Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2021-12-03T09:32:14.025690318Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2021-12-03T09:32:14.034020999Z" level=info msg="Database tables and indexes are up to date"
time="2021-12-03T09:32:14.034885000Z" level=info msg="Kine listening on unix://kine.sock"
time="2021-12-03T09:32:14.043549140Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.043896877Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.044228272Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.044602268Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.044935176Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.045269587Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.045621912Z" level=info msg="certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.046185926Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.046737377Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.047281534Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.047590737Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.048114806Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.343093231Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.343426601Z" level=info msg="Active TLS secret (ver=) (count 10): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.26.0.3:172.26.0.3 listener.cattle.io/cn-k3d-k3s-default-server-0:k3d-k3s-default-server-0 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=03F7DE1B39B8E11459A58B40E9997ABEBC255F34]"
time="2021-12-03T09:32:14.347475692Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I1203 09:32:14.348109 7 server.go:656] external host was not specified, using 172.26.0.3
I1203 09:32:14.348248 7 server.go:195] Version: v1.21.5+k3s2
I1203 09:32:14.572112 7 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I1203 09:32:14.572778 7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:14.572786 7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 09:32:14.573418 7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:14.573424 7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 09:32:14.585721 7 instance.go:283] Using reconciler: lease
I1203 09:32:14.609202 7 rest.go:130] the default service ipfamily for this cluster is: IPv4
W1203 09:32:14.893117 7 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:14.900177 7 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:14.902598 7 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:14.906576 7 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:14.908215 7 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:14.912147 7 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
W1203 09:32:14.912154 7 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
I1203 09:32:14.918947 7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:14.918956 7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
time="2021-12-03T09:32:14.925951867Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
time="2021-12-03T09:32:14.925997994Z" level=info msg="Waiting for API server to become available"
time="2021-12-03T09:32:14.926308550Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2021-12-03T09:32:14.926620298Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --port=0 --profiling=false"
time="2021-12-03T09:32:14.927054888Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2021-12-03T09:32:14.927078102Z" level=info msg="To join node to cluster: k3s agent -s https://172.26.0.3:6443 -t ${NODE_TOKEN}"
time="2021-12-03T09:32:14.927618642Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2021-12-03T09:32:14.927685268Z" level=info msg="Run: k3s kubectl"
time="2021-12-03T09:32:14.941807679Z" level=info msg="Cluster-Http-Server 2021/12/03 09:32:14 http: TLS handshake error from 127.0.0.1:57096: remote error: tls: bad certificate"
time="2021-12-03T09:32:14.944417153Z" level=info msg="Cluster-Http-Server 2021/12/03 09:32:14 http: TLS handshake error from 127.0.0.1:57102: remote error: tls: bad certificate"
time="2021-12-03T09:32:14.949799140Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.951117538Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:14 +0000 UTC"
time="2021-12-03T09:32:14.979282639Z" level=info msg="Module overlay was already loaded"
time="2021-12-03T09:32:14.979297517Z" level=info msg="Module nf_conntrack was already loaded"
time="2021-12-03T09:32:14.979305472Z" level=info msg="Module br_netfilter was already loaded"
time="2021-12-03T09:32:14.979314339Z" level=info msg="Module iptable_nat was already loaded"
time="2021-12-03T09:32:14.982543603Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
time="2021-12-03T09:32:14.982597886Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_max' to 786432"
time="2021-12-03T09:32:14.982610750Z" level=error msg="Failed to set sysctl: open /proc/sys/net/netfilter/nf_conntrack_max: permission denied"
time="2021-12-03T09:32:14.982630908Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
time="2021-12-03T09:32:14.983145760Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2021-12-03T09:32:14.983234607Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2021-12-03T09:32:15.984769232Z" level=info msg="Containerd is now running"
time="2021-12-03T09:32:15.992717953Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-12-03T09:32:15.994211702Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
time="2021-12-03T09:32:15.994503633Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2021-12-03T09:32:15.994543939Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2021-12-03T09:32:15.994919868Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
W1203 09:32:15.995128 7 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I1203 09:32:15.995421 7 server.go:436] "Kubelet version" kubeletVersion="v1.21.5+k3s2"
W1203 09:32:15.995424 7 proxier.go:653] Failed to read file /lib/modules/5.15.5-xanmod1-tt/modules.builtin with error open /lib/modules/5.15.5-xanmod1-tt/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:15.995869 7 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:15.996118 7 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:15.996366 7 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:15.996617 7 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:15.996863 7 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
E1203 09:32:16.000413 7 node.go:161] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
I1203 09:32:16.009533 7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
E1203 09:32:16.009537 7 server.go:288] "Failed to run kubelet" err="failed to run Kubelet: mountpoint for cpu not found"
time="2021-12-03T09:32:16.550998125Z" level=info msg="Starting k3s v1.21.5+k3s2 (724ef700)"
time="2021-12-03T09:32:16.551165060Z" level=info msg="Cluster bootstrap already complete"
time="2021-12-03T09:32:16.558429649Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:16 +0000 UTC"
time="2021-12-03T09:32:16.559220382Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:16 +0000 UTC"
time="2021-12-03T09:32:16.559939328Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:16 +0000 UTC"
time="2021-12-03T09:32:16.560642245Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2021-12-03T09:32:16.560669717Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2021-12-03T09:32:16.560735240Z" level=info msg="Database tables and indexes are up to date"
time="2021-12-03T09:32:16.560796957Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:16 +0000 UTC"
time="2021-12-03T09:32:16.560947691Z" level=info msg="Active TLS secret (ver=) (count 11): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.26.0.2:172.26.0.2 listener.cattle.io/cn-172.26.0.3:172.26.0.3 listener.cattle.io/cn-k3d-k3s-default-server-0:k3d-k3s-default-server-0 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=BD9E13ED38C7AED3A9E1D0C6473F6E14913CEDD7]"
time="2021-12-03T09:32:16.561514731Z" level=info msg="Kine listening on unix://kine.sock"
time="2021-12-03T09:32:16.561602607Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I1203 09:32:16.562459 7 server.go:656] external host was not specified, using 172.26.0.2
I1203 09:32:16.562639 7 server.go:195] Version: v1.21.5+k3s2
I1203 09:32:16.564804 7 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I1203 09:32:16.565451 7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:16.565457 7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 09:32:16.566118 7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:16.566128 7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 09:32:16.578855 7 instance.go:283] Using reconciler: lease
I1203 09:32:16.598197 7 rest.go:130] the default service ipfamily for this cluster is: IPv4
W1203 09:32:16.872350 7 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:16.879085 7 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:16.881441 7 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:16.885385 7 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:16.887035 7 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:16.890216 7 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
W1203 09:32:16.890223 7 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
I1203 09:32:16.899823 7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:16.899842 7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
time="2021-12-03T09:32:16.908305283Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
time="2021-12-03T09:32:16.908360077Z" level=info msg="Waiting for API server to become available"
time="2021-12-03T09:32:16.908642660Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2021-12-03T09:32:16.908958586Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --port=0 --profiling=false"
time="2021-12-03T09:32:16.909430206Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2021-12-03T09:32:16.909442589Z" level=info msg="To join node to cluster: k3s agent -s https://172.26.0.2:6443 -t ${NODE_TOKEN}"
time="2021-12-03T09:32:16.910002546Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2021-12-03T09:32:16.910031941Z" level=info msg="Run: k3s kubectl"
time="2021-12-03T09:32:16.919786830Z" level=info msg="Cluster-Http-Server 2021/12/03 09:32:16 http: TLS handshake error from 127.0.0.1:57134: remote error: tls: bad certificate"
time="2021-12-03T09:32:16.922155621Z" level=info msg="Cluster-Http-Server 2021/12/03 09:32:16 http: TLS handshake error from 127.0.0.1:57140: remote error: tls: bad certificate"
time="2021-12-03T09:32:16.927353630Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:16 +0000 UTC"
time="2021-12-03T09:32:16.928607737Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:16 +0000 UTC"
time="2021-12-03T09:32:16.947817388Z" level=info msg="Module overlay was already loaded"
time="2021-12-03T09:32:16.947831595Z" level=info msg="Module nf_conntrack was already loaded"
time="2021-12-03T09:32:16.947840221Z" level=info msg="Module br_netfilter was already loaded"
time="2021-12-03T09:32:16.947848076Z" level=info msg="Module iptable_nat was already loaded"
time="2021-12-03T09:32:16.950719895Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_max' to 786432"
time="2021-12-03T09:32:16.950734022Z" level=error msg="Failed to set sysctl: open /proc/sys/net/netfilter/nf_conntrack_max: permission denied"
time="2021-12-03T09:32:16.950748369Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
time="2021-12-03T09:32:16.950771934Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
time="2021-12-03T09:32:16.951266036Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2021-12-03T09:32:16.951365293Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2021-12-03T09:32:17.952660126Z" level=info msg="Containerd is now running"
time="2021-12-03T09:32:17.958507621Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-12-03T09:32:17.959890541Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
time="2021-12-03T09:32:17.960159638Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2021-12-03T09:32:17.960197921Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2021-12-03T09:32:17.960577858Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
W1203 09:32:17.960784 7 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I1203 09:32:17.961100 7 server.go:436] "Kubelet version" kubeletVersion="v1.21.5+k3s2"
W1203 09:32:17.961104 7 proxier.go:653] Failed to read file /lib/modules/5.15.5-xanmod1-tt/modules.builtin with error open /lib/modules/5.15.5-xanmod1-tt/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:17.961712 7 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:17.962097 7 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:17.962389 7 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:17.962662 7 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:17.962948 7 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
E1203 09:32:17.967085 7 node.go:161] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
I1203 09:32:17.972283 7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
E1203 09:32:17.972310 7 server.go:288] "Failed to run kubelet" err="failed to run Kubelet: mountpoint for cpu not found"
time="2021-12-03T09:32:18.573446395Z" level=info msg="Starting k3s v1.21.5+k3s2 (724ef700)"
time="2021-12-03T09:32:18.573579356Z" level=info msg="Cluster bootstrap already complete"
time="2021-12-03T09:32:18.581904998Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2021-12-03T09:32:18.581920186Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2021-12-03T09:32:18.581970952Z" level=info msg="Database tables and indexes are up to date"
time="2021-12-03T09:32:18.582678868Z" level=info msg="Kine listening on unix://kine.sock"
time="2021-12-03T09:32:18.582746085Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I1203 09:32:18.583376 6 server.go:656] external host was not specified, using 172.26.0.2
I1203 09:32:18.583505 6 server.go:195] Version: v1.21.5+k3s2
I1203 09:32:18.585784 6 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I1203 09:32:18.586453 6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:18.586467 6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 09:32:18.587101 6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:18.587107 6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 09:32:18.601636 6 instance.go:283] Using reconciler: lease
I1203 09:32:18.629524 6 rest.go:130] the default service ipfamily for this cluster is: IPv4
W1203 09:32:18.888002 6 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:18.894811 6 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:18.897218 6 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:18.901211 6 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:18.902878 6 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:18.906051 6 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
W1203 09:32:18.906058 6 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
I1203 09:32:18.912941 6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:18.912951 6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
time="2021-12-03T09:32:18.925635475Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
time="2021-12-03T09:32:18.925692643Z" level=info msg="Waiting for API server to become available"
time="2021-12-03T09:32:18.926014160Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2021-12-03T09:32:18.926355564Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --port=0 --profiling=false"
time="2021-12-03T09:32:18.926863693Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2021-12-03T09:32:18.926875836Z" level=info msg="To join node to cluster: k3s agent -s https://172.26.0.2:6443 -t ${NODE_TOKEN}"
time="2021-12-03T09:32:18.927459337Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2021-12-03T09:32:18.927500264Z" level=info msg="Run: k3s kubectl"
time="2021-12-03T09:32:18.937576830Z" level=info msg="Cluster-Http-Server 2021/12/03 09:32:18 http: TLS handshake error from 127.0.0.1:57166: remote error: tls: bad certificate"
time="2021-12-03T09:32:18.939880437Z" level=info msg="Cluster-Http-Server 2021/12/03 09:32:18 http: TLS handshake error from 127.0.0.1:57174: remote error: tls: bad certificate"
time="2021-12-03T09:32:18.944735099Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:18 +0000 UTC"
time="2021-12-03T09:32:18.945966102Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:18 +0000 UTC"
time="2021-12-03T09:32:18.965182095Z" level=info msg="Module overlay was already loaded"
time="2021-12-03T09:32:18.965197294Z" level=info msg="Module nf_conntrack was already loaded"
time="2021-12-03T09:32:18.965209216Z" level=info msg="Module br_netfilter was already loaded"
time="2021-12-03T09:32:18.965219285Z" level=info msg="Module iptable_nat was already loaded"
time="2021-12-03T09:32:18.968103578Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
time="2021-12-03T09:32:18.968141830Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
time="2021-12-03T09:32:18.968192786Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_max' to 786432"
time="2021-12-03T09:32:18.968204448Z" level=error msg="Failed to set sysctl: open /proc/sys/net/netfilter/nf_conntrack_max: permission denied"
time="2021-12-03T09:32:18.968770306Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2021-12-03T09:32:18.968862410Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2021-12-03T09:32:19.970762855Z" level=info msg="Containerd is now running"
time="2021-12-03T09:32:19.977056663Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-12-03T09:32:19.978778031Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
time="2021-12-03T09:32:19.979052499Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2021-12-03T09:32:19.979096843Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2021-12-03T09:32:19.979455159Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
W1203 09:32:19.979641 6 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I1203 09:32:19.979916 6 server.go:436] "Kubelet version" kubeletVersion="v1.21.5+k3s2"
W1203 09:32:19.980090 6 proxier.go:653] Failed to read file /lib/modules/5.15.5-xanmod1-tt/modules.builtin with error open /lib/modules/5.15.5-xanmod1-tt/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:19.980497 6 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:19.980720 6 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:19.980932 6 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:19.981136 6 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:19.981350 6 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
E1203 09:32:19.984732 6 node.go:161] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
I1203 09:32:19.990866 6 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
E1203 09:32:19.990917 6 server.go:288] "Failed to run kubelet" err="failed to run Kubelet: mountpoint for cpu not found"
time="2021-12-03T09:32:20.792536597Z" level=info msg="Starting k3s v1.21.5+k3s2 (724ef700)"
time="2021-12-03T09:32:20.792689846Z" level=info msg="Cluster bootstrap already complete"
time="2021-12-03T09:32:20.802483799Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2021-12-03T09:32:20.802500650Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2021-12-03T09:32:20.802552468Z" level=info msg="Database tables and indexes are up to date"
time="2021-12-03T09:32:20.803246067Z" level=info msg="Kine listening on unix://kine.sock"
time="2021-12-03T09:32:20.803313324Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I1203 09:32:20.804068 7 server.go:656] external host was not specified, using 172.26.0.2
I1203 09:32:20.804206 7 server.go:195] Version: v1.21.5+k3s2
I1203 09:32:20.806393 7 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I1203 09:32:20.807175 7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:20.807182 7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 09:32:20.807844 7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:20.807850 7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1203 09:32:20.821644 7 instance.go:283] Using reconciler: lease
I1203 09:32:20.846286 7 rest.go:130] the default service ipfamily for this cluster is: IPv4
W1203 09:32:21.121182 7 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:21.127971 7 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:21.130317 7 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:21.134255 7 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:21.135909 7 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W1203 09:32:21.139070 7 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
W1203 09:32:21.139076 7 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
I1203 09:32:21.145928 7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1203 09:32:21.145935 7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
time="2021-12-03T09:32:21.153049926Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
time="2021-12-03T09:32:21.153095692Z" level=info msg="Waiting for API server to become available"
time="2021-12-03T09:32:21.155299762Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2021-12-03T09:32:21.156066609Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --port=0 --profiling=false"
time="2021-12-03T09:32:21.157583451Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2021-12-03T09:32:21.157618056Z" level=info msg="To join node to cluster: k3s agent -s https://172.26.0.2:6443 -t ${NODE_TOKEN}"
time="2021-12-03T09:32:21.158814133Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2021-12-03T09:32:21.158930533Z" level=info msg="Run: k3s kubectl"
time="2021-12-03T09:32:21.169507143Z" level=info msg="Cluster-Http-Server 2021/12/03 09:32:21 http: TLS handshake error from 127.0.0.1:57200: remote error: tls: bad certificate"
time="2021-12-03T09:32:21.172175298Z" level=info msg="Cluster-Http-Server 2021/12/03 09:32:21 http: TLS handshake error from 127.0.0.1:57206: remote error: tls: bad certificate"
time="2021-12-03T09:32:21.178178907Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:21 +0000 UTC"
time="2021-12-03T09:32:21.179659001Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1638523934: notBefore=2021-12-03 09:32:14 +0000 UTC notAfter=2022-12-03 09:32:21 +0000 UTC"
time="2021-12-03T09:32:21.199093476Z" level=info msg="Module overlay was already loaded"
time="2021-12-03T09:32:21.199121289Z" level=info msg="Module nf_conntrack was already loaded"
time="2021-12-03T09:32:21.199136798Z" level=info msg="Module br_netfilter was already loaded"
time="2021-12-03T09:32:21.199151516Z" level=info msg="Module iptable_nat was already loaded"
time="2021-12-03T09:32:21.202299115Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_max' to 786432"
time="2021-12-03T09:32:21.202323562Z" level=error msg="Failed to set sysctl: open /proc/sys/net/netfilter/nf_conntrack_max: permission denied"
time="2021-12-03T09:32:21.202354720Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
time="2021-12-03T09:32:21.202382523Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
time="2021-12-03T09:32:21.202901542Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2021-12-03T09:32:21.202995810Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2021-12-03T09:32:22.204432080Z" level=info msg="Containerd is now running"
time="2021-12-03T09:32:22.211662745Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-12-03T09:32:22.212862770Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
time="2021-12-03T09:32:22.213158097Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2021-12-03T09:32:22.213205066Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2021-12-03T09:32:22.213588439Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
W1203 09:32:22.213813 7 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I1203 09:32:22.214152 7 server.go:436] "Kubelet version" kubeletVersion="v1.21.5+k3s2"
W1203 09:32:22.214260 7 proxier.go:653] Failed to read file /lib/modules/5.15.5-xanmod1-tt/modules.builtin with error open /lib/modules/5.15.5-xanmod1-tt/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:22.214680 7 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:22.214892 7 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:22.215103 7 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:22.215278 7 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1203 09:32:22.215441 7 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
E1203 09:32:22.219852 7 node.go:161] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
I1203 09:32:22.227749 7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
E1203 09:32:22.227801 7 server.go:288] "Failed to run kubelet" err="failed to run Kubelet: mountpoint for cpu not found"
Ah, well there we go: E1203 09:32:22.227801 7 server.go:288] "Failed to run kubelet" err="failed to run Kubelet: mountpoint for cpu not found"
So a quick googling of this error brought me from SO to this potential workaround, which you may want to try :+1:
EDIT: Actually I could've seen this from your docker info output below, sorry!
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpu shares support
EDIT 2: In your case it's probably related to this: https://github.com/xanmod/linux/issues/3
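For completeness, a quick way to check whether the cpu cgroup controller is present on the host at all (a minimal sketch, assuming cgroup v1):

# list the controllers the kernel knows about; the 'enabled' column should be 1 for cpu
cat /proc/cgroups
# check which controllers are actually mounted
grep cgroup /proc/self/mounts

On the affected cacule/tt kernels the cpu line is presumably missing or shows enabled=0, which would match the kubelet not finding a cpu mountpoint.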
Thanks for your help! I wonder if that is still up-to-date, because xanmod has moved from cacule to the task type (TT) scheduler. Will try it anyway and let you know :)
I booted with swapaccount=1, but still get the three warnings that you mentioned from docker info - and k3d still doesn't start (obviously for the same reasons).
cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-5.15.5-xanmod1-tt root=UUID=b5bc341e-7e7f-490e-982f-d7773f56cd0e ro nosplash elevator=noop swapaccount=1 DEEPIN_GFXMODE=
EDIT: The workaround mentioned first also didn't work.
I'm also affected by this.
time="2021-12-03T22:27:59.196695835Z" level=warning msg="Failed to find cpuset cgroup, you may need to add \"cgroup_enable=cpuset\" to your linux cmdline (/b
oot/cmdline.txt on a Raspberry Pi)"
time="2021-12-03T22:27:59.196725418Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your lin
ux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
time="2021-12-03T22:27:59.196767460Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your lin
ux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
Looks like I picked the wrong day to update docker...
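A sketch of how to confirm what that fatal message is complaining about (again assuming cgroup v1) — the memory controller has to show up as enabled before k3s can start:

# the 'enabled' column (last field) should be 1 for the memory controller
grep -w memory /proc/cgroups
# boot parameters currently in effect, to verify cgroup_memory=1 cgroup_enable=memory made it in
cat /proc/cmdline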
The solution for me was to switch to the linux-xanmod kernel package, which uses CFS, so I no longer have the cacule/tt scheduler - but k3d is working nicely so far :-)
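For anyone who wants to verify a candidate kernel before rebooting into it, a rough check (assuming the distro ships its config under /boot, or the kernel exposes /proc/config.gz) is to look for the CFS options that back the cpu cgroup controller — these correspond directly to the "No cpu cfs quota/period/shares support" warnings from docker info:

# these should be '=y' for CPU shares and CFS quota support
grep -E 'CONFIG_FAIR_GROUP_SCHED|CONFIG_CFS_BANDWIDTH' /boot/config-$(uname -r)
# alternatively, if the running kernel exposes its config:
zgrep -E 'CONFIG_FAIR_GROUP_SCHED|CONFIG_CFS_BANDWIDTH' /proc/config.gz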
What did you do

- How was the cluster created?
  - k3d cluster create (output appended was generated with --trace)
- What did you do afterwards?

What did you expect to happen

Cluster should come up and be usable

Screenshots or terminal output

Which OS & Architecture

Which version of k3d

Which version of docker

- output of docker version and docker info: