k3s-io / k3s

Lightweight Kubernetes
https://k3s.io
Apache License 2.0

K3s Server Error on Startup #6770

Closed. joesan closed this issue 1 year ago.

joesan commented 1 year ago

Environmental Info:

K3s Version:

joesan@m1:/etc/systemd/system$ k3s -version
k3s version v1.26.0+k3s1 (fae88176)
go version go1.19.4

Node(s) CPU architecture, OS, and Version:

joesan@m1:/etc/systemd/system$ uname -a
Linux m1 5.15.0-1012-raspi #14-Ubuntu SMP PREEMPT Fri Jun 24 13:10:28 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux

Cluster Configuration:

Just one master node

Describe the bug:

When I ran:

$ sudo k3s server

I ran into the following error:

joesan@m1:/etc/systemd/system$ sudo k3s server
INFO[0000] Starting k3s v1.26.0+k3s1 (fae88176)         
INFO[0000] Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s 
INFO[0000] Configuring database table schema and indexes, this may take a moment... 
INFO[0000] Database tables and indexes are up to date   
INFO[0000] Kine available at unix://kine.sock           
INFO[0000] Reconciling bootstrap data between datastore and disk 
INFO[0000] Tunnel server egress proxy mode: agent       
INFO[0000] Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --etcd-servers=unix://kine.sock --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key 
INFO[0000] Tunnel server egress proxy waiting for runtime core to become available 
INFO[0000] Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259 
INFO[0000] Waiting for API server to become available   
W0118 18:44:36.226379    1119 feature_gate.go:241] Setting GA feature gate JobTrackingWithFinalizers=true. It will be removed in a future release.
INFO[0000] Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true 
INFO[0000] Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false 
INFO[0000] Server node token is available at /var/lib/rancher/k3s/server/token 
INFO[0000] To join server node to cluster: k3s server -s https://192.168.1.100:6443 -t ${SERVER_NODE_TOKEN} 
INFO[0000] Agent node token is available at /var/lib/rancher/k3s/server/agent-token 
INFO[0000] To join agent node to cluster: k3s agent -s https://192.168.1.100:6443 -t ${AGENT_NODE_TOKEN} 
I0118 18:44:36.250062    1119 server.go:569] external host was not specified, using 192.168.1.100
I0118 18:44:36.254047    1119 server.go:171] Version: v1.26.0+k3s1
I0118 18:44:36.254238    1119 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
INFO[0000] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml   
INFO[0000] Run: k3s kubectl                             
INFO[0001] certificate CN=m1 signed by CN=k3s-server-ca@1673987047: notBefore=2023-01-17 20:24:07 +0000 UTC notAfter=2024-01-18 18:44:36 +0000 UTC 
INFO[0001] certificate CN=system:node:m1,O=system:nodes signed by CN=k3s-client-ca@1673987047: notBefore=2023-01-17 20:24:07 +0000 UTC notAfter=2024-01-18 18:44:36 +0000 UTC 
I0118 18:44:36.636727    1119 shared_informer.go:273] Waiting for caches to sync for node_authorizer
I0118 18:44:36.669020    1119 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0118 18:44:36.669435    1119 plugins.go:161] Loaded 12 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
INFO[0001] Module overlay was already loaded            
W0118 18:44:37.051030    1119 genericapiserver.go:660] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
I0118 18:44:37.057731    1119 instance.go:277] Using reconciler: lease
W0118 18:44:37.094882    1119 sysinfo.go:203] Nodes topology is not available, providing CPU topology
INFO[0001] Set sysctl 'net/ipv4/conf/all/forwarding' to 1 
INFO[0001] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 
INFO[0001] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 
INFO[0001] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 
INFO[0001] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log 
INFO[0001] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd 
I0118 18:44:37.864467    1119 instance.go:621] API group "internal.apiserver.k8s.io" is not enabled, skipping.
INFO[0002] Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory" 
I0118 18:44:38.409090    1119 instance.go:621] API group "resource.k8s.io" is not enabled, skipping.
W0118 18:44:39.076544    1119 genericapiserver.go:660] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W0118 18:44:39.076650    1119 genericapiserver.go:660] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
W0118 18:44:39.100818    1119 genericapiserver.go:660] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
W0118 18:44:39.130355    1119 genericapiserver.go:660] Skipping API autoscaling/v2beta1 because it has no resources.
W0118 18:44:39.130482    1119 genericapiserver.go:660] Skipping API autoscaling/v2beta2 because it has no resources.
W0118 18:44:39.149049    1119 genericapiserver.go:660] Skipping API batch/v1beta1 because it has no resources.
INFO[0003] Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory" 
W0118 18:44:39.164096    1119 genericapiserver.go:660] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W0118 18:44:39.178926    1119 genericapiserver.go:660] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W0118 18:44:39.179341    1119 genericapiserver.go:660] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
W0118 18:44:39.214517    1119 genericapiserver.go:660] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W0118 18:44:39.214635    1119 genericapiserver.go:660] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
W0118 18:44:39.227981    1119 genericapiserver.go:660] Skipping API node.k8s.io/v1beta1 because it has no resources.
W0118 18:44:39.228094    1119 genericapiserver.go:660] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0118 18:44:39.228430    1119 genericapiserver.go:660] Skipping API policy/v1beta1 because it has no resources.
W0118 18:44:39.302321    1119 genericapiserver.go:660] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W0118 18:44:39.302479    1119 genericapiserver.go:660] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0118 18:44:39.326032    1119 genericapiserver.go:660] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W0118 18:44:39.326157    1119 genericapiserver.go:660] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0118 18:44:39.359233    1119 genericapiserver.go:660] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0118 18:44:39.387201    1119 genericapiserver.go:660] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
W0118 18:44:39.387314    1119 genericapiserver.go:660] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0118 18:44:39.415050    1119 genericapiserver.go:660] Skipping API apps/v1beta2 because it has no resources.
W0118 18:44:39.415164    1119 genericapiserver.go:660] Skipping API apps/v1beta1 because it has no resources.
W0118 18:44:39.428167    1119 genericapiserver.go:660] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
W0118 18:44:39.428312    1119 genericapiserver.go:660] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
W0118 18:44:39.438707    1119 genericapiserver.go:660] Skipping API events.k8s.io/v1beta1 because it has no resources.
W0118 18:44:39.510944    1119 genericapiserver.go:660] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
INFO[0004] Containerd is now running                    
INFO[0004] Connecting to proxy                           url="wss://127.0.0.1:6443/v1-k3s/connect"
INFO[0004] Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=m1 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key 
INFO[0004] Handling backend connection request [m1]     
INFO[0004] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error 
INFO[0005] Tunnel server egress proxy waiting for runtime core to become available 
I0118 18:44:44.387515    1119 secure_serving.go:210] Serving securely on 127.0.0.1:6444
I0118 18:44:44.391321    1119 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0118 18:44:44.391513    1119 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
I0118 18:44:44.392051    1119 controller.go:80] Starting OpenAPI V3 AggregationController
I0118 18:44:44.391503    1119 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
I0118 18:44:44.393732    1119 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
I0118 18:44:44.395564    1119 customresource_discovery_controller.go:288] Starting DiscoveryController
I0118 18:44:44.395852    1119 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0118 18:44:44.393779    1119 apf_controller.go:361] Starting API Priority and Fairness config controller
I0118 18:44:44.397374    1119 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0118 18:44:44.397659    1119 autoregister_controller.go:141] Starting autoregister controller
I0118 18:44:44.397779    1119 cache.go:32] Waiting for caches to sync for autoregister controller
I0118 18:44:44.397939    1119 controller.go:83] Starting OpenAPI AggregationController
I0118 18:44:44.398156    1119 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0118 18:44:44.405013    1119 controller.go:121] Starting legacy_token_tracking_controller
I0118 18:44:44.405138    1119 shared_informer.go:273] Waiting for caches to sync for configmaps
I0118 18:44:44.405431    1119 controller.go:85] Starting OpenAPI controller
I0118 18:44:44.405628    1119 controller.go:85] Starting OpenAPI V3 controller
I0118 18:44:44.405762    1119 naming_controller.go:291] Starting NamingConditionController
I0118 18:44:44.405923    1119 establishing_controller.go:76] Starting EstablishingController
I0118 18:44:44.406081    1119 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0118 18:44:44.406174    1119 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0118 18:44:44.406308    1119 crd_finalizer.go:266] Starting CRDFinalizer
I0118 18:44:44.406430    1119 crdregistration_controller.go:111] Starting crd-autoregister controller
I0118 18:44:44.406524    1119 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
I0118 18:44:44.412293    1119 gc_controller.go:78] Starting apiserver lease garbage collector
I0118 18:44:44.416357    1119 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0118 18:44:44.416452    1119 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
I0118 18:44:44.416662    1119 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key"
I0118 18:44:44.420892    1119 available_controller.go:494] Starting AvailableConditionController
I0118 18:44:44.421010    1119 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0118 18:44:44.446409    1119 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
I0118 18:44:44.446941    1119 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
I0118 18:44:44.599208    1119 apf_controller.go:366] Running API Priority and Fairness config worker
I0118 18:44:44.599369    1119 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0118 18:44:44.606684    1119 shared_informer.go:280] Caches are synced for crd-autoregister
I0118 18:44:44.609626    1119 shared_informer.go:280] Caches are synced for configmaps
I0118 18:44:44.617347    1119 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0118 18:44:44.621279    1119 cache.go:39] Caches are synced for AvailableConditionController controller
I0118 18:44:44.637055    1119 shared_informer.go:280] Caches are synced for node_authorizer
I0118 18:44:44.683042    1119 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
E0118 18:44:44.694943    1119 controller.go:189] failed to update lease, error: Operation cannot be fulfilled on leases.coordination.k8s.io "kube-apiserver-zig7fsk2ufcmduh7f7z4r6lh7u": StorageError: invalid object, Code: 4, Key: /registry/leases/kube-system/kube-apiserver-zig7fsk2ufcmduh7f7z4r6lh7u, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 7e6e957d-72d9-4763-906a-127350ec52c0, UID in object meta: 
I0118 18:44:44.698954    1119 cache.go:39] Caches are synced for autoregister controller
I0118 18:44:44.700334    1119 cache.go:39] Caches are synced for APIServiceRegistrationController controller
E0118 18:44:44.727131    1119 controller.go:163] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
INFO[0009] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error 
I0118 18:44:45.441025    1119 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W0118 18:44:45.817595    1119 handler_proxy.go:106] no RequestInfo found in the context
E0118 18:44:45.817787    1119 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0118 18:44:45.817869    1119 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0118 18:44:45.817594    1119 handler_proxy.go:106] no RequestInfo found in the context
E0118 18:44:45.818144    1119 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0118 18:44:45.818956    1119 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
INFO[0010] Tunnel server egress proxy waiting for runtime core to become available 
INFO[0011] Waiting for cloud-controller-manager privileges to become available 
W0118 18:44:46.556563    1119 feature_gate.go:241] Setting GA feature gate JobTrackingWithFinalizers=true. It will be removed in a future release.
INFO[0011] Kube API server is now running               
INFO[0011] ETCD server is now running                   
INFO[0011] k3s is up and running                        
INFO[0011] Applying CRD addons.k3s.cattle.io            
INFO[0011] Applying CRD helmcharts.helm.cattle.io       
INFO[0011] Applying CRD helmchartconfigs.helm.cattle.io 
INFO[0011] Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-20.3.1+up20.3.0.tgz 
INFO[0011] Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-crd-20.3.1+up20.3.0.tgz 
INFO[0011] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml 
INFO[0011] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml 
INFO[0011] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml 
INFO[0011] Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml 
INFO[0011] Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml 
INFO[0011] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml 
INFO[0011] Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml 
INFO[0011] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml 
INFO[0011] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml 
INFO[0011] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml 
INFO[0011] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml 
INFO[0011] Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml 
Flag --cloud-provider has been deprecated, will be removed in 1.25 or later, in favor of removing cloud provider code from Kubelet.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
I0118 18:44:47.446749    1119 server.go:197] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
I0118 18:44:47.490314    1119 server.go:407] "Kubelet version" kubeletVersion="v1.26.0+k3s1"
I0118 18:44:47.490516    1119 server.go:409] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0118 18:44:47.505536    1119 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt"
W0118 18:44:47.518993    1119 sysinfo.go:203] Nodes topology is not available, providing CPU topology
W0118 18:44:47.523460    1119 machine.go:65] Cannot read vendor id correctly, set empty.
I0118 18:44:47.541009    1119 server.go:654] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
I0118 18:44:47.563280    1119 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I0118 18:44:47.564551    1119 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName:/k3s KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
I0118 18:44:47.565572    1119 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I0118 18:44:47.566957    1119 container_manager_linux.go:308] "Creating device plugin manager"
I0118 18:44:47.568531    1119 state_mem.go:36] "Initialized new in-memory state store"
I0118 18:44:47.628853    1119 kubelet.go:398] "Attempting to sync node with API server"
I0118 18:44:47.629004    1119 kubelet.go:286] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
I0118 18:44:47.632874    1119 kubelet.go:297] "Adding apiserver pod source"
INFO[0012] Stopped tunnel to 127.0.0.1:6443             
INFO[0012] Connecting to proxy                           url="wss://192.168.1.100:6443/v1-k3s/connect"
INFO[0012] Proxy done                                    err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"
I0118 18:44:47.639381    1119 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
INFO[0012] error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF 
INFO[0012] Annotations and labels have been set successfully on node: m1 
INFO[0012] Starting flannel with backend vxlan          
INFO[0012] Handling backend connection request [m1]     
INFO[0012] Tunnel authorizer set Kubelet Port 10250     
I0118 18:44:47.689535    1119 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="v1.6.12-k3s1" apiVersion="v1"
INFO[0012] Flannel found PodCIDR assigned for node m1   
I0118 18:44:47.715198    1119 server.go:1181] "Started kubelet"
I0118 18:44:47.734378    1119 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
I0118 18:44:47.736143    1119 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
I0118 18:44:47.741086    1119 server.go:451] "Adding debug handlers to kubelet server"
I0118 18:44:47.784092    1119 volume_manager.go:293] "Starting Kubelet Volume Manager"
I0118 18:44:47.790081    1119 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
E0118 18:44:47.795183    1119 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
E0118 18:44:47.795411    1119 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
INFO[0012] The interface wlan0 with ipv4 address 192.168.1.100 will be used by flannel 
E0118 18:44:47.827200    1119 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0118 18:44:47.838387    1119 kube.go:126] Waiting 10m0s for node controller to sync
I0118 18:44:47.850514    1119 kube.go:431] Starting kube subnet manager
E0118 18:44:47.873089    1119 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0118 18:44:47.886515    1119 container_manager_linux.go:945] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache"
I0118 18:44:47.979667    1119 kubelet_node_status.go:70] "Attempting to register node" node="m1"
E0118 18:44:47.992231    1119 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
INFO[0012] Starting /v1, Kind=Secret controller         
E0118 18:44:48.076993    1119 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
INFO[0012] Starting k3s.cattle.io/v1, Kind=Addon controller 
INFO[0012] Creating deploy event broadcaster            
INFO[0012] Updating TLS secret for kube-system/k3s-serving (count: 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.1.100:192.168.1.100 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/cn-m1:m1 listener.cattle.io/fingerprint:SHA1=8B3DD7B0AFE6E2785093B5B3789AB0DF676A220F] 
I0118 18:44:48.223759    1119 event.go:294] "Event occurred" object="kube-system/ccm" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
I0118 18:44:48.292989    1119 kubelet_node_status.go:108] "Node was previously registered" node="m1"
I0118 18:44:48.293739    1119 kubelet_node_status.go:73] "Successfully registered node" node="m1"
INFO[0012] Starting the netpol controller version v1.5.2-0.20221026101626-e01045262706, built on 2022-12-21T00:01:25Z, go1.19.4 
I0118 18:44:48.296470    1119 network_policy_controller.go:163] Starting network policy controller
I0118 18:44:48.343883    1119 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
I0118 18:44:48.403211    1119 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
E0118 18:44:48.458686    1119 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0118 18:44:48.462967    1119 setters.go:548] "Node became not ready" node="m1" condition={Type:Ready Status:False LastHeartbeatTime:2023-01-18 18:44:48.462712391 +0000 UTC m=+13.144141009 LastTransitionTime:2023-01-18 18:44:48.462712391 +0000 UTC m=+13.144141009 Reason:KubeletNotReady Message:[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]}
E0118 18:44:48.581158    1119 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
INFO[0013] Creating helm-controller event broadcaster   
I0118 18:44:48.770207    1119 apiserver.go:52] "Watching apiserver"
I0118 18:44:48.849863    1119 kube.go:133] Node controller sync successful
I0118 18:44:48.850843    1119 vxlan.go:138] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
FATA[0013] flannel exited: operation not supported
brandond commented 1 year ago

https://docs.k3s.io/advanced#raspberry-pi

Starting with Ubuntu 21.10, vxlan support on Raspberry Pi has been moved into a separate kernel module:

sudo apt install linux-modules-extra-raspi
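
A minimal sequence to apply that fix, assuming a Raspberry Pi running Ubuntu 21.10 or later and k3s started directly from the CLI as above (adjust the last step if k3s runs as a systemd service):

# Install the extra kernel modules package that now ships vxlan
sudo apt update
sudo apt install linux-modules-extra-raspi

# Reboot so the running kernel picks up the newly installed modules
sudo reboot

# After the reboot, confirm the vxlan module loads, then start the server again
sudo modprobe vxlan
lsmod | grep vxlan
sudo k3s server

If installing the extra modules package is not an option, k3s also supports selecting a different flannel backend (for example, sudo k3s server --flannel-backend=host-gw), which sidesteps the vxlan requirement; that is a workaround rather than the fix suggested above.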