k3s-io / k3s: Lightweight Kubernetes
https://k3s.io (Apache License 2.0)

k3s server crashes on startup with ipset v7.1: Cannot open session to kernel. #959

Closed: hacker-h closed this issue 4 years ago

hacker-h commented 5 years ago

Version: k3s version v0.10.0 (f9888ca3)

Describe the bug
k3s server crashes on startup with "ipset v7.1: Cannot open session to kernel."

To Reproduce
Run the install.sh script, then run k3s server.
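
For reference, a minimal sketch of these two steps, assuming the standard k3s install script served from https://get.k3s.io (the report does not state how install.sh was obtained):

# curl -sfL https://get.k3s.io | sh -
# k3s server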

Expected behavior
k3s should start and not crash.

Actual behavior
k3s crashes.

Additional context
The device is a Coral Dev Board (aarch64) with 1 GB RAM and 4x 1.5 GHz cores.
OS: Mendel GNU/Linux 3 (Chef)
Kernel: Linux xenial-shrimp 4.9.51-imx

Terminal output:

# k3s server
INFO[2019-10-25T16:14:23.021874359Z] Starting k3s v0.10.0 (f9888ca3)              
INFO[2019-10-25T16:14:23.030106985Z] Kine listening on unix://kine.sock           
INFO[2019-10-25T16:14:23.034439838Z] Fetching bootstrap data from etcd            
INFO[2019-10-25T16:14:23.112319123Z] Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key 
Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production environments.
I1025 16:14:23.114883   10226 server.go:650] external host was not specified, using 10.100.34.91
I1025 16:14:23.116001   10226 server.go:162] Version: v1.16.2-k3s.1
I1025 16:14:23.139064   10226 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1025 16:14:23.139402   10226 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I1025 16:14:23.142489   10226 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1025 16:14:23.142730   10226 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I1025 16:14:23.264858   10226 master.go:259] Using reconciler: lease
I1025 16:14:23.359099   10226 rest.go:115] the default service ipfamily for this cluster is: IPv4
W1025 16:14:24.544964   10226 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
W1025 16:14:24.621309   10226 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1025 16:14:24.699489   10226 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1025 16:14:24.714405   10226 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1025 16:14:24.763767   10226 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1025 16:14:24.848953   10226 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1025 16:14:24.849084   10226 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1025 16:14:24.891012   10226 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1025 16:14:24.891128   10226 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
INFO[2019-10-25T16:14:24.912033055Z] Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0 
INFO[2019-10-25T16:14:24.916101308Z] Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true 
I1025 16:14:24.936095   10226 controllermanager.go:161] Version: v1.16.2-k3s.1
I1025 16:14:24.937684   10226 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I1025 16:14:24.939704   10226 server.go:143] Version: v1.16.2-k3s.1
I1025 16:14:24.940119   10226 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W1025 16:14:24.945219   10226 authorization.go:47] Authorization is disabled
W1025 16:14:24.945474   10226 authentication.go:79] Authentication is disabled
I1025 16:14:24.945646   10226 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I1025 16:14:34.069325   10226 secure_serving.go:123] Serving securely on 127.0.0.1:6444
I1025 16:14:34.069517   10226 available_controller.go:383] Starting AvailableConditionController
I1025 16:14:34.069590   10226 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1025 16:14:34.069800   10226 controller.go:81] Starting OpenAPI AggregationController
I1025 16:14:34.071302   10226 apiservice_controller.go:94] Starting APIServiceRegistrationController
I1025 16:14:34.071413   10226 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1025 16:14:34.071511   10226 autoregister_controller.go:140] Starting autoregister controller
I1025 16:14:34.071539   10226 cache.go:32] Waiting for caches to sync for autoregister controller
I1025 16:14:34.072598   10226 crd_finalizer.go:274] Starting CRDFinalizer
I1025 16:14:34.075837   10226 controller.go:85] Starting OpenAPI controller
I1025 16:14:34.076246   10226 customresource_discovery_controller.go:208] Starting DiscoveryController
I1025 16:14:34.076721   10226 naming_controller.go:288] Starting NamingConditionController
I1025 16:14:34.077002   10226 establishing_controller.go:73] Starting EstablishingController
I1025 16:14:34.077297   10226 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I1025 16:14:34.077583   10226 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1025 16:14:34.137103   10226 crdregistration_controller.go:111] Starting crd-autoregister controller
I1025 16:14:34.137298   10226 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
E1025 16:14:34.262650   10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1025 16:14:34.263201   10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1025 16:14:34.263485   10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1025 16:14:34.263754   10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1025 16:14:34.264168   10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1025 16:14:34.264411   10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1025 16:14:34.264653   10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1025 16:14:34.264884   10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1025 16:14:34.265110   10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1025 16:14:34.265356   10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1025 16:14:34.265586   10226 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
INFO[2019-10-25T16:14:34.274158675Z] Running cloud-controller-manager --allocate-node-cidrs=true --allow-untagged-cloud=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m --secure-port=0 
Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.
I1025 16:14:34.278846   10226 cache.go:39] Caches are synced for AvailableConditionController controller
I1025 16:14:34.291702   10226 controllermanager.go:117] Version: v1.16.2-k3s.1
W1025 16:14:34.291862   10226 controllermanager.go:129] detected a cluster without a ClusterID.  A ClusterID will be required in the future.  Please tag your cluster to avoid any future issues
E1025 16:14:34.300465   10226 core.go:85] Failed to start service controller: the cloud provider does not support external load balancers
W1025 16:14:34.300612   10226 controllermanager.go:241] Skipping "service"
W1025 16:14:34.300652   10226 core.go:103] configure-cloud-routes is set, but cloud provider does not support routes. Will not configure cloud provider routes.
W1025 16:14:34.300685   10226 controllermanager.go:241] Skipping "route"
I1025 16:14:34.307486   10226 node_controller.go:71] Sending events to api server.
I1025 16:14:34.307843   10226 controllermanager.go:244] Started "cloud-node"
INFO[2019-10-25T16:14:34.311086630Z] Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz 
INFO[2019-10-25T16:14:34.312697756Z] Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml 
INFO[2019-10-25T16:14:34.313479078Z] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml 
INFO[2019-10-25T16:14:34.313960639Z] Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml 
INFO[2019-10-25T16:14:34.314266400Z] Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml 
INFO[2019-10-25T16:14:34.314537841Z] Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml 
I1025 16:14:34.317231   10226 node_lifecycle_controller.go:77] Sending events to api server
I1025 16:14:34.321917   10226 controllermanager.go:244] Started "cloud-node-lifecycle"
E1025 16:14:34.337341   10226 controller.go:158] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I1025 16:14:34.338303   10226 shared_informer.go:204] Caches are synced for crd-autoregister 
I1025 16:14:34.373163   10226 cache.go:39] Caches are synced for autoregister controller
I1025 16:14:34.374000   10226 cache.go:39] Caches are synced for APIServiceRegistrationController controller
INFO[2019-10-25T16:14:34.511729298Z] Listening on :6443                           
INFO[2019-10-25T16:14:34.716317338Z] Starting k3s.cattle.io/v1, Kind=Addon controller 
INFO[2019-10-25T16:14:34.716715139Z] Starting k3s.cattle.io/v1, Kind=ListenerConfig controller 
INFO[2019-10-25T16:14:34.717631582Z] Node token is available at /var/lib/rancher/k3s/server/node-token 
INFO[2019-10-25T16:14:34.717768863Z] To join node to cluster: k3s agent -s https://10.100.34.91:6443 -t ${NODE_TOKEN} 
INFO[2019-10-25T16:14:34.716496739Z] Waiting for master node  startup: node "" not found 
I1025 16:14:34.882087   10226 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
INFO[2019-10-25T16:14:34.932885375Z] Starting /v1, Kind=Endpoints controller      
INFO[2019-10-25T16:14:34.933278977Z] Starting helm.cattle.io/v1, Kind=HelmChart controller 
INFO[2019-10-25T16:14:34.933381937Z] Starting /v1, Kind=Pod controller            
INFO[2019-10-25T16:14:34.933412057Z] Starting batch/v1, Kind=Job controller       
INFO[2019-10-25T16:14:34.933440377Z] Starting /v1, Kind=Node controller           
INFO[2019-10-25T16:14:34.933467257Z] Starting /v1, Kind=Service controller        
I1025 16:14:34.975419   10226 controller.go:606] quota admission added evaluator for: helmcharts.helm.cattle.io
INFO[2019-10-25T16:14:35.054199275Z] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml   
INFO[2019-10-25T16:14:35.054269715Z] Run: k3s kubectl                             
INFO[2019-10-25T16:14:35.054290955Z] k3s is up and running                        
I1025 16:14:35.069890   10226 controller.go:107] OpenAPI AggregationController: Processing item 
I1025 16:14:35.069995   10226 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1025 16:14:35.070086   10226 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1025 16:14:35.103700   10226 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
INFO[2019-10-25T16:14:35.251017050Z] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log 
INFO[2019-10-25T16:14:35.251356531Z] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd 
INFO[2019-10-25T16:14:35.271350034Z] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused" 
INFO[2019-10-25T16:14:36.272515804Z] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused" 
I1025 16:14:36.596223   10226 plugins.go:100] No cloud provider specified.
I1025 16:14:36.605289   10226 shared_informer.go:197] Waiting for caches to sync for tokens
I1025 16:14:36.707957   10226 shared_informer.go:204] Caches are synced for tokens 
I1025 16:14:36.944494   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I1025 16:14:36.944654   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I1025 16:14:36.944786   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
I1025 16:14:36.944908   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
I1025 16:14:36.944983   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
I1025 16:14:36.945067   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I1025 16:14:36.945200   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I1025 16:14:36.945314   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for listenerconfigs.k3s.cattle.io
I1025 16:14:36.945398   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
I1025 16:14:36.945479   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
I1025 16:14:36.945560   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I1025 16:14:36.946024   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
I1025 16:14:36.946146   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
I1025 16:14:36.946224   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io
I1025 16:14:36.946291   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I1025 16:14:36.946386   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I1025 16:14:36.946451   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
I1025 16:14:36.946520   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for addons.k3s.cattle.io
I1025 16:14:36.946725   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
I1025 16:14:36.946869   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
I1025 16:14:36.946958   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
I1025 16:14:36.947054   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
I1025 16:14:36.947141   10226 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I1025 16:14:36.947242   10226 controllermanager.go:534] Started "resourcequota"
I1025 16:14:36.947273   10226 resource_quota_controller.go:271] Starting resource quota controller
I1025 16:14:36.947308   10226 shared_informer.go:197] Waiting for caches to sync for resource quota
I1025 16:14:36.947383   10226 resource_quota_monitor.go:303] QuotaMonitor running
I1025 16:14:36.996017   10226 controllermanager.go:534] Started "disruption"
I1025 16:14:36.996115   10226 disruption.go:333] Starting disruption controller
I1025 16:14:36.996542   10226 shared_informer.go:197] Waiting for caches to sync for disruption
I1025 16:14:37.021092   10226 controllermanager.go:534] Started "clusterrole-aggregation"
I1025 16:14:37.021542   10226 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
I1025 16:14:37.021642   10226 shared_informer.go:197] Waiting for caches to sync for ClusterRoleAggregator
I1025 16:14:37.045776   10226 controllermanager.go:534] Started "endpoint"
I1025 16:14:37.045847   10226 endpoints_controller.go:176] Starting endpoint controller
I1025 16:14:37.045922   10226 shared_informer.go:197] Waiting for caches to sync for endpoint
I1025 16:14:37.144303   10226 controllermanager.go:534] Started "horizontalpodautoscaling"
I1025 16:14:37.144579   10226 horizontal.go:156] Starting HPA controller
I1025 16:14:37.144664   10226 shared_informer.go:197] Waiting for caches to sync for HPA
E1025 16:14:37.169363   10226 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1025 16:14:37.169418   10226 controllermanager.go:526] Skipping "service"
I1025 16:14:37.193822   10226 controllermanager.go:534] Started "serviceaccount"
I1025 16:14:37.193916   10226 serviceaccounts_controller.go:116] Starting service account controller
I1025 16:14:37.194277   10226 shared_informer.go:197] Waiting for caches to sync for service account
I1025 16:14:37.223736   10226 controllermanager.go:534] Started "job"
I1025 16:14:37.223827   10226 job_controller.go:143] Starting job controller
I1025 16:14:37.223907   10226 shared_informer.go:197] Waiting for caches to sync for job
I1025 16:14:37.256308   10226 controllermanager.go:534] Started "replicaset"
W1025 16:14:37.256417   10226 controllermanager.go:513] "endpointslice" is disabled
I1025 16:14:37.256367   10226 replica_set.go:182] Starting replicaset controller
I1025 16:14:37.256669   10226 shared_informer.go:197] Waiting for caches to sync for ReplicaSet
INFO[2019-10-25T16:14:37.278092427Z] module br_netfilter was already loaded       
INFO[2019-10-25T16:14:37.278611548Z] module overlay was already loaded            
INFO[2019-10-25T16:14:37.278732149Z] module nf_conntrack was already loaded       
INFO[2019-10-25T16:14:37.298179169Z] Connecting to proxy                           url="wss://10.100.34.91:6443/v1-k3s/connect"
INFO[2019-10-25T16:14:37.306763036Z] Handling backend connection request [xenial-shrimp] 
WARN[2019-10-25T16:14:37.310168407Z] Disabling CPU quotas due to missing cpu.cfs_period_us 
INFO[2019-10-25T16:14:37.310702288Z] Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/807a6004690ef344efe993a6867bb3dacbb1ffaa584eedbe54bb206743c599cf/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=xenial-shrimp --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/systemd/user.slice/user-1000.slice --node-labels= --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/systemd/user.slice/user-1000.slice --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key 
W1025 16:14:37.311320   10226 server.go:208] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I1025 16:14:37.390558   10226 server.go:406] Version: v1.16.2-k3s.1
W1025 16:14:37.428539   10226 proxier.go:597] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1025 16:14:37.451726   10226 proxier.go:597] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
I1025 16:14:37.485111   10226 controllermanager.go:534] Started "ttl"
I1025 16:14:37.485757   10226 ttl_controller.go:116] Starting TTL controller
I1025 16:14:37.485906   10226 shared_informer.go:197] Waiting for caches to sync for TTL
I1025 16:14:37.486195   10226 flannel.go:91] Determining IP address of default interface
I1025 16:14:37.486957   10226 flannel.go:104] Using interface with name eth0 and address 10.100.34.91
I1025 16:14:37.491210   10226 kube.go:117] Waiting 10m0s for node controller to sync
I1025 16:14:37.491521   10226 kube.go:300] Starting kube subnet manager
W1025 16:14:37.499783   10226 proxier.go:597] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
I1025 16:14:37.522530   10226 server.go:637] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I1025 16:14:37.523327   10226 container_manager_linux.go:272] container manager verified user specified cgroup-root exists: []
I1025 16:14:37.523442   10226 container_manager_linux.go:277] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/user.slice/user-1000.slice SystemCgroupsName: KubeletCgroupsName:/systemd/user.slice/user-1000.slice ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I1025 16:14:37.523796   10226 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
I1025 16:14:37.523871   10226 container_manager_linux.go:312] Creating device plugin manager: true
I1025 16:14:37.523994   10226 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{kubelet.sock /var/lib/rancher/k3s/agent/kubelet/device-plugins/ map[] {0 0} <nil> {{} [0 0 0]} 0x25f7550 0x614f9b8 0x25f7d20 map[] map[] map[] map[] map[] 0x40066bec00 [] 0x614f9b8}
I1025 16:14:37.524147   10226 state_mem.go:36] [cpumanager] initializing new in-memory state store
I1025 16:14:37.524464   10226 state_mem.go:84] [cpumanager] updated default cpuset: ""
I1025 16:14:37.524548   10226 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
I1025 16:14:37.524634   10226 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{{0 0} 0x614f9b8 10000000000 0x40077748a0 <nil> <nil> <nil> <nil> map[]}
I1025 16:14:37.524946   10226 kubelet.go:312] Watching apiserver
W1025 16:14:37.559316   10226 proxier.go:597] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
INFO[2019-10-25T16:14:37.602768081Z] addresses labels has already been set succesfully on node: xenial-shrimp 
I1025 16:14:37.636676   10226 kuberuntime_manager.go:207] Container runtime containerd initialized, version: v1.3.0-k3s.2, apiVersion: v1alpha2
I1025 16:14:37.637926   10226 server.go:1066] Started kubelet
I1025 16:14:37.653004   10226 node_ipam_controller.go:94] Sending events to api server.
I1025 16:14:37.653455   10226 server.go:145] Starting to listen on 0.0.0.0:10250
I1025 16:14:37.656905   10226 server.go:354] Adding debug handlers to kubelet server.
I1025 16:14:37.662016   10226 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
I1025 16:14:37.662189   10226 status_manager.go:156] Starting to sync pod status with apiserver
I1025 16:14:37.662395   10226 kubelet.go:1822] Starting kubelet main sync loop.
I1025 16:14:37.662486   10226 kubelet.go:1839] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I1025 16:14:37.667192   10226 volume_manager.go:249] Starting Kubelet Volume Manager
I1025 16:14:37.698503   10226 desired_state_of_world_populator.go:131] Desired state populator starts to run
E1025 16:14:37.735242   10226 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
E1025 16:14:37.735463   10226 kubelet.go:1302] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
I1025 16:14:37.787227   10226 kubelet.go:1839] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I1025 16:14:37.787396   10226 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
I1025 16:14:37.808085   10226 kuberuntime_manager.go:961] updating runtime config through cri with podcidr 10.42.0.0/24
I1025 16:14:37.818028   10226 node.go:135] Successfully retrieved node IP: 10.100.34.91
I1025 16:14:37.818124   10226 server_others.go:150] Using iptables Proxier.
I1025 16:14:37.822690   10226 kubelet_network.go:77] Setting Pod CIDR:  -> 10.42.0.0/24
I1025 16:14:37.826993   10226 kubelet_node_status.go:72] Attempting to register node xenial-shrimp
E1025 16:14:37.830120   10226 kubelet_network_linux.go:57] Failed to ensure marking rule for KUBE-MARK-DROP: error appending rule: exit status 1: iptables: No chain/target/match by that name.
I1025 16:14:37.856071   10226 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
FATA[2019-10-25T16:14:37.861994691Z] ipset v6.30: Cannot open session to kernel.
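
The fatal line above is the ipset userspace tool failing to open a netlink session to the kernel's ip_set subsystem, which usually means the ip_set module is missing or cannot be loaded on this kernel. A quick check on the affected host (a hedged diagnostic sketch, not part of the original report; ip_set and CONFIG_IP_SET are the standard netfilter module and kernel config names) might look like:

# ipset version
# modprobe ip_set && lsmod | grep ip_set
# zgrep CONFIG_IP_SET /proc/config.gz    # only if the kernel exposes its config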

syslog:

Oct 25 14:00:07 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:07.518271281Z" level=info msg="Starting k3s v0.10.0 (f9888ca3)"
Oct 25 14:00:07 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:07.534233287Z" level=info msg="Kine listening on unix://kine.sock"
Oct 25 14:00:07 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:07.535872843Z" level=info msg="Fetching bootstrap data from etcd"
Oct 25 14:00:07 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:07.632438434Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Oct 25 14:00:07 xenial-shrimp k3s[11482]: Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production environments.
Oct 25 14:00:07 xenial-shrimp k3s[11482]: I1025 14:00:07.637571   11482 server.go:650] external host was not specified, using 10.100.36.171
Oct 25 14:00:07 xenial-shrimp k3s[11482]: I1025 14:00:07.639564   11482 server.go:162] Version: v1.16.2-k3s.1
Oct 25 14:00:07 xenial-shrimp k3s[11482]: I1025 14:00:07.660236   11482 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
Oct 25 14:00:07 xenial-shrimp k3s[11482]: I1025 14:00:07.661276   11482 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
Oct 25 14:00:07 xenial-shrimp k3s[11482]: I1025 14:00:07.664870   11482 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
Oct 25 14:00:07 xenial-shrimp k3s[11482]: I1025 14:00:07.665765   11482 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
Oct 25 14:00:07 xenial-shrimp k3s[11482]: I1025 14:00:07.774506   11482 master.go:259] Using reconciler: lease
Oct 25 14:00:07 xenial-shrimp k3s[11482]: I1025 14:00:07.874219   11482 rest.go:115] the default service ipfamily for this cluster is: IPv4
Oct 25 14:00:08 xenial-shrimp k3s[11482]: W1025 14:00:08.987641   11482 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
Oct 25 14:00:09 xenial-shrimp k3s[11482]: W1025 14:00:09.111903   11482 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Oct 25 14:00:09 xenial-shrimp k3s[11482]: W1025 14:00:09.190186   11482 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Oct 25 14:00:09 xenial-shrimp k3s[11482]: W1025 14:00:09.204780   11482 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Oct 25 14:00:09 xenial-shrimp k3s[11482]: W1025 14:00:09.253145   11482 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Oct 25 14:00:09 xenial-shrimp k3s[11482]: W1025 14:00:09.335838   11482 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
Oct 25 14:00:09 xenial-shrimp k3s[11482]: W1025 14:00:09.335960   11482 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
Oct 25 14:00:09 xenial-shrimp k3s[11482]: I1025 14:00:09.376907   11482 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
Oct 25 14:00:09 xenial-shrimp k3s[11482]: I1025 14:00:09.377030   11482 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
Oct 25 14:00:09 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:09.403295107Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
Oct 25 14:00:09 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:09.411088010Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Oct 25 14:00:09 xenial-shrimp k3s[11482]: I1025 14:00:09.424191   11482 controllermanager.go:161] Version: v1.16.2-k3s.1
Oct 25 14:00:09 xenial-shrimp k3s[11482]: I1025 14:00:09.425660   11482 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
Oct 25 14:00:09 xenial-shrimp k3s[11482]: I1025 14:00:09.430550   11482 server.go:143] Version: v1.16.2-k3s.1
Oct 25 14:00:09 xenial-shrimp k3s[11482]: I1025 14:00:09.430823   11482 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
Oct 25 14:00:09 xenial-shrimp k3s[11482]: W1025 14:00:09.435856   11482 authorization.go:47] Authorization is disabled
Oct 25 14:00:09 xenial-shrimp k3s[11482]: W1025 14:00:09.435955   11482 authentication.go:79] Authentication is disabled
Oct 25 14:00:09 xenial-shrimp k3s[11482]: I1025 14:00:09.435984   11482 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.471459   11482 secure_serving.go:123] Serving securely on 127.0.0.1:6444
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.472602   11482 apiservice_controller.go:94] Starting APIServiceRegistrationController
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.472693   11482 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.472760   11482 available_controller.go:383] Starting AvailableConditionController
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.472777   11482 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.473221   11482 crd_finalizer.go:274] Starting CRDFinalizer
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.476504   11482 autoregister_controller.go:140] Starting autoregister controller
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.476546   11482 cache.go:32] Waiting for caches to sync for autoregister controller
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.484739   11482 controller.go:85] Starting OpenAPI controller
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.484935   11482 customresource_discovery_controller.go:208] Starting DiscoveryController
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.485015   11482 naming_controller.go:288] Starting NamingConditionController
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.485083   11482 establishing_controller.go:73] Starting EstablishingController
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.485149   11482 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.485213   11482 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.486620   11482 crdregistration_controller.go:111] Starting crd-autoregister controller
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.486650   11482 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.476505   11482 controller.go:81] Starting OpenAPI AggregationController
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.546404   11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.552092   11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.553287   11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.554298   11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.555111   11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.555858   11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.556584   11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.557310   11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.558024   11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.558788   11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.559527   11482 reflector.go:123] k8s.io/client-go@v1.16.2-k3s.1/tools/cache/reflector.go:96: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
Oct 25 14:00:18 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:18.619392084Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --allow-untagged-cloud=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m --secure-port=0"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.651785   11482 controllermanager.go:117] Version: v1.16.2-k3s.1
Oct 25 14:00:18 xenial-shrimp k3s[11482]: W1025 14:00:18.651912   11482 controllermanager.go:129] detected a cluster without a ClusterID.  A ClusterID will be required in the future.  Please tag your cluster to avoid any future issues
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.660305   11482 node_lifecycle_controller.go:77] Sending events to api server
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.660483   11482 controllermanager.go:244] Started "cloud-node-lifecycle"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:18.666401499Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.668609   11482 core.go:85] Failed to start service controller: the cloud provider does not support external load balancers
Oct 25 14:00:18 xenial-shrimp k3s[11482]: W1025 14:00:18.668709   11482 controllermanager.go:241] Skipping "service"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: W1025 14:00:18.668737   11482 core.go:103] configure-cloud-routes is set, but cloud provider does not support routes. Will not configure cloud provider routes.
Oct 25 14:00:18 xenial-shrimp k3s[11482]: W1025 14:00:18.668755   11482 controllermanager.go:241] Skipping "route"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.675272   11482 node_controller.go:71] Sending events to api server.
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.675448   11482 controllermanager.go:244] Started "cloud-node"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:18.676765996Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:18.677583555Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:18.678223033Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:18.678727872Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:18.679033271Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.682062   11482 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.683725   11482 cache.go:39] Caches are synced for autoregister controller
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.695167   11482 cache.go:39] Caches are synced for AvailableConditionController controller
Oct 25 14:00:18 xenial-shrimp k3s[11482]: I1025 14:00:18.697338   11482 shared_informer.go:204] Caches are synced for crd-autoregister
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.724270   11482 node_controller.go:148] NodeAddress: Error fetching by providerID: unimplemented Error fetching by NodeName: Failed to find node xenial-shrimp: node "xenial-shrimp" not found
Oct 25 14:00:18 xenial-shrimp k3s[11482]: E1025 14:00:18.841435   11482 controller.go:158] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Oct 25 14:00:18 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:18.855913719Z" level=info msg="Listening on :6443"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.157591728Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.158539366Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.162847477Z" level=info msg="Waiting for master node  startup: node \"\" not found"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.164638593Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.164783312Z" level=info msg="To join node to cluster: k3s agent -s https://10.100.36.171:6443 -t ${NODE_TOKEN}"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: I1025 14:00:19.271677   11482 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.362110753Z" level=info msg="Starting /v1, Kind=Endpoints controller"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.362448673Z" level=info msg="Starting /v1, Kind=Pod controller"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.362170753Z" level=info msg="Starting /v1, Kind=Service controller"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.362207353Z" level=info msg="Starting /v1, Kind=Node controller"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.363283751Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.365188986Z" level=info msg="Starting batch/v1, Kind=Job controller"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: I1025 14:00:19.416274   11482 controller.go:606] quota admission added evaluator for: helmcharts.helm.cattle.io
Oct 25 14:00:19 xenial-shrimp k3s[11482]: I1025 14:00:19.471660   11482 controller.go:107] OpenAPI AggregationController: Processing item
Oct 25 14:00:19 xenial-shrimp k3s[11482]: I1025 14:00:19.472659   11482 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Oct 25 14:00:19 xenial-shrimp k3s[11482]: I1025 14:00:19.473418   11482 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Oct 25 14:00:19 xenial-shrimp k3s[11482]: I1025 14:00:19.492432   11482 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.623724811Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.623862211Z" level=info msg="Run: k3s kubectl"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.623888611Z" level=info msg="k3s is up and running"
Oct 25 14:00:19 xenial-shrimp systemd[1]: Started Lightweight Kubernetes.
Oct 25 14:00:19 xenial-shrimp k3s[11482]: I1025 14:00:19.775366   11482 controller.go:606] quota admission added evaluator for: events.events.k8s.io
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.821790851Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.823209967Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
Oct 25 14:00:19 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:19.824555765Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
Oct 25 14:00:20 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:20.829722844Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
Oct 25 14:00:20 xenial-shrimp k3s[11482]: I1025 14:00:20.943276   11482 plugins.go:100] No cloud provider specified.
Oct 25 14:00:20 xenial-shrimp k3s[11482]: W1025 14:00:20.950266   11482 controllermanager.go:513] "bootstrapsigner" is disabled
Oct 25 14:00:20 xenial-shrimp k3s[11482]: W1025 14:00:20.950403   11482 controllermanager.go:513] "tokencleaner" is disabled
Oct 25 14:00:20 xenial-shrimp k3s[11482]: I1025 14:00:20.951526   11482 shared_informer.go:197] Waiting for caches to sync for tokens
Oct 25 14:00:20 xenial-shrimp k3s[11482]: I1025 14:00:20.999731   11482 node_ipam_controller.go:94] Sending events to api server.
Oct 25 14:00:21 xenial-shrimp k3s[11482]: I1025 14:00:21.052406   11482 shared_informer.go:204] Caches are synced for tokens
Oct 25 14:00:21 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:21.834017762Z" level=info msg="module br_netfilter was already loaded"
Oct 25 14:00:21 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:21.834309001Z" level=info msg="module overlay was already loaded"
Oct 25 14:00:21 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:21.834354961Z" level=info msg="module nf_conntrack was already loaded"
Oct 25 14:00:21 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:21.853657758Z" level=info msg="Connecting to proxy" url="wss://10.100.36.171:6443/v1-k3s/connect"
Oct 25 14:00:21 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:21.861900779Z" level=info msg="Handling backend connection request [xenial-shrimp]"
Oct 25 14:00:21 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:21.865070572Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
Oct 25 14:00:21 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:21.866219330Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/807a6004690ef344efe993a6867bb3dacbb1ffaa584eedbe54bb206743c599cf/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=xenial-shrimp --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/systemd/system.slice --node-labels= --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/systemd/system.slice --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Oct 25 14:00:21 xenial-shrimp k3s[11482]: W1025 14:00:21.912832   11482 server.go:208] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Oct 25 14:00:21 xenial-shrimp systemd[1]: Started Kubernetes systemd probe.
Oct 25 14:00:21 xenial-shrimp k3s[11482]: I1025 14:00:21.958174   11482 server.go:406] Version: v1.16.2-k3s.1
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.021837   11482 flannel.go:91] Determining IP address of default interface
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.028090   11482 flannel.go:104] Using interface with name wlan0 and address 10.100.36.171
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.036694   11482 kube.go:117] Waiting 10m0s for node controller to sync
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.036931   11482 kube.go:300] Starting kube subnet manager
Oct 25 14:00:22 xenial-shrimp k3s[11482]: W1025 14:00:22.045977   11482 proxier.go:597] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
Oct 25 14:00:22 xenial-shrimp k3s[11482]: W1025 14:00:22.102723   11482 proxier.go:597] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.140744   11482 server.go:637] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.142921   11482 container_manager_linux.go:272] container manager verified user specified cgroup-root exists: []
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.143976   11482 container_manager_linux.go:277] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/system.slice SystemCgroupsName: KubeletCgroupsName:/systemd/system.slice ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.145583   11482 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.146489   11482 container_manager_linux.go:312] Creating device plugin manager: true
Oct 25 14:00:22 xenial-shrimp k3s[11482]: W1025 14:00:22.146846   11482 proxier.go:597] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.147335   11482 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{kubelet.sock /var/lib/rancher/k3s/agent/kubelet/device-plugins/ map[] {0 0} <nil> {{} [0 0 0]} 0x25f7550 0x614f9b8 0x25f7d20 map[] map[] map[] map[] map[] 0x4006695f50 [] 0x614f9b8}
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.166494   11482 state_mem.go:36] [cpumanager] initializing new in-memory state store
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.177055   11482 state_mem.go:84] [cpumanager] updated default cpuset: ""
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.178936   11482 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.180740   11482 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{{0 0} 0x614f9b8 10000000000 0x4005e5fe60 <nil> <nil> <nil> <nil> map[]}
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.187181   11482 kubelet.go:312] Watching apiserver
Oct 25 14:00:22 xenial-shrimp k3s[11482]: W1025 14:00:22.221438   11482 proxier.go:597] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
Oct 25 14:00:22 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:22.268661829Z" level=info msg="addresses labels has already been set succesfully on node: xenial-shrimp"
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.331384   11482 kuberuntime_manager.go:207] Container runtime containerd initialized, version: v1.3.0-k3s.2, apiVersion: v1alpha2
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.332410   11482 server.go:1066] Started kubelet
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.377435   11482 server.go:145] Starting to listen on 0.0.0.0:10250
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.400269   11482 server.go:354] Adding debug handlers to kubelet server.
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.404303   11482 node.go:135] Successfully retrieved node IP: 10.100.36.171
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.404439   11482 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.404454   11482 server_others.go:150] Using iptables Proxier.
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.404483   11482 status_manager.go:156] Starting to sync pod status with apiserver
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.404523   11482 kubelet.go:1822] Starting kubelet main sync loop.
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.404572   11482 kubelet.go:1839] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Oct 25 14:00:22 xenial-shrimp k3s[11482]: E1025 14:00:22.404975   11482 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
Oct 25 14:00:22 xenial-shrimp k3s[11482]: E1025 14:00:22.405016   11482 kubelet.go:1302] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
Oct 25 14:00:22 xenial-shrimp k3s[11482]: I1025 14:00:22.409566   11482 volume_manager.go:249] Starting Kubelet Volume Manager
Oct 25 14:00:22 xenial-shrimp k3s[11482]: time="2019-10-25T14:00:22.468459462Z" level=fatal msg="ipset v7.1: Cannot open session to kernel.\n"
Oct 25 14:00:22 xenial-shrimp systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Oct 25 14:00:22 xenial-shrimp systemd[1]: k3s.service: Unit entered failed state.
Oct 25 14:00:22 xenial-shrimp systemd[1]: k3s.service: Failed with result 'exit-code'.
Oct 25 14:00:27 xenial-shrimp systemd[1]: k3s.service: Service hold-off time over, scheduling restart.
Oct 25 14:00:27 xenial-shrimp systemd[1]: Stopped Lightweight Kubernetes.

I tried replacing ipset and iptables with different versions; however, this results in the same error.

hacker-h commented 5 years ago

This is probably related to https://github.com/f0cal/google-coral/issues/55 and https://github.com/f0cal/google-coral/issues/56. However, the error message of k3s server should be clarified.

jasonbarbee commented 4 years ago

I'm having this same problem on 1.0.0 and 0.10.2. I had to backtrack to 0.9.1 to get past it. For anyone who hits this, the downgrade command is:

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v0.9.1 sh -

I do not have a Google Coral; I am on a Jetson TX2 Nano. I was running a Nano image from August (lost the version details) and thought I would try the latest after running into this bug, so I am on the Nano JetPack v4.2.2 now:

Linux jetson 4.9.140-tegra #1 SMP PREEMPT Sat Oct 19 15:54:06 PDT 2019 aarch64 aarch64 aarch64 GNU/Linux

Distributor ID: Ubuntu
Description:    Ubuntu 18.04.3 LTS
Release:        18.04
Codename:       bionic

perrywang commented 4 years ago

Also experiencing this error in a Nano environment, and 0.9.1 is working fine.

perrywang commented 4 years ago

Are there any hints as to where this problem was introduced, so that we can adopt the GA version?

erikwilson commented 4 years ago

Of interesting note are these two messages:

Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.

and:

Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem

Would you mind sharing the output of lsmod and df -h?

erikwilson commented 4 years ago

Sorry, those errors are probably normal. The output of lsmod might help though; it sounds like some kind of kernel issue.
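
For anyone who wants to check this up front, a minimal diagnostic sketch (the module names assume a standard netfilter kernel build; adjust as needed):

# List the ipset/conntrack-related modules that are already loaded
lsmod | grep -E 'ip_set|xt_set|nf_conntrack'
# Try to load the ipset core module and inspect the kernel log if it fails
sudo modprobe ip_set || dmesg | tail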

segator commented 4 years ago

I have the same issue, but with the Unraid OS distribution 6.8. We are probably missing some kernel modules; if you tell me what I'm missing, I will try to build them.

Module                  Size  Used by
wireguard             208896  0
ip6_udp_tunnel         16384  1 wireguard
udp_tunnel             16384  1 wireguard
ccp                    69632  0
xt_nat                 16384  10
veth                   20480  0
dm_mod                110592  1
dax                    20480  1 dm_mod
xt_CHECKSUM            16384  1
ipt_REJECT             16384  7
ip6table_mangle        16384  1
ip6table_nat           16384  1
nf_nat_ipv6            16384  1 ip6table_nat
iptable_mangle         16384  1
ip6table_filter        16384  1
ip6_tables             24576  3 ip6table_filter,ip6table_nat,ip6table_mangle
vhost_net              24576  2
tun                    36864  6 vhost_net
vhost                  32768  1 vhost_net
tap                    20480  1 vhost_net
ipt_MASQUERADE         16384  6
iptable_filter         16384  1
iptable_nat            16384  1
nf_nat_ipv4            16384  2 ipt_MASQUERADE,iptable_nat
nf_nat                 24576  3 nf_nat_ipv6,nf_nat_ipv4,xt_nat
ip_tables              24576  3 iptable_filter,iptable_nat,iptable_mangle
xfs                   663552  27
nfsd                   90112  11
lockd                  73728  1 nfsd
grace                  16384  1 lockd
sunrpc                204800  15 nfsd,lockd
md_mod                 49152  27
ipmi_devintf           16384  0
bonding               110592  0
igb                   172032  0
i2c_algo_bit           16384  1 igb
sb_edac                24576  0
x86_pkg_temp_thermal    16384  0
intel_powerclamp       16384  0
coretemp               16384  0
kvm_intel             196608  52
kvm                   380928  1 kvm_intel
crct10dif_pclmul       16384  0
crc32_pclmul           16384  0
crc32c_intel           24576  0
ghash_clmulni_intel    16384  0
pcbc                   16384  0
aesni_intel           200704  0
aes_x86_64             20480  1 aesni_intel
crypto_simd            16384  1 aesni_intel
cryptd                 20480  3 crypto_simd,ghash_clmulni_intel,aesni_intel
glue_helper            16384  1 aesni_intel
isci                   98304  0
libsas                 69632  1 isci
ipmi_ssif              24576  0
i2c_i801               24576  0
i2c_core               40960  4 i2c_algo_bit,igb,i2c_i801,ipmi_ssif
intel_cstate           16384  0
intel_uncore          102400  0
intel_rapl_perf        16384  0
ahci                   40960  1
scsi_transport_sas     32768  2 isci,libsas
aacraid               110592  28
libahci                28672  1 ahci
wmi                    20480  0
pcc_cpufreq            16384  0
ipmi_si                53248  0
button                 16384  0

rootfs           24G  1.2G   23G   5% /
tmpfs            32M  608K   32M   2% /run
devtmpfs         24G     0   24G   0% /dev
tmpfs            24G     0   24G   0% /dev/shm
cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs           128M  2.8M  126M   3% /var/log
/dev/sda1        15G  715M   14G   5% /boot
/dev/loop0      9.0M  9.0M     0 100% /lib/modules
/dev/loop1      5.9M  5.9M     0 100% /lib/firmware
tmpfs           1.0M     0  1.0M   0% /mnt/disks
xxxx here personal disks xxxxx
/dev/sdad1      699G  338G  360G  49% /mnt/cache
/dev/loop2       20G  6.1G   13G  33% /var/lib/docker
shm              64M     0   64M   0% /var/lib/docker/containers/25b88a33e9f487743dc4f2ac6e3cefbee362b16308a3d241f915354ada96e374/mounts/shm
/dev/loop3      1.0G   18M  904M   2% /etc/libvirt
tmpfs            24G  8.0K   24G   1% /var/lib/kubelet/pods/acb69046-9770-4c60-93d5-beb874cb8f5e/volumes/kubernetes.io~secret/cattle-credentials
tmpfs            24G   12K   24G   1% /var/lib/kubelet/pods/acb69046-9770-4c60-93d5-beb874cb8f5e/volumes/kubernetes.io~secret/cattle-token-q9xzj

hacker-h commented 4 years ago

@segator did you try running the Kubernetes preflight checks, e.g. with kubeadm, for more information?

segator commented 4 years ago

No, sorry, I hadn't tried it before; here is the report. Anyway, the errors mentioned should not have any effect. Unraid OS doesn't have systemctl; it is an ultralight OS that runs from a USB pendrive in RAM. The preflight also flags the btrfs Docker graph driver as an error, but that shouldn't be a problem: I just tried another (non-Unraid) node with btrfs and it works. I also want to mention that I tried k3d and hit exactly the same issue.

kubeadm init
W1214 21:57:35.321484   32465 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1214 21:57:35.321543   32465 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
        [WARNING Firewalld]: no supported init system detected, skipping checking for services
        [WARNING Service-Docker]: no supported init system detected, skipping checking for services
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.19.88-Unraid
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled (as module)
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
CONFIG_OVERLAY_FS: enabled
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled (as module)
DOCKER_VERSION: 19.03.5
DOCKER_GRAPH_DRIVER: btrfs
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
        [WARNING Service-Kubelet]: no supported init system detected, skipping checking for services
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR SystemVerification]: unsupported graph driver: btrfs
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
segator commented 4 years ago

I just noticed my error is a little bit different:

ipset v7.1: Kernel error received: Invalid argument

perrywang commented 4 years ago

One thing is certain: this does not happen on 0.9.1. Missing kernel modules does not seem to be the right explanation at the moment.

segator commented 4 years ago

I got this working by loading ip_set & xt_conntrack and some ip_vs modules. There are still some errors, but everything seems to be working on Unraid now :) It doesn't work in Docker, though; Docker seems to try to read /proc and gets some failures, but those are not related to this issue.
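
For reference, a minimal sketch of what loading those modules could look like (module names are taken from this thread and the kube-proxy warnings above; availability depends on the kernel build):

# Load the set/conntrack modules plus the ip_vs variants kube-proxy probes for
for m in ip_set xt_conntrack ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do
    modprobe "$m" || echo "module $m is not available in this kernel"
done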

JorisBolsens commented 4 years ago

$ k3s -v
k3s version v1.17.0+k3s.1 (0f644650)

I am running into this problem as well; however, I am using the k3s agent. The master runs fine (different hardware).

INFO[2020-01-07T05:31:05.082368999Z] Starting k3s agent v1.0.1 (e94a3c60)         
INFO[2020-01-07T05:31:05.083576112Z] module overlay was already loaded            
WARN[2020-01-07T05:31:05.113123581Z] failed to start nf_conntrack module          
WARN[2020-01-07T05:31:05.142718687Z] failed to start br_netfilter module          
WARN[2020-01-07T05:31:05.143606876Z] failed to write value 1 at /proc/sys/net/ipv6/conf/all/forwarding: open /proc/sys/net/ipv6/conf/all/forwarding: no such file or directory 
WARN[2020-01-07T05:31:05.144115137Z] failed to write value 1 at /proc/sys/net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory 
WARN[2020-01-07T05:31:05.144604616Z] failed to write value 1 at /proc/sys/net/bridge/bridge-nf-call-ip6tables: open /proc/sys/net/bridge/bridge-nf-call-ip6tables: no such file or directory 
INFO[2020-01-07T05:31:05.146224571Z] Running load balancer 127.0.0.1:34309 -> [192.168.1.80:6443] 
INFO[2020-01-07T05:31:06.682951837Z] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log 
INFO[2020-01-07T05:31:06.683324510Z] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd 
INFO[2020-01-07T05:31:06.690110701Z] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused" 
INFO[2020-01-07T05:31:07.693102497Z] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused" 
INFO[2020-01-07T05:31:08.694949604Z] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused" 
INFO[2020-01-07T05:31:09.932591988Z] Connecting to proxy                           url="wss://192.168.1.80:6443/v1-k3s/connect"
INFO[2020-01-07T05:31:10.066665951Z] Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/1bb4c288d71e7cc82ab5b8bf4b45cdceeef71d00f5bf24fefe52d1cfc2944d3b/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=pynq0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/systemd/user.slice/user-1000.slice --node-labels= --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/systemd/user.slice/user-1000.slice --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key 
INFO[2020-01-07T05:31:10.095630515Z] Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=pynq0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables 
W0107 05:31:10.097487    2853 server.go:208] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I0107 05:31:10.194755    2853 server.go:406] Version: v1.16.3-k3s.2
W0107 05:31:10.255805    2853 proxier.go:597] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
I0107 05:31:10.351165    2853 flannel.go:91] Determining IP address of default interface
I0107 05:31:10.365027    2853 flannel.go:104] Using interface with name eth0 and address 192.168.1.248
E0107 05:31:10.380651    2853 machine.go:288] failed to get cache information for node 0: open /sys/devices/system/cpu/cpu0/cache: no such file or directory
INFO[2020-01-07T05:31:10.385836462Z] addresses labels has already been set successfully on node: pynq0 
W0107 05:31:10.362271    2853 proxier.go:597] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
I0107 05:31:10.401012    2853 server.go:637] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I0107 05:31:10.429811    2853 container_manager_linux.go:272] container manager verified user specified cgroup-root exists: []
I0107 05:31:10.426268    2853 kube.go:117] Waiting 10m0s for node controller to sync
I0107 05:31:10.426544    2853 kube.go:300] Starting kube subnet manager
I0107 05:31:10.431007    2853 container_manager_linux.go:277] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/user.slice/user-1000.slice SystemCgroupsName: KubeletCgroupsName:/systemd/user.slice/user-1000.slice ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I0107 05:31:10.475979    2853 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
I0107 05:31:10.480498    2853 container_manager_linux.go:312] Creating device plugin manager: true
I0107 05:31:10.481199    2853 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{kubelet.sock /var/lib/rancher/k3s/agent/kubelet/device-plugins/ map[] {0 0} <nil> {{} [0 0 0]} 0x213dc84 0x585bfb0 0x213e3ac map[] map[] map[] map[] map[] 0x6a99ac0 [] 0x585bfb0}
I0107 05:31:10.481942    2853 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0107 05:31:10.482927    2853 state_mem.go:84] [cpumanager] updated default cpuset: ""
I0107 05:31:10.483770    2853 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
I0107 05:31:10.484269    2853 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{{0 0} 0x585bfb0 10000000000 0x6acbb80 <nil> <nil> <nil> <nil> map[]}
I0107 05:31:10.485238    2853 kubelet.go:312] Watching apiserver
W0107 05:31:10.618741    2853 proxier.go:597] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0107 05:31:10.861013    2853 proxier.go:597] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
I0107 05:31:11.023912    2853 kuberuntime_manager.go:207] Container runtime containerd initialized, version: v1.3.0-k3s.5, apiVersion: v1alpha2
I0107 05:31:11.103158    2853 server.go:1066] Started kubelet
W0107 05:31:11.027097    2853 proxier.go:597] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
I0107 05:31:11.150660    2853 server.go:145] Starting to listen on 0.0.0.0:10250
I0107 05:31:11.223421    2853 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
I0107 05:31:11.224310    2853 status_manager.go:156] Starting to sync pod status with apiserver
I0107 05:31:11.224908    2853 kubelet.go:1822] Starting kubelet main sync loop.
I0107 05:31:11.226723    2853 kubelet.go:1839] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0107 05:31:11.252404    2853 volume_manager.go:249] Starting Kubelet Volume Manager
I0107 05:31:11.281797    2853 desired_state_of_world_populator.go:131] Desired state populator starts to run
I0107 05:31:11.313165    2853 server.go:354] Adding debug handlers to kubelet server.
I0107 05:31:11.353774    2853 kubelet.go:1839] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
E0107 05:31:11.389388    2853 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
E0107 05:31:11.393934    2853 kubelet.go:1302] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
E0107 05:31:11.417456    2853 kubelet_network_linux.go:53] Failed to ensure that nat chain KUBE-MARK-DROP exists: error creating chain "KUBE-MARK-DROP": exit status 3: modprobe: FATAL: Module ip_tables not found in directory /lib/modules/4.19.0-xilinx-v2019.1
iptables v1.8.3 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
I0107 05:31:11.489317    2853 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
E0107 05:31:11.503719    2853 kubelet.go:2267] node "pynq0" not found
I0107 05:31:11.621639    2853 kube.go:124] Node controller sync successful
I0107 05:31:11.671046    2853 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
I0107 05:31:11.652173    2853 kubelet.go:1839] skipping pod synchronization - container runtime status check may not have completed yet
E0107 05:31:11.843713    2853 kubelet.go:2267] node "pynq0" not found
I0107 05:31:11.902177    2853 kubelet_node_status.go:72] Attempting to register node pynq0
I0107 05:31:11.992234    2853 kuberuntime_manager.go:961] updating runtime config through cri with podcidr 10.42.1.0/24
FATA[2020-01-07T05:31:12.035386958Z] ipset v7.1: Cannot open session to kernel. 
segator commented 4 years ago

You don't have iptables in the kernel. Which OS is this?
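
A quick way to verify that, as a sketch (config paths vary by distro, and /proc/config.gz exists only when the kernel was built with CONFIG_IKCONFIG_PROC):

# Check whether the running kernel was built with iptables/ipset support
zcat /proc/config.gz 2>/dev/null | grep -E 'CONFIG_IP_NF_IPTABLES|CONFIG_IP_SET'
grep -E 'CONFIG_IP_NF_IPTABLES|CONFIG_IP_SET' "/boot/config-$(uname -r)" 2>/dev/null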

JorisBolsens commented 4 years ago

@segator I am using a PetaLinux kernel, running on a Pynq-Z1 (Zynq SoC).

segator commented 4 years ago

@JorisBolsens you need to ensure that the kernel modules required by k8s are loaded; this is your problem: Module ip_tables not found in directory /lib/modules/4.19.0-xilinx-v2019.1, plus ipset and others.

I think this OS is too lightweight to be able to run Kubernetes. Check the Kubernetes preflight to see which modules you need, and see the sketch below for loading them persistently.
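
If the modules do exist for the kernel, one way to make them load on every boot is a modules-load.d drop-in (a sketch assuming a systemd-based image; the module list just combines the ones named in this thread):

cat <<'EOF' | sudo tee /etc/modules-load.d/k3s.conf
ip_tables
ip_set
xt_conntrack
br_netfilter
overlay
EOF
sudo systemctl restart systemd-modules-load.service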

JorisBolsens commented 4 years ago

@segator thanks, I will look at adding those.

perrywang commented 4 years ago

Got a workaround with the server start option --disable-network-policy. ipset is used by the NetworkPolicy function, and in most scenarios it can be disabled.
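
With the install script, that could look like the following (a sketch; INSTALL_K3S_EXEC passes extra arguments to the k3s server command, and the flag requires a k3s version that supports it):

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='server --disable-network-policy' sh -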

ShylajaDevadiga commented 4 years ago

Resolved in v1.17.2-alpha3+k3s1.