I can confirm v1.19.5+k3s2 works too
Can you attach complete logs from both the server and agent? The short bits you've included don't capture sufficient information.
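If you installed with the standard install script, the systemd journal should have everything; roughly (assuming the default unit names, k3s on the server and k3s-agent on the agents):

journalctl -u k3s --no-pager > k3s-server.log          # on the server
journalctl -u k3s-agent --no-pager > k3s-agent.log     # on each agent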
Sure, here is the complete log from the master (192.168.1.143 is the master IP; 192.168.1.120 and 192.168.1.142 are the agent IPs):
time="2020-12-28T14:42:37Z" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock" time="2020-12-28T14:42:37Z" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/477a397d5ed8de53c8cee46d5f565f81e93de8dadbff870dc27452b47238f6f5" time="2020-12-28T14:43:11.003368985Z" level=info msg="Starting k3s v1.20.0+k3s2 (2ea6b163)" time="2020-12-28T14:43:11.154329944Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" time="2020-12-28T14:43:11.154704939Z" level=info msg="Configuring database table schema and indexes, this may take a moment..." time="2020-12-28T14:43:11.224622198Z" level=info msg="Database tables and indexes are up to date" time="2020-12-28T14:43:11.234425425Z" level=info msg="Kine listening on unix://kine.sock" time="2020-12-28T14:43:11.383813327Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1609166591: notBefore=2020-12-28 14:43:11 +0000 UTC notAfter=2021-12-28 14:43:11 +0000 UTC" time="2020-12-28T14:43:11.391429650Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1609166591: notBefore=2020-12-28 14:43:11 +0000 UTC notAfter=2021-12-28 14:43:11 +0000 UTC" time="2020-12-28T14:43:11.398929829Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1609166591: notBefore=2020-12-28 14:43:11 +0000 UTC notAfter=2021-12-28 14:43:11 +0000 UTC" time="2020-12-28T14:43:11.406429591Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1609166591: notBefore=2020-12-28 14:43:11 +0000 UTC notAfter=2021-12-28 14:43:11 +0000 UTC" time="2020-12-28T14:43:11.413907634Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1609166591: notBefore=2020-12-28 14:43:11 +0000 UTC notAfter=2021-12-28 14:43:11 +0000 UTC" time="2020-12-28T14:43:11.420591832Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1609166591: notBefore=2020-12-28 14:43:11 +0000 UTC notAfter=2021-12-28 14:43:11 +0000 UTC" time="2020-12-28T14:43:11.428363935Z" level=info msg="certificate CN=cloud-controller-manager signed by CN=k3s-client-ca@1609166591: notBefore=2020-12-28 14:43:11 +0000 UTC notAfter=2021-12-28 14:43:11 +0000 UTC" time="2020-12-28T14:43:11.440599687Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1609166591: notBefore=2020-12-28 14:43:11 +0000 UTC notAfter=2021-12-28 14:43:11 +0000 UTC" time="2020-12-28T14:43:11.452593463Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1609166591: notBefore=2020-12-28 14:43:11 +0000 UTC notAfter=2021-12-28 14:43:11 +0000 UTC" time="2020-12-28T14:43:11.464302554Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1609166591: notBefore=2020-12-28 14:43:11 +0000 UTC notAfter=2021-12-28 14:43:11 +0000 UTC" time="2020-12-28T14:43:11.471040553Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1609166591: notBefore=2020-12-28 14:43:11 +0000 UTC notAfter=2021-12-28 14:43:11 +0000 UTC" time="2020-12-28T14:43:11.482515011Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1609166591: notBefore=2020-12-28 14:43:11 +0000 UTC notAfter=2021-12-28 14:43:11 +0000 UTC" time="2020-12-28T14:43:15.852988977Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1609166591: notBefore=2020-12-28 14:43:11 +0000 UTC notAfter=2021-12-28 14:43:15 +0000 UTC" time="2020-12-28T14:43:15.856263420Z" 
level=info msg="Active TLS secret (ver=) (count 7): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.1.143:192.168.1.143 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=84EBE4AFF2CC7B2D8C65D2F56A239C8353322A8B]" time="2020-12-28T14:43:15.976087062Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --feature-gates=ServiceAccountIssuerDiscovery=false --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24. I1228 14:43:15.996444 3251 server.go:659] external host was not specified, using 192.168.1.143 I1228 14:43:16.000194 3251 server.go:196] Version: v1.20.0+k3s2 I1228 14:43:21.593672 3251 shared_informer.go:240] Waiting for caches to sync for node_authorizer I1228 14:43:21.606482 3251 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I1228 14:43:21.606608 3251 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. I1228 14:43:21.618450 3251 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. 
I1228 14:43:21.618590 3251 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. I1228 14:43:21.875103 3251 instance.go:289] Using reconciler: lease I1228 14:43:22.123003 3251 rest.go:131] the default service ipfamily for this cluster is: IPv4 W1228 14:43:25.492592 3251 genericapiserver.go:419] Skipping API batch/v2alpha1 because it has no resources. W1228 14:43:25.641620 3251 genericapiserver.go:419] Skipping API discovery.k8s.io/v1alpha1 because it has no resources. W1228 14:43:25.805934 3251 genericapiserver.go:419] Skipping API node.k8s.io/v1alpha1 because it has no resources. W1228 14:43:25.930585 3251 genericapiserver.go:419] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W1228 14:43:25.984999 3251 genericapiserver.go:419] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. W1228 14:43:26.135266 3251 genericapiserver.go:419] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W1228 14:43:26.204074 3251 genericapiserver.go:419] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources. W1228 14:43:26.273525 3251 genericapiserver.go:419] Skipping API apps/v1beta2 because it has no resources. W1228 14:43:26.273647 3251 genericapiserver.go:419] Skipping API apps/v1beta1 because it has no resources. I1228 14:43:26.388294 3251 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I1228 14:43:26.388464 3251 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. 
time="2020-12-28T14:43:26.475067388Z" level=info msg="Waiting for API server to become available" time="2020-12-28T14:43:26.475209418Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0" time="2020-12-28T14:43:26.480369257Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true" time="2020-12-28T14:43:26.578278190Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token" time="2020-12-28T14:43:26.579648904Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.1.143:6443 -t ${NODE_TOKEN}" time="2020-12-28T14:43:26.591050915Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml" time="2020-12-28T14:43:26.591260340Z" level=info msg="Run: k3s kubectl" I1228 14:43:36.587506 3251 trace.go:205] Trace[1881991231]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 14:43:26.571) (total time: 10015ms): Trace[1881991231]: [10.015989082s] [10.015989082s] END E1228 14:43:36.587685 3251 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1:6444/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout time="2020-12-28T14:43:36.611011204Z" level=info msg="Waiting for cloudcontroller rbac role to be created" time="2020-12-28T14:43:47.672662791Z" level=info msg="Waiting for cloudcontroller rbac role to be created" I1228 14:43:47.738329 3251 trace.go:205] Trace[1754228043]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 14:43:37.735) (total time: 10002ms): Trace[1754228043]: [10.002913026s] [10.002913026s] END E1228 14:43:47.738475 3251 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1:6444/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout I1228 14:43:55.212778 3251 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt I1228 14:43:55.212857 3251 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt I1228 14:43:55.215129 3251 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key I1228 14:43:55.221267 3251 secure_serving.go:197] Serving securely on 127.0.0.1:6444 I1228 14:43:55.221413 3251 tlsconfig.go:240] Starting DynamicServingCertificateController I1228 14:43:55.221683 3251 controller.go:83] Starting OpenAPI AggregationController I1228 14:43:55.223523 3251 dynamic_serving_content.go:130] Starting 
aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key I1228 14:43:55.224012 3251 apiservice_controller.go:97] Starting APIServiceRegistrationController I1228 14:43:55.224090 3251 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I1228 14:43:55.226026 3251 available_controller.go:475] Starting AvailableConditionController I1228 14:43:55.226145 3251 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I1228 14:43:55.227159 3251 autoregister_controller.go:141] Starting autoregister controller I1228 14:43:55.227271 3251 cache.go:32] Waiting for caches to sync for autoregister controller I1228 14:43:55.231487 3251 crdregistration_controller.go:111] Starting crd-autoregister controller I1228 14:43:55.231635 3251 shared_informer.go:240] Waiting for caches to sync for crd-autoregister I1228 14:43:55.232405 3251 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I1228 14:43:55.232498 3251 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller I1228 14:43:55.241992 3251 apf_controller.go:249] Starting API Priority and Fairness config controller I1228 14:43:55.242275 3251 controller.go:86] Starting OpenAPI controller I1228 14:43:55.242415 3251 naming_controller.go:291] Starting NamingConditionController I1228 14:43:55.242531 3251 establishing_controller.go:76] Starting EstablishingController I1228 14:43:55.242666 3251 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I1228 14:43:55.242831 3251 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I1228 14:43:55.242962 3251 crd_finalizer.go:266] Starting CRDFinalizer I1228 14:43:55.243101 3251 customresource_discovery_controller.go:209] Starting DiscoveryController I1228 14:43:55.279851 3251 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt I1228 14:43:55.280200 3251 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt I1228 14:43:56.211868 3251 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I1228 14:43:56.212569 3251 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). 
E1228 14:43:56.424699 3251 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "cloud-controller-manager" cannot list resource "nodes" in API group "" at the cluster scope time="2020-12-28T14:43:56.485342453Z" level=info msg="Waiting for cloudcontroller rbac role to be created" I1228 14:43:56.494899 3251 shared_informer.go:247] Caches are synced for node_authorizer I1228 14:43:56.529648 3251 cache.go:39] Caches are synced for autoregister controller I1228 14:43:56.533774 3251 shared_informer.go:247] Caches are synced for crd-autoregister I1228 14:43:56.543322 3251 apf_controller.go:253] Running API Priority and Fairness config worker I1228 14:43:56.545145 3251 cache.go:39] Caches are synced for APIServiceRegistrationController controller I1228 14:43:56.915284 3251 cache.go:39] Caches are synced for AvailableConditionController controller I1228 14:43:56.919344 3251 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I1228 14:43:56.926424 3251 controller.go:606] quota admission added evaluator for: namespaces I1228 14:43:56.973601 3251 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000 I1228 14:43:57.130253 3251 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000 I1228 14:43:57.130421 3251 storage_scheduling.go:148] all system priority classes are created successfully or already exist. E1228 14:43:57.233030 3251 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.43.0.1"}: failed to allocated ip:10.43.0.1 with error:cannot allocate resources of type serviceipallocations at this time E1228 14:43:57.239575 3251 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.1.143, ResourceVersion: 0, AdditionalErrorMsg: time="2020-12-28T14:43:57.564253849Z" level=info msg="Waiting for cloudcontroller rbac role to be created" time="2020-12-28T14:43:58.608791922Z" level=info msg="Waiting for cloudcontroller rbac role to be created" time="2020-12-28T14:43:59.653161873Z" level=info msg="Waiting for cloudcontroller rbac role to be created" time="2020-12-28T14:44:00.696197046Z" level=info msg="Waiting for cloudcontroller rbac role to be created" I1228 14:44:00.813789 3251 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io E1228 14:44:01.077280 3251 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "cloud-controller-manager" cannot list resource "nodes" in API group "" at the cluster scope I1228 14:44:01.198474 3251 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io time="2020-12-28T14:44:01.494312550Z" level=info msg="Kube API server is now running" time="2020-12-28T14:44:01.494516975Z" level=info msg="k3s is up and running" Flag --address has been deprecated, see --bind-address instead. 
W1228 14:44:01.521266 3251 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.1.143] I1228 14:44:01.530685 3251 controller.go:606] quota admission added evaluator for: endpoints I1228 14:44:01.560524 3251 controllermanager.go:176] Version: v1.20.0+k3s2 I1228 14:44:01.569617 3251 deprecated_insecure_serving.go:53] Serving insecurely on 127.0.0.1:10252 I1228 14:44:01.578711 3251 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io time="2020-12-28T14:44:01.585948997Z" level=info msg="Creating CRD addons.k3s.cattle.io" time="2020-12-28T14:44:01.628578219Z" level=info msg="Creating CRD helmcharts.helm.cattle.io" W1228 14:44:01.740138 3251 authorization.go:47] Authorization is disabled W1228 14:44:01.740230 3251 authentication.go:40] Authentication is disabled I1228 14:44:01.740303 3251 deprecated_insecure_serving.go:51] Serving healthz insecurely on 127.0.0.1:10251 time="2020-12-28T14:44:01.802888399Z" level=info msg="Creating CRD helmchartconfigs.helm.cattle.io" time="2020-12-28T14:44:01.808542400Z" level=info msg="Waiting for cloudcontroller rbac role to be created" time="2020-12-28T14:44:01.912979541Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available" time="2020-12-28T14:44:02.437779175Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available" time="2020-12-28T14:44:02.437953444Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available" time="2020-12-28T14:44:02.849642126Z" level=info msg="Waiting for cloudcontroller rbac role to be created" time="2020-12-28T14:44:02.956673822Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available" time="2020-12-28T14:44:02.956881632Z" level=info msg="Waiting for CRD helmchartconfigs.helm.cattle.io to become available" time="2020-12-28T14:44:03.476124191Z" level=info msg="Done waiting for CRD helmchartconfigs.helm.cattle.io to become available" I1228 14:44:03.559440 3251 request.go:655] Throttling request took 1.034856148s, request: GET:https://127.0.0.1:6444/apis/extensions/v1beta1?timeout=32s time="2020-12-28T14:44:03.576318462Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.81.0.tgz" time="2020-12-28T14:44:03.579103952Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml" time="2020-12-28T14:44:03.580545603Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml" time="2020-12-28T14:44:03.581518561Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml" time="2020-12-28T14:44:03.582514957Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml" time="2020-12-28T14:44:03.583351406Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml" time="2020-12-28T14:44:03.584340406Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml" time="2020-12-28T14:44:03.585300031Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml" time="2020-12-28T14:44:03.586579808Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml" time="2020-12-28T14:44:03.587856982Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml" time="2020-12-28T14:44:03.588681296Z" level=info 
msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml" time="2020-12-28T14:44:03.589613421Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml" time="2020-12-28T14:44:03.590526953Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml" time="2020-12-28T14:44:03.909914715Z" level=info msg="Waiting for cloudcontroller rbac role to be created" time="2020-12-28T14:44:03.994831913Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller" time="2020-12-28T14:44:04.004714097Z" level=info msg="Starting /v1, Kind=Secret controller" time="2020-12-28T14:44:04.252918626Z" level=info msg="Active TLS secret k3s-serving (ver=215) (count 7): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.1.143:192.168.1.143 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=84EBE4AFF2CC7B2D8C65D2F56A239C8353322A8B]" time="2020-12-28T14:44:04.571508272Z" level=info msg="Cluster dns configmap has been set successfully" I1228 14:44:04.700051 3251 shared_informer.go:240] Waiting for caches to sync for tokens I1228 14:44:04.807123 3251 shared_informer.go:247] Caches are synced for tokens time="2020-12-28T14:44:04.834136030Z" level=info msg="Starting /v1, Kind=Endpoints controller" time="2020-12-28T14:44:04.834833574Z" level=info msg="Starting /v1, Kind=Node controller" time="2020-12-28T14:44:04.835107738Z" level=info msg="Starting /v1, Kind=Service controller" time="2020-12-28T14:44:04.835432630Z" level=info msg="Starting /v1, Kind=Pod controller" I1228 14:44:04.935505 3251 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io I1228 14:44:04.940252 3251 controller.go:606] quota admission added evaluator for: serviceaccounts time="2020-12-28T14:44:05.010395303Z" level=info msg="Running cloud-controller-manager for provider k3s with args --profiling=false" I1228 14:44:05.011139 3251 controllermanager.go:127] Version: v1.20.0+k3s2 time="2020-12-28T14:44:05.203045231Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller" time="2020-12-28T14:44:05.203387414Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller" time="2020-12-28T14:44:05.203621214Z" level=info msg="Starting batch/v1, Kind=Job controller" I1228 14:44:05.601469 3251 controllermanager.go:554] Started "namespace" I1228 14:44:05.602595 3251 namespace_controller.go:200] Starting namespace controller I1228 14:44:05.602695 3251 shared_informer.go:240] Waiting for caches to sync for namespace I1228 14:44:05.675423 3251 controllermanager.go:554] Started "csrcleaner" I1228 14:44:05.676601 3251 cleaner.go:82] Starting CSR cleaner controller I1228 14:44:05.737457 3251 controller.go:606] quota admission added evaluator for: deployments.apps I1228 14:44:05.795976 3251 controllermanager.go:554] Started "attachdetach" I1228 14:44:05.797198 3251 attach_detach_controller.go:328] Starting attach detach controller I1228 14:44:05.797308 3251 shared_informer.go:240] Waiting for caches to sync for attach detach I1228 14:44:05.823033 3251 node_controller.go:115] Sending events to api server. 
I1228 14:44:05.823848 3251 controllermanager.go:238] Started "cloud-node" I1228 14:44:05.838400 3251 node_controller.go:154] Waiting for informer caches to sync I1228 14:44:05.845332 3251 node_lifecycle_controller.go:77] Sending events to api server I1228 14:44:05.845634 3251 controllermanager.go:238] Started "cloud-node-lifecycle" I1228 14:44:05.931081 3251 controllermanager.go:554] Started "pv-protection" W1228 14:44:05.931251 3251 controllermanager.go:546] Skipping "ttl-after-finished" I1228 14:44:05.932975 3251 pv_protection_controller.go:83] Starting PV protection controller I1228 14:44:05.933083 3251 shared_informer.go:240] Waiting for caches to sync for PV protection I1228 14:44:06.044247 3251 controllermanager.go:554] Started "root-ca-cert-publisher" I1228 14:44:06.046582 3251 publisher.go:98] Starting root CA certificate configmap publisher I1228 14:44:06.046693 3251 shared_informer.go:240] Waiting for caches to sync for crt configmap I1228 14:44:06.195133 3251 controllermanager.go:554] Started "endpoint" I1228 14:44:06.196151 3251 endpoints_controller.go:184] Starting endpoint controller I1228 14:44:06.196248 3251 shared_informer.go:240] Waiting for caches to sync for endpoint I1228 14:44:06.248149 3251 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving" I1228 14:44:06.248376 3251 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving I1228 14:44:06.256552 3251 controllermanager.go:554] Started "csrsigning" I1228 14:44:06.258163 3251 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key I1228 14:44:06.258456 3251 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client" I1228 14:44:06.258526 3251 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client I1228 14:44:06.258699 3251 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client" I1228 14:44:06.258830 3251 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client I1228 14:44:06.259061 3251 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown" I1228 14:44:06.259119 3251 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown I1228 14:44:06.262277 3251 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key I1228 14:44:06.262542 3251 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key I1228 14:44:06.262697 3251 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key I1228 14:44:06.330573 3251 node_ipam_controller.go:91] Sending events to api server. I1228 14:44:09.001632 3251 controller.go:606] quota admission added evaluator for: helmcharts.helm.cattle.io I1228 14:44:09.231272 3251 controller.go:606] quota admission added evaluator for: jobs.batch I1228 14:44:16.841826 3251 range_allocator.go:82] Sending events to api server. I1228 14:44:16.843159 3251 range_allocator.go:110] No Service CIDR provided. Skipping filtering out service addresses. I1228 14:44:16.843262 3251 range_allocator.go:116] No Secondary Service CIDR provided. 
Skipping filtering out secondary service addresses. I1228 14:44:16.843498 3251 controllermanager.go:554] Started "nodeipam" I1228 14:44:16.843674 3251 node_ipam_controller.go:159] Starting ipam controller I1228 14:44:16.843764 3251 shared_informer.go:240] Waiting for caches to sync for node I1228 14:44:16.914309 3251 controllermanager.go:554] Started "pvc-protection" I1228 14:44:16.914497 3251 pvc_protection_controller.go:110] Starting PVC protection controller I1228 14:44:16.914566 3251 shared_informer.go:240] Waiting for caches to sync for PVC protection I1228 14:44:16.987150 3251 controllermanager.go:554] Started "statefulset" I1228 14:44:16.988266 3251 stateful_set.go:146] Starting stateful set controller I1228 14:44:16.988370 3251 shared_informer.go:240] Waiting for caches to sync for stateful set I1228 14:44:17.018362 3251 node_lifecycle_controller.go:380] Sending events to api server. I1228 14:44:17.019652 3251 taint_manager.go:163] Sending events to api server. I1228 14:44:17.020328 3251 node_lifecycle_controller.go:508] Controller will reconcile labels. I1228 14:44:17.020591 3251 controllermanager.go:554] Started "nodelifecycle" I1228 14:44:17.022062 3251 node_lifecycle_controller.go:542] Starting node controller I1228 14:44:17.022180 3251 shared_informer.go:240] Waiting for caches to sync for taint I1228 14:44:17.096153 3251 controllermanager.go:554] Started "persistentvolume-expander" I1228 14:44:17.097353 3251 expand_controller.go:310] Starting expand controller I1228 14:44:17.097465 3251 shared_informer.go:240] Waiting for caches to sync for expand I1228 14:44:17.170453 3251 controllermanager.go:554] Started "endpointslicemirroring" I1228 14:44:17.171720 3251 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller I1228 14:44:17.172030 3251 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring I1228 14:44:18.386230 3251 controllermanager.go:554] Started "replicationcontroller" I1228 14:44:18.387449 3251 replica_set.go:182] Starting replicationcontroller controller I1228 14:44:18.387562 3251 shared_informer.go:240] Waiting for caches to sync for ReplicationController I1228 14:44:18.466985 3251 controllermanager.go:554] Started "podgc" I1228 14:44:18.467913 3251 gc_controller.go:89] Starting GC controller I1228 14:44:18.468024 3251 shared_informer.go:240] Waiting for caches to sync for GC I1228 14:44:18.581905 3251 controllermanager.go:554] Started "garbagecollector" I1228 14:44:18.584623 3251 garbagecollector.go:142] Starting garbage collector controller I1228 14:44:18.584810 3251 shared_informer.go:240] Waiting for caches to sync for garbage collector I1228 14:44:18.584979 3251 graph_builder.go:289] GraphBuilder running I1228 14:44:18.670845 3251 controllermanager.go:554] Started "replicaset" W1228 14:44:18.670991 3251 controllermanager.go:533] "bootstrapsigner" is disabled W1228 14:44:18.671042 3251 controllermanager.go:533] "service" is disabled I1228 14:44:18.672054 3251 replica_set.go:182] Starting replicaset controller I1228 14:44:18.672168 3251 shared_informer.go:240] Waiting for caches to sync for ReplicaSet I1228 14:44:18.805718 3251 controllermanager.go:554] Started "clusterrole-aggregation" I1228 14:44:18.807223 3251 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator I1228 14:44:18.807331 3251 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator I1228 14:44:18.888063 3251 controllermanager.go:554] Started "endpointslice" I1228 14:44:18.889404 3251 
endpointslice_controller.go:237] Starting endpoint slice controller I1228 14:44:18.889521 3251 shared_informer.go:240] Waiting for caches to sync for endpoint_slice I1228 14:44:19.047061 3251 controllermanager.go:554] Started "horizontalpodautoscaling" I1228 14:44:19.047184 3251 horizontal.go:169] Starting HPA controller W1228 14:44:19.047214 3251 controllermanager.go:533] "route" is disabled I1228 14:44:19.047242 3251 shared_informer.go:240] Waiting for caches to sync for HPA I1228 14:44:19.076328 3251 controllermanager.go:554] Started "csrapproving" I1228 14:44:19.077508 3251 certificate_controller.go:118] Starting certificate controller "csrapproving" I1228 14:44:19.077648 3251 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving I1228 14:44:19.148837 3251 controllermanager.go:554] Started "ttl" W1228 14:44:19.148992 3251 controllermanager.go:533] "cloud-node-lifecycle" is disabled I1228 14:44:19.149009 3251 ttl_controller.go:121] Starting TTL controller W1228 14:44:19.149085 3251 controllermanager.go:546] Skipping "ephemeral-volume" I1228 14:44:19.149092 3251 shared_informer.go:240] Waiting for caches to sync for TTL E1228 14:44:19.884130 3251 resource_quota_controller.go:162] initial discovery check failure, continuing and counting on future sync update: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server could not find the requested resource I1228 14:44:19.884652 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io I1228 14:44:19.884996 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io I1228 14:44:19.885179 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps I1228 14:44:19.885370 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io I1228 14:44:19.885604 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io I1228 14:44:19.885839 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for addons.k3s.cattle.io W1228 14:44:19.886101 3251 shared_informer.go:494] resyncPeriod 17h46m55.309624846s is smaller than resyncCheckPeriod 22h59m42.427725843s and the informer has already started. Changing it to 22h59m42.427725843s I1228 14:44:19.886440 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts W1228 14:44:19.886647 3251 shared_informer.go:494] resyncPeriod 13h13m23.997785663s is smaller than resyncCheckPeriod 22h59m42.427725843s and the informer has already started. 
Changing it to 22h59m42.427725843s I1228 14:44:19.887112 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps I1228 14:44:19.887272 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling I1228 14:44:19.887469 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy I1228 14:44:19.887648 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints I1228 14:44:19.887868 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps I1228 14:44:19.888023 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps I1228 14:44:19.888230 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates I1228 14:44:19.888484 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps I1228 14:44:19.888757 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io I1228 14:44:19.888967 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io I1228 14:44:19.889200 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for helmchartconfigs.helm.cattle.io I1228 14:44:19.889382 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges I1228 14:44:19.889630 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch I1228 14:44:19.889881 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch I1228 14:44:19.890103 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.extensions I1228 14:44:19.890263 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io I1228 14:44:19.890697 3251 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io I1228 14:44:19.891236 3251 controllermanager.go:554] Started "resourcequota" I1228 14:44:19.891295 3251 resource_quota_controller.go:273] Starting resource quota controller I1228 14:44:19.891360 3251 shared_informer.go:240] Waiting for caches to sync for resource quota I1228 14:44:19.891509 3251 resource_quota_monitor.go:304] QuotaMonitor running I1228 14:44:19.970473 3251 controllermanager.go:554] Started "serviceaccount" I1228 14:44:19.972126 3251 serviceaccounts_controller.go:117] Starting service account controller I1228 14:44:19.972278 3251 shared_informer.go:240] Waiting for caches to sync for service account I1228 14:44:20.081587 3251 controllermanager.go:554] Started "disruption" I1228 14:44:20.082589 3251 disruption.go:331] Starting disruption controller I1228 14:44:20.082681 3251 shared_informer.go:240] Waiting for caches to sync for disruption I1228 14:44:20.151850 3251 controllermanager.go:554] Started "cronjob" I1228 14:44:20.153046 3251 cronjob_controller.go:96] Starting CronJob Manager I1228 14:44:20.248710 3251 controllermanager.go:554] Started "daemonset" I1228 14:44:20.250055 3251 daemon_controller.go:285] Starting daemon sets controller I1228 14:44:20.250168 3251 shared_informer.go:240] Waiting for caches to sync for daemon sets I1228 14:44:20.365958 3251 controllermanager.go:554] Started "job" W1228 14:44:20.366099 
3251 controllermanager.go:533] "tokencleaner" is disabled I1228 14:44:20.367360 3251 job_controller.go:148] Starting job controller I1228 14:44:20.367451 3251 shared_informer.go:240] Waiting for caches to sync for job I1228 14:44:20.444524 3251 request.go:655] Throttling request took 1.055129989s, request: GET:https://127.0.0.1:6444/apis/authorization.k8s.io/v1beta1?timeout=32s I1228 14:44:20.446945 3251 controllermanager.go:554] Started "deployment" I1228 14:44:20.448259 3251 deployment_controller.go:153] Starting deployment controller I1228 14:44:20.448361 3251 shared_informer.go:240] Waiting for caches to sync for deployment I1228 14:44:20.521266 3251 controllermanager.go:554] Started "persistentvolume-binder" I1228 14:44:20.521989 3251 pv_controller_base.go:307] Starting persistent volume controller I1228 14:44:20.522094 3251 shared_informer.go:240] Waiting for caches to sync for persistent volume I1228 14:44:20.748591 3251 shared_informer.go:247] Caches are synced for node I1228 14:44:20.748796 3251 range_allocator.go:172] Starting range CIDR allocator I1228 14:44:20.748844 3251 shared_informer.go:240] Waiting for caches to sync for cidrallocator I1228 14:44:20.748892 3251 shared_informer.go:247] Caches are synced for cidrallocator I1228 14:44:20.752787 3251 shared_informer.go:247] Caches are synced for daemon sets I1228 14:44:20.761872 3251 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I1228 14:44:20.762087 3251 shared_informer.go:247] Caches are synced for crt configmap I1228 14:44:20.763254 3251 shared_informer.go:247] Caches are synced for HPA I1228 14:44:20.763482 3251 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving I1228 14:44:20.763592 3251 shared_informer.go:247] Caches are synced for TTL I1228 14:44:20.764235 3251 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client I1228 14:44:20.764417 3251 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client I1228 14:44:20.768412 3251 shared_informer.go:247] Caches are synced for GC I1228 14:44:20.769072 3251 shared_informer.go:247] Caches are synced for job I1228 14:44:20.772921 3251 shared_informer.go:247] Caches are synced for service account I1228 14:44:20.773357 3251 shared_informer.go:247] Caches are synced for ReplicaSet I1228 14:44:20.788023 3251 shared_informer.go:247] Caches are synced for ReplicationController I1228 14:44:20.791253 3251 shared_informer.go:247] Caches are synced for stateful set I1228 14:44:20.791906 3251 shared_informer.go:247] Caches are synced for certificate-csrapproving I1228 14:44:20.798162 3251 shared_informer.go:247] Caches are synced for expand I1228 14:44:20.809930 3251 shared_informer.go:247] Caches are synced for namespace I1228 14:44:20.830263 3251 shared_informer.go:247] Caches are synced for taint I1228 14:44:20.830786 3251 shared_informer.go:247] Caches are synced for PVC protection I1228 14:44:20.830929 3251 shared_informer.go:247] Caches are synced for persistent volume I1228 14:44:20.831639 3251 taint_manager.go:187] Starting NoExecuteTaintManager I1228 14:44:20.835133 3251 shared_informer.go:247] Caches are synced for PV protection I1228 14:44:20.848896 3251 shared_informer.go:247] Caches are synced for deployment I1228 14:44:20.876701 3251 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I1228 14:44:20.890666 3251 shared_informer.go:247] Caches are synced for endpoint_slice I1228 14:44:20.891678 3251 shared_informer.go:247] 
Caches are synced for disruption I1228 14:44:20.891989 3251 disruption.go:339] Sending events to api server. I1228 14:44:20.897203 3251 shared_informer.go:247] Caches are synced for endpoint I1228 14:44:20.993636 3251 shared_informer.go:247] Caches are synced for resource quota I1228 14:44:21.009658 3251 shared_informer.go:247] Caches are synced for attach detach I1228 14:44:21.024420 3251 shared_informer.go:247] Caches are synced for ClusterRoleAggregator I1228 14:44:21.218170 3251 controller.go:606] quota admission added evaluator for: replicasets.apps I1228 14:44:21.349007 3251 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-854c77959c to 1" I1228 14:44:21.356389 3251 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-86cbb8457f to 1" I1228 14:44:21.362404 3251 event.go:291] "Event occurred" object="kube-system/local-path-provisioner" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set local-path-provisioner-7c458769fb to 1" W1228 14:44:21.390527 3251 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server could not find the requested resource] I1228 14:44:21.617682 3251 event.go:291] "Event occurred" object="kube-system/helm-install-traefik" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: helm-install-traefik-p8j9z" I1228 14:44:21.649848 3251 controller.go:606] quota admission added evaluator for: events.events.k8s.io E1228 14:44:21.735962 3251 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request I1228 14:44:21.762914 3251 event.go:291] "Event occurred" object="kube-system/metrics-server-86cbb8457f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-86cbb8457f-lzn2w" I1228 14:44:21.830314 3251 event.go:291] "Event occurred" object="kube-system/coredns-854c77959c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-854c77959c-r9zzq" I1228 14:44:21.830541 3251 event.go:291] "Event occurred" object="kube-system/local-path-provisioner-7c458769fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: local-path-provisioner-7c458769fb-kfl59" E1228 14:44:22.216569 3251 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request I1228 14:44:22.217455 3251 shared_informer.go:240] Waiting for caches to sync for resource quota I1228 14:44:22.217705 3251 shared_informer.go:247] Caches are synced for resource quota E1228 14:44:22.220623 3251 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again I1228 14:44:22.477309 3251 trace.go:205] Trace[1785101304]: "Update" url:/apis/apps/v1/namespaces/kube-system/deployments/local-path-provisioner/status,user-agent:k3s/v1.20.0+k3s2 (linux/arm64) kubernetes/2ea6b16/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 
(28-Dec-2020 14:44:21.966) (total time: 510ms): Trace[1785101304]: ---"Object stored in database" 378ms (14:44:00.433) Trace[1785101304]: [510.773693ms] [510.773693ms] END E1228 14:44:22.483025 3251 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request I1228 14:44:22.773182 3251 trace.go:205] Trace[1854015104]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (28-Dec-2020 14:44:22.021) (total time: 751ms): Trace[1854015104]: ---"initial value restored" 283ms (14:44:00.305) Trace[1854015104]: ---"Transaction committed" 344ms (14:44:00.772) Trace[1854015104]: [751.29565ms] [751.29565ms] END I1228 14:44:22.884920 3251 shared_informer.go:240] Waiting for caches to sync for garbage collector I1228 14:44:23.061681 3251 trace.go:205] Trace[874561485]: "Create" url:/apis/events.k8s.io/v1/namespaces/kube-system/events,user-agent:k3s/v1.20.0+k3s2 (linux/arm64) kubernetes/2ea6b16/scheduler,client:127.0.0.1 (28-Dec-2020 14:44:22.171) (total time: 889ms): Trace[874561485]: ---"Object stored in database" 888ms (14:44:00.060) Trace[874561485]: [889.92872ms] [889.92872ms] END I1228 14:44:23.226016 3251 shared_informer.go:247] Caches are synced for garbage collector I1228 14:44:23.293539 3251 shared_informer.go:247] Caches are synced for garbage collector I1228 14:44:23.293676 3251 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage W1228 14:44:29.448569 3251 handler_proxy.go:102] no RequestInfo found in the context E1228 14:44:29.448881 3251 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] I1228 14:44:29.448974 3251 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. 
time="2020-12-28T14:44:48.805623810Z" level=info msg="Cluster-Http-Server 2020/12/28 14:44:48 http: TLS handshake error from 192.168.1.142:53472: remote error: tls: bad certificate" time="2020-12-28T14:44:49.013771641Z" level=info msg="Cluster-Http-Server 2020/12/28 14:44:49 http: TLS handshake error from 192.168.1.142:53480: remote error: tls: bad certificate" time="2020-12-28T14:44:49.941123859Z" level=info msg="Cluster-Http-Server 2020/12/28 14:44:49 http: TLS handshake error from 192.168.1.120:37988: remote error: tls: bad certificate" time="2020-12-28T14:44:50.153304407Z" level=info msg="Cluster-Http-Server 2020/12/28 14:44:50 http: TLS handshake error from 192.168.1.120:38000: remote error: tls: bad certificate" time="2020-12-28T14:44:50.897309668Z" level=info msg="Cluster-Http-Server 2020/12/28 14:44:50 http: TLS handshake error from 192.168.1.142:53484: remote error: tls: bad certificate" time="2020-12-28T14:44:50.926953772Z" level=info msg="Cluster-Http-Server 2020/12/28 14:44:50 http: TLS handshake error from 192.168.1.142:53488: remote error: tls: bad certificate" time="2020-12-28T14:44:52.798956091Z" level=info msg="certificate CN=rasp-2 signed by CN=k3s-server-ca@1609166591: notBefore=2020-12-28 14:43:11 +0000 UTC notAfter=2021-12-28 14:44:52 +0000 UTC" time="2020-12-28T14:44:52.814434360Z" level=info msg="Cluster-Http-Server 2020/12/28 14:44:52 http: TLS handshake error from 192.168.1.142:53492: remote error: tls: bad certificate" time="2020-12-28T14:44:52.840271526Z" level=info msg="Cluster-Http-Server 2020/12/28 14:44:52 http: TLS handshake error from 192.168.1.142:53496: remote error: tls: bad certificate" E1228 14:44:53.250201 3251 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request time="2020-12-28T14:44:54.338800475Z" level=info msg="certificate CN=system:node:rasp-2,O=system:nodes signed by CN=k3s-client-ca@1609166591: notBefore=2020-12-28 14:43:11 +0000 UTC notAfter=2021-12-28 14:44:54 +0000 UTC" time="2020-12-28T14:44:54.720017771Z" level=info msg="Cluster-Http-Server 2020/12/28 14:44:54 http: TLS handshake error from 192.168.1.142:53500: remote error: tls: bad certificate" time="2020-12-28T14:44:54.747310338Z" level=info msg="Cluster-Http-Server 2020/12/28 14:44:54 http: TLS handshake error from 192.168.1.142:53504: remote error: tls: bad certificate" I1228 14:44:54.976906 3251 request.go:655] Throttling request took 1.041033112s, request: GET:https://127.0.0.1:6444/apis/autoscaling/v2beta2?timeout=32s W1228 14:44:55.935041 3251 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request] time="2020-12-28T14:44:56.687814565Z" level=info msg="Cluster-Http-Server 2020/12/28 14:44:56 http: TLS handshake error from 192.168.1.142:53508: remote error: tls: bad certificate" time="2020-12-28T14:44:56.714025477Z" level=info msg="Cluster-Http-Server 2020/12/28 14:44:56 http: TLS handshake error from 192.168.1.142:53512: remote error: tls: bad certificate" time="2020-12-28T14:44:57.786825273Z" level=info msg="Handling backend connection request [rasp-2]" time="2020-12-28T14:44:58.586838621Z" level=info msg="Cluster-Http-Server 2020/12/28 14:44:58 http: TLS handshake error from 192.168.1.142:53516: remote error: tls: bad certificate" time="2020-12-28T14:44:58.614820920Z" level=info msg="Cluster-Http-Server 2020/12/28 14:44:58 http: TLS handshake error from 192.168.1.142:53520: remote error: 
tls: bad certificate"
This is from one of the agents:
time="2020-12-28T14:44:15Z" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock" time="2020-12-28T14:44:15Z" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/477a397d5ed8de53c8cee46d5f565f81e93de8dadbff870dc27452b47238f6f5" time="2020-12-28T14:44:49.546906148Z" level=info msg="Starting k3s agent v1.20.0+k3s2 (2ea6b163)" time="2020-12-28T14:44:49.548244869Z" level=info msg="Module overlay was already loaded" time="2020-12-28T14:44:49.548455958Z" level=info msg="Module nf_conntrack was already loaded" time="2020-12-28T14:44:49.548556269Z" level=info msg="Module br_netfilter was already loaded" time="2020-12-28T14:44:49.548652204Z" level=info msg="Module iptable_nat was already loaded" time="2020-12-28T14:44:49.552556443Z" level=info msg="Running load balancer 127.0.0.1:35497 -> [192.168.1.143:6443]" time="2020-12-28T14:44:54.432936976Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log" time="2020-12-28T14:44:54.433485454Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd" time="2020-12-28T14:44:55.436567562Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\"" time="2020-12-28T14:44:56.438367229Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\"" time="2020-12-28T14:44:57.493386308Z" level=info msg="Containerd is now running" time="2020-12-28T14:44:57.744790052Z" level=info msg="Connecting to proxy" url="wss://192.168.1.143:6443/v1-k3s/connect" time="2020-12-28T14:44:57.793198600Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us" time="2020-12-28T14:44:57.793692342Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/477a397d5ed8de53c8cee46d5f565f81e93de8dadbff870dc27452b47238f6f5/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=unix:///run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=rasp-2 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key" time="2020-12-28T14:44:57.799474704Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=rasp-2 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables" Flag --cloud-provider has 
been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet. Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed. W1228 14:44:57.801816 3398 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP. I1228 14:44:57.805659 3398 server.go:412] Version: v1.20.0+k3s2 E1228 14:44:58.049789 3398 node.go:161] Failed to retrieve node info: nodes "rasp-2" not found time="2020-12-28T14:44:58.050448141Z" level=info msg="Waiting for node rasp-2: nodes \"rasp-2\" not found" I1228 14:44:58.119735 3398 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt E1228 14:44:59.206065 3398 node.go:161] Failed to retrieve node info: nodes "rasp-2" not found time="2020-12-28T14:45:00.149398549Z" level=info msg="Waiting for node rasp-2: nodes \"rasp-2\" not found" E1228 14:45:01.481905 3398 node.go:161] Failed to retrieve node info: nodes "rasp-2" not found time="2020-12-28T14:45:02.242046500Z" level=info msg="Waiting for node rasp-2: nodes \"rasp-2\" not found" W1228 14:45:03.132109 3398 nvidia.go:81] Error reading "/sys/bus/pci/devices/": open /sys/bus/pci/devices/: no such file or directory W1228 14:45:03.140114 3398 sysinfo.go:203] Nodes topology is not available, providing CPU topology W1228 14:45:03.141755 3398 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory W1228 14:45:03.141958 3398 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu1/online: open /sys/devices/system/cpu/cpu1/online: no such file or directory W1228 14:45:03.142147 3398 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu2/online: open /sys/devices/system/cpu/cpu2/online: no such file or directory W1228 14:45:03.142371 3398 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu3/online: open /sys/devices/system/cpu/cpu3/online: no such file or directory E1228 14:45:03.142719 3398 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory W1228 14:45:03.143598 3398 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping W1228 14:45:03.143733 3398 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping W1228 14:45:03.143860 3398 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping W1228 14:45:03.143999 3398 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping E1228 14:45:03.144113 3398 machine.go:72] Cannot read number of physical cores correctly, number of cores set to 0 W1228 14:45:03.144857 3398 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping W1228 14:45:03.145000 3398 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping W1228 14:45:03.145204 3398 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping W1228 14:45:03.145364 3398 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping E1228 14:45:03.145410 3398 machine.go:86] Cannot read number of sockets correctly, number of sockets set to 0 W1228 14:45:03.145605 3398 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id" I1228 14:45:03.148206 3398 
server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to / I1228 14:45:03.150004 3398 container_manager_linux.go:287] container manager verified user specified cgroup-root exists: [] I1228 14:45:03.150189 3398 container_manager_linux.go:292] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:Percentage:0.05} GracePeriod:0s MinReclaim: } {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim: }]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false} I1228 14:45:03.150832 3398 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope I1228 14:45:03.150938 3398 container_manager_linux.go:323] [topologymanager] Initializing Topology Manager with none policy and container-level scope I1228 14:45:03.151008 3398 container_manager_linux.go:328] Creating device plugin manager: true I1228 14:45:03.152867 3398 kubelet.go:262] Adding pod path: /var/lib/rancher/k3s/agent/pod-manifests I1228 14:45:03.153234 3398 kubelet.go:273] Watching apiserver I1228 14:45:03.163167 3398 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.3-k3s1, apiVersion: v1alpha2 I1228 14:45:03.167097 3398 server.go:1177] Started kubelet I1228 14:45:03.175364 3398 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer I1228 14:45:03.177786 3398 server.go:148] Starting to listen on 0.0.0.0:10250 E1228 14:45:03.179632 3398 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.native": unable to find data in memory cache. E1228 14:45:03.179890 3398 kubelet.go:1271] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem I1228 14:45:03.185486 3398 server.go:409] Adding debug handlers to kubelet server. I1228 14:45:03.188843 3398 volume_manager.go:271] Starting Kubelet Volume Manager I1228 14:45:03.193057 3398 desired_state_of_world_populator.go:142] Desired state populator starts to run E1228 14:45:03.335769 3398 kubelet.go:2240] node "rasp-2" not found I1228 14:45:03.359883 3398 kubelet_node_status.go:71] Attempting to register node rasp-2 E1228 14:45:03.436470 3398 kubelet.go:2240] node "rasp-2" not found I1228 14:45:03.484514 3398 kubelet_network_linux.go:56] Initialized IPv4 iptables rules. I1228 14:45:03.484758 3398 status_manager.go:158] Starting to sync pod status with apiserver I1228 14:45:03.485102 3398 kubelet.go:1799] Starting kubelet main sync loop. 
E1228 14:45:03.485434 3398 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful] I1228 14:45:03.525875 3398 cpu_manager.go:193] [cpumanager] starting with none policy I1228 14:45:03.525977 3398 cpu_manager.go:194] [cpumanager] reconciling every 10s I1228 14:45:03.526219 3398 state_mem.go:36] [cpumanager] initializing new in-memory state store I1228 14:45:03.527635 3398 policy_none.go:43] [cpumanager] none policy: Start W1228 14:45:03.532471 3398 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found I1228 14:45:03.534064 3398 plugin_manager.go:114] Starting Kubelet Plugin Manager E1228 14:45:03.535887 3398 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "rasp-2" not found E1228 14:45:03.537071 3398 kubelet.go:2240] node "rasp-2" not found E1228 14:45:03.637345 3398 kubelet.go:2240] node "rasp-2" not found I1228 14:45:03.737643 3398 reconciler.go:157] Reconciler: start to sync state I1228 14:45:03.747140 3398 kubelet_node_status.go:74] Successfully registered node rasp-2 I1228 14:45:03.937991 3398 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.42.0.0/24 I1228 14:45:03.940533 3398 kubelet_network.go:77] Setting Pod CIDR: -> 10.42.0.0/24 time="2020-12-28T14:45:04.294184691Z" level=info msg="Node CIDR assigned for: rasp-2" I1228 14:45:04.294715 3398 flannel.go:92] Determining IP address of default interface I1228 14:45:04.297207 3398 flannel.go:105] Using interface with name wlan0 and address 192.168.1.120 I1228 14:45:04.324520 3398 kube.go:117] Waiting 10m0s for node controller to sync I1228 14:45:04.324702 3398 kube.go:300] Starting kube subnet manager time="2020-12-28T14:45:04.396426340Z" level=info msg="labels have been set successfully on node: rasp-2" I1228 14:45:04.755766 3398 network_policy_controller.go:149] Starting network policy controller I1228 14:45:05.325001 3398 kube.go:124] Node controller sync successful I1228 14:45:05.325556 3398 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false I1228 14:45:05.489488 3398 flannel.go:78] Wrote subnet file to /run/flannel/subnet.env I1228 14:45:05.489572 3398 flannel.go:82] Running backend. I1228 14:45:05.489621 3398 vxlan_network.go:60] watching for new subnet leases I1228 14:45:05.500788 3398 iptables.go:145] Some iptables rules are missing; deleting and recreating rules I1228 14:45:05.500916 3398 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT I1228 14:45:05.505674 3398 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT I1228 14:45:05.510380 3398 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT I1228 14:45:05.519582 3398 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT I1228 14:45:05.522878 3398 iptables.go:145] Some iptables rules are missing; deleting and recreating rules I1228 14:45:05.522991 3398 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN I1228 14:45:05.529926 3398 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully I1228 14:45:05.535035 3398 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN I1228 14:45:05.539813 3398 iptables.go:167] Deleting iptables rule: ! 
-s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully I1228 14:45:05.545013 3398 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN I1228 14:45:05.554149 3398 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully I1228 14:45:05.562892 3398 node.go:172] Successfully retrieved node IP: 192.168.1.120 I1228 14:45:05.563127 3398 server_others.go:143] kube-proxy node IP is an IPv4 address (192.168.1.120), assume IPv4 operation I1228 14:45:05.565805 3398 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN I1228 14:45:05.569514 3398 server_others.go:186] Using iptables Proxier. I1228 14:45:05.571897 3398 server.go:650] Version: v1.20.0+k3s2 I1228 14:45:05.576420 3398 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 I1228 14:45:05.576713 3398 conntrack.go:52] Setting nf_conntrack_max to 131072 I1228 14:45:05.579625 3398 conntrack.go:83] Setting conntrack hashsize to 32768 I1228 14:45:05.580631 3398 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully I1228 14:45:05.589153 3398 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I1228 14:45:05.589844 3398 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I1228 14:45:05.591873 3398 config.go:315] Starting service config controller I1228 14:45:05.592025 3398 shared_informer.go:240] Waiting for caches to sync for service config I1228 14:45:05.595167 3398 config.go:224] Starting endpoint slice config controller I1228 14:45:05.595357 3398 shared_informer.go:240] Waiting for caches to sync for endpoint slice config W1228 14:45:05.602526 3398 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/3449c45f-9bc2-418d-ad41-78bdb520a815/volumes" does not exist I1228 14:45:05.694862 3398 shared_informer.go:247] Caches are synced for service config I1228 14:45:05.695889 3398 shared_informer.go:247] Caches are synced for endpoint slice config I1228 14:45:08.147878 3398 topology_manager.go:187] [topologymanager] Topology Admit Handler I1228 14:45:08.257829 3398 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "local-path-provisioner-service-account-token-2kzr8" (UniqueName: "kubernetes.io/secret/b0c5a18c-18ce-4745-996d-1151dcb557be-local-path-provisioner-service-account-token-2kzr8") pod "local-path-provisioner-7c458769fb-kfl59" (UID: "b0c5a18c-18ce-4745-996d-1151dcb557be") I1228 14:45:08.258218 3398 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b0c5a18c-18ce-4745-996d-1151dcb557be-config-volume") pod "local-path-provisioner-7c458769fb-kfl59" (UID: "b0c5a18c-18ce-4745-996d-1151dcb557be") I1228 14:45:09.121369 3398 topology_manager.go:187] [topologymanager] Topology Admit Handler I1228 14:45:09.161621 3398 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "values" (UniqueName: "kubernetes.io/configmap/d5d57057-509d-415b-a215-24a6633296a0-values") pod "helm-install-traefik-p8j9z" (UID: "d5d57057-509d-415b-a215-24a6633296a0") I1228 14:45:09.161853 3398 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "content" (UniqueName: "kubernetes.io/configmap/d5d57057-509d-415b-a215-24a6633296a0-content") pod "helm-install-traefik-p8j9z" (UID: "d5d57057-509d-415b-a215-24a6633296a0") I1228 14:45:09.162129 3398 
reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "helm-traefik-token-bldqj" (UniqueName: "kubernetes.io/secret/d5d57057-509d-415b-a215-24a6633296a0-helm-traefik-token-bldqj") pod "helm-install-traefik-p8j9z" (UID: "d5d57057-509d-415b-a215-24a6633296a0") I1228 14:45:12.209176 3398 topology_manager.go:187] [topologymanager] Topology Admit Handler I1228 14:45:12.272848 3398 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-dir" (UniqueName: "kubernetes.io/empty-dir/e65b99f0-e3bc-4b97-8abb-ce379449427e-tmp-dir") pod "metrics-server-86cbb8457f-lzn2w" (UID: "e65b99f0-e3bc-4b97-8abb-ce379449427e") I1228 14:45:12.273139 3398 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "metrics-server-token-x5sp4" (UniqueName: "kubernetes.io/secret/e65b99f0-e3bc-4b97-8abb-ce379449427e-metrics-server-token-x5sp4") pod "metrics-server-86cbb8457f-lzn2w" (UID: "e65b99f0-e3bc-4b97-8abb-ce379449427e") I1228 14:45:13.122519 3398 topology_manager.go:187] [topologymanager] Topology Admit Handler I1228 14:45:13.177653 3398 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/be6e2ca9-c393-49c5-bd92-6f9abc008dc7-config-volume") pod "coredns-854c77959c-r9zzq" (UID: "be6e2ca9-c393-49c5-bd92-6f9abc008dc7") I1228 14:45:13.178415 3398 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-pcf72" (UniqueName: "kubernetes.io/secret/be6e2ca9-c393-49c5-bd92-6f9abc008dc7-coredns-token-pcf72") pod "coredns-854c77959c-r9zzq" (UID: "be6e2ca9-c393-49c5-bd92-6f9abc008dc7") W1228 14:47:00.002901 3398 eviction_manager.go:344] eviction manager: attempting to reclaim ephemeral-storage I1228 14:47:00.110740 3398 container_gc.go:85] attempting to delete unused containers I1228 14:47:00.118535 3398 image_gc_manager.go:321] attempting to delete unused images I1228 14:47:00.275716 3398 eviction_manager.go:355] eviction manager: must evict pod(s) to reclaim ephemeral-storage I1228 14:47:00.302528 3398 eviction_manager.go:373] eviction manager: pods ranked for eviction: helm-install-traefik-p8j9z_kube-system(d5d57057-509d-415b-a215-24a6633296a0), coredns-854c77959c-r9zzq_kube-system(be6e2ca9-c393-49c5-bd92-6f9abc008dc7), local-path-provisioner-7c458769fb-kfl59_kube-system(b0c5a18c-18ce-4745-996d-1151dcb557be), metrics-server-86cbb8457f-lzn2w_kube-system(e65b99f0-e3bc-4b97-8abb-ce379449427e) I1228 14:47:08.881689 3398 trace.go:205] Trace[923516253]: "iptables Monitor CANARY check" (28-Dec-2020 14:47:05.599) (total time: 3148ms): Trace[923516253]: [3.148847638s] [3.148847638s] END E1228 14:47:09.010843 3398 remote_runtime.go:227] CreateContainer in sandbox "4bcf420252ebb19ebccff6b86136bb17d65bf05b6f16756e5fee3df603d519d6" from runtime service failed: rpc error: code = Unknown desc = failed to create containerd container: copying of parent failed: failed to copy files: copy file range failed: no space left on device E1228 14:47:09.091161 3398 kuberuntime_manager.go:829] container 
&Container{Name:helm,Image:rancher/klipper-helm:v0.3.2,Command:[],Args:[install],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NAME,Value:traefik,ValueFrom:nil,},EnvVar{Name:VERSION,Value:,ValueFrom:nil,},EnvVar{Name:REPO,Value:,ValueFrom:nil,},EnvVar{Name:HELM_DRIVER,Value:secret,ValueFrom:nil,},EnvVar{Name:CHART_NAMESPACE,Value:kube-system,ValueFrom:nil,},EnvVar{Name:CHART,Value:https://%{KUBERNETES_API}%/static/charts/traefik-1.81.0.tgz,ValueFrom:nil,},EnvVar{Name:HELM_VERSION,Value:,ValueFrom:nil,},EnvVar{Name:NO_PROXY,Value:.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:values,ReadOnly:false,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:content,ReadOnly:false,MountPath:/chart,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:helm-traefik-token-bldqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod helm-install-traefik-p8j9z_kube-system(d5d57057-509d-415b-a215-24a6633296a0): CreateContainerError: failed to create containerd container: copying of parent failed: failed to copy files: copy file range failed: no space left on device E1228 14:47:09.099981 3398 pod_workers.go:191] Error syncing pod d5d57057-509d-415b-a215-24a6633296a0 ("helm-install-traefik-p8j9z_kube-system(d5d57057-509d-415b-a215-24a6633296a0)"), skipping: failed to "StartContainer" for "helm" with CreateContainerError: "failed to create containerd container: copying of parent failed: failed to copy files: copy file range failed: no space left on device" I1228 14:47:09.752098 3398 reconciler.go:196] operationExecutor.UnmountVolume started for volume "helm-traefik-token-bldqj" (UniqueName: "kubernetes.io/secret/d5d57057-509d-415b-a215-24a6633296a0-helm-traefik-token-bldqj") pod "d5d57057-509d-415b-a215-24a6633296a0" (UID: "d5d57057-509d-415b-a215-24a6633296a0") I1228 14:47:09.764422 3398 reconciler.go:196] operationExecutor.UnmountVolume started for volume "values" (UniqueName: "kubernetes.io/configmap/d5d57057-509d-415b-a215-24a6633296a0-values") pod "d5d57057-509d-415b-a215-24a6633296a0" (UID: "d5d57057-509d-415b-a215-24a6633296a0") I1228 14:47:09.764793 3398 reconciler.go:196] operationExecutor.UnmountVolume started for volume "content" (UniqueName: "kubernetes.io/configmap/d5d57057-509d-415b-a215-24a6633296a0-content") pod "d5d57057-509d-415b-a215-24a6633296a0" (UID: "d5d57057-509d-415b-a215-24a6633296a0") W1228 14:47:09.957167 3398 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/d5d57057-509d-415b-a215-24a6633296a0/volumes/kubernetes.io~configmap/content: clearQuota called, but quotas disabled W1228 14:47:09.957167 3398 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/d5d57057-509d-415b-a215-24a6633296a0/volumes/kubernetes.io~configmap/values: clearQuota called, but quotas disabled I1228 14:47:10.030176 3398 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5d57057-509d-415b-a215-24a6633296a0-values" (OuterVolumeSpecName: "values") pod 
"d5d57057-509d-415b-a215-24a6633296a0" (UID: "d5d57057-509d-415b-a215-24a6633296a0"). InnerVolumeSpecName "values". PluginName "kubernetes.io/configmap", VolumeGidValue "" I1228 14:47:10.030099 3398 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5d57057-509d-415b-a215-24a6633296a0-content" (OuterVolumeSpecName: "content") pod "d5d57057-509d-415b-a215-24a6633296a0" (UID: "d5d57057-509d-415b-a215-24a6633296a0"). InnerVolumeSpecName "content". PluginName "kubernetes.io/configmap", VolumeGidValue "" I1228 14:47:10.090082 3398 reconciler.go:319] Volume detached for volume "values" (UniqueName: "kubernetes.io/configmap/d5d57057-509d-415b-a215-24a6633296a0-values") on node "rasp-2" DevicePath "" I1228 14:47:10.090255 3398 reconciler.go:319] Volume detached for volume "content" (UniqueName: "kubernetes.io/configmap/d5d57057-509d-415b-a215-24a6633296a0-content") on node "rasp-2" DevicePath "" I1228 14:47:10.166363 3398 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5d57057-509d-415b-a215-24a6633296a0-helm-traefik-token-bldqj" (OuterVolumeSpecName: "helm-traefik-token-bldqj") pod "d5d57057-509d-415b-a215-24a6633296a0" (UID: "d5d57057-509d-415b-a215-24a6633296a0"). InnerVolumeSpecName "helm-traefik-token-bldqj". PluginName "kubernetes.io/secret", VolumeGidValue "" I1228 14:47:10.190938 3398 reconciler.go:319] Volume detached for volume "helm-traefik-token-bldqj" (UniqueName: "kubernetes.io/secret/d5d57057-509d-415b-a215-24a6633296a0-helm-traefik-token-bldqj") on node "rasp-2" DevicePath "" E1228 14:47:10.371180 3398 eviction_manager.go:587] eviction manager: pod helm-install-traefik-p8j9z_kube-system(d5d57057-509d-415b-a215-24a6633296a0) failed to evict timeout waiting to kill pod I1228 14:47:10.383746 3398 eviction_manager.go:204] eviction manager: pods helm-install-traefik-p8j9z_kube-system(d5d57057-509d-415b-a215-24a6633296a0) evicted, waiting for pod to be cleaned up E1228 14:47:11.599954 3398 kubelet_pods.go:1514] Pod attempted illegal phase transition from Failed to Pending: &PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:10.42.0.4,StartTime: ,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:helm,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:CreateContainerError,Message:failed to create containerd container: copying of parent failed: failed to copy files: copy file range failed: no space left on device,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:rancher/klipper-helm:v0.3.2,ImageID:,ContainerID:,Started:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.42.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},} W1228 14:47:40.384049 3398 eviction_manager.go:408] eviction manager: timed out waiting for pods helm-install-traefik-p8j9z_kube-system(d5d57057-509d-415b-a215-24a6633296a0) to be cleaned up W1228 14:47:40.547342 3398 eviction_manager.go:344] eviction manager: attempting to reclaim ephemeral-storage I1228 14:47:40.547582 3398 container_gc.go:85] attempting to delete unused containers I1228 14:47:40.841514 3398 image_gc_manager.go:321] attempting to delete unused images I1228 14:47:40.969140 3398 eviction_manager.go:355] eviction manager: must evict pod(s) to reclaim ephemeral-storage I1228 14:47:40.969515 3398 eviction_manager.go:373] eviction manager: pods ranked for 
eviction: coredns-854c77959c-r9zzq_kube-system(be6e2ca9-c393-49c5-bd92-6f9abc008dc7), local-path-provisioner-7c458769fb-kfl59_kube-system(b0c5a18c-18ce-4745-996d-1151dcb557be), metrics-server-86cbb8457f-lzn2w_kube-system(e65b99f0-e3bc-4b97-8abb-ce379449427e) E1228 14:47:40.969717 3398 eviction_manager.go:574] eviction manager: cannot evict a critical pod coredns-854c77959c-r9zzq_kube-system(be6e2ca9-c393-49c5-bd92-6f9abc008dc7) E1228 14:47:40.969815 3398 eviction_manager.go:574] eviction manager: cannot evict a critical pod local-path-provisioner-7c458769fb-kfl59_kube-system(b0c5a18c-18ce-4745-996d-1151dcb557be) E1228 14:47:40.969907 3398 eviction_manager.go:574] eviction manager: cannot evict a critical pod metrics-server-86cbb8457f-lzn2w_kube-system(e65b99f0-e3bc-4b97-8abb-ce379449427e) I1228 14:47:40.969974 3398 eviction_manager.go:396] eviction manager: unable to evict any pods from the node W1228 14:47:51.106114 3398 eviction_manager.go:344] eviction manager: attempting to reclaim ephemeral-storage I1228 14:47:51.106343 3398 container_gc.go:85] attempting to delete unused containers I1228 14:47:51.113685 3398 image_gc_manager.go:321] attempting to delete unused images I1228 14:47:51.315760 3398 eviction_manager.go:355] eviction manager: must evict pod(s) to reclaim ephemeral-storage I1228 14:47:51.316534 3398 eviction_manager.go:373] eviction manager: pods ranked for eviction: coredns-854c77959c-r9zzq_kube-system(be6e2ca9-c393-49c5-bd92-6f9abc008dc7), local-path-provisioner-7c458769fb-kfl59_kube-system(b0c5a18c-18ce-4745-996d-1151dcb557be), metrics-server-86cbb8457f-lzn2w_kube-system(e65b99f0-e3bc-4b97-8abb-ce379449427e) E1228 14:47:51.316751 3398 eviction_manager.go:574] eviction manager: cannot evict a critical pod coredns-854c77959c-r9zzq_kube-system(be6e2ca9-c393-49c5-bd92-6f9abc008dc7) E1228 14:47:51.316878 3398 eviction_manager.go:574] eviction manager: cannot evict a critical pod local-path-provisioner-7c458769fb-kfl59_kube-system(b0c5a18c-18ce-4745-996d-1151dcb557be) E1228 14:47:51.316970 3398 eviction_manager.go:574] eviction manager: cannot evict a critical pod metrics-server-86cbb8457f-lzn2w_kube-system(e65b99f0-e3bc-4b97-8abb-ce379449427e) I1228 14:47:51.317040 3398 eviction_manager.go:396] eviction manager: unable to evict any pods from the node W1228 14:48:01.448607 3398 eviction_manager.go:344] eviction manager: attempting to reclaim ephemeral-storage I1228 14:48:01.448809 3398 container_gc.go:85] attempting to delete unused containers I1228 14:48:01.456531 3398 image_gc_manager.go:321] attempting to delete unused images I1228 14:48:01.579443 3398 eviction_manager.go:355] eviction manager: must evict pod(s) to reclaim ephemeral-storage I1228 14:48:01.579803 3398 eviction_manager.go:373] eviction manager: pods ranked for eviction: coredns-854c77959c-r9zzq_kube-system(be6e2ca9-c393-49c5-bd92-6f9abc008dc7), local-path-provisioner-7c458769fb-kfl59_kube-system(b0c5a18c-18ce-4745-996d-1151dcb557be), metrics-server-86cbb8457f-lzn2w_kube-system(e65b99f0-e3bc-4b97-8abb-ce379449427e) E1228 14:48:01.579967 3398 eviction_manager.go:574] eviction manager: cannot evict a critical pod coredns-854c77959c-r9zzq_kube-system(be6e2ca9-c393-49c5-bd92-6f9abc008dc7) E1228 14:48:01.580091 3398 eviction_manager.go:574] eviction manager: cannot evict a critical pod local-path-provisioner-7c458769fb-kfl59_kube-system(b0c5a18c-18ce-4745-996d-1151dcb557be) E1228 14:48:01.580182 3398 eviction_manager.go:574] eviction manager: cannot evict a critical pod 
metrics-server-86cbb8457f-lzn2w_kube-system(e65b99f0-e3bc-4b97-8abb-ce379449427e) I1228 14:48:01.580249 3398 eviction_manager.go:396] eviction manager: unable to evict any pods from the node W1228 14:48:11.713589 3398 eviction_manager.go:344] eviction manager: attempting to reclaim ephemeral-storage I1228 14:48:11.713774 3398 container_gc.go:85] attempting to delete unused containers I1228 14:48:11.721076 3398 image_gc_manager.go:321] attempting to delete unused images I1228 14:48:11.834961 3398 eviction_manager.go:355] eviction manager: must evict pod(s) to reclaim ephemeral-storage I1228 14:48:11.835367 3398 eviction_manager.go:373] eviction manager: pods ranked for eviction: coredns-854c77959c-r9zzq_kube-system(be6e2ca9-c393-49c5-bd92-6f9abc008dc7), local-path-provisioner-7c458769fb-kfl59_kube-system(b0c5a18c-18ce-4745-996d-1151dcb557be), metrics-server-86cbb8457f-lzn2w_kube-system(e65b99f0-e3bc-4b97-8abb-ce379449427e) E1228 14:48:11.835541 3398 eviction_manager.go:574] eviction manager: cannot evict a critical pod coredns-854c77959c-r9zzq_kube-system(be6e2ca9-c393-49c5-bd92-6f9abc008dc7) E1228 14:48:11.835641 3398 eviction_manager.go:574] eviction manager: cannot evict a critical pod local-path-provisioner-7c458769fb-kfl59_kube-system(b0c5a18c-18ce-4745-996d-1151dcb557be) E1228 14:48:11.835729 3398 eviction_manager.go:574] eviction manager: cannot evict a critical pod metrics-server-86cbb8457f-lzn2w_kube-system(e65b99f0-e3bc-4b97-8abb-ce379449427e) I1228 14:48:11.835795 3398 eviction_manager.go:396] eviction manager: unable to evict any pods from the node W1228 14:48:21.981424 3398 eviction_manager.go:344] eviction manager: attempting to reclaim ephemeral-storage I1228 14:48:21.981612 3398 container_gc.go:85] attempting to delete unused containers I1228 14:48:21.988872 3398 image_gc_manager.go:321] attempting to delete unused images I1228 14:48:22.148352 3398 eviction_manager.go:355] eviction manager: must evict pod(s) to reclaim ephemeral-storage I1228 14:48:22.148770 3398 eviction_manager.go:373] eviction manager: pods ranked for eviction: coredns-854c77959c-r9zzq_kube-system(be6e2ca9-c393-49c5-bd92-6f9abc008dc7), local-path-provisioner-7c458769fb-kfl59_kube-system(b0c5a18c-18ce-4745-996d-1151dcb557be), metrics-server-86cbb8457f-lzn2w_kube-system(e65b99f0-e3bc-4b97-8abb-ce379449427e) E1228 14:48:22.149018 3398 eviction_manager.go:574] eviction manager: cannot evict a critical pod coredns-854c77959c-r9zzq_kube-system(be6e2ca9-c393-49c5-bd92-6f9abc008dc7) E1228 14:48:22.149140 3398 eviction_manager.go:574] eviction manager: cannot evict a critical pod local-path-provisioner-7c458769fb-kfl59_kube-system(b0c5a18c-18ce-4745-996d-1151dcb557be) E1228 14:48:22.149248 3398 eviction_manager.go:574] eviction manager: cannot evict a critical pod metrics-server-86cbb8457f-lzn2w_kube-system(e65b99f0-e3bc-4b97-8abb-ce379449427e) I1228 14:48:22.149380 3398 eviction_manager.go:396] eviction manager: unable to evict any pods from the node
These commands were run after resetting with the reset.yml playbook from k3s-ansible, changing the version to v1.20.0+k3s2, and reinstalling with the site.yml playbook. To be precise, the exact configuration I'm using with Alpine can be found here, though I don't think the differences really matter: https://github.com/k3s-io/k3s-ansible/pull/107 My only doubt is about the log entries where the agent tries to evict pods, which I believe are from the previous install. I don't see how that would be possible, though, since I wipe out these files during the reset (a rough shell equivalent follows the list):
- /etc/init.d/k3s
- /etc/runlevels/default/k3s
- /etc/rancher/k3s
- /var/lib/rancher/k3s
- /var/lib/kubelet
- /usr/local/bin/k3s
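For reference, roughly what the reset amounts to on each node; this is a sketch rather than the exact playbook tasks, and it assumes the openrc service name k3s that k3s-ansible sets up on Alpine:

```sh
# Stop the k3s openrc service before removing anything
rc-service k3s stop || true

# Remove the init script and its runlevel symlink
rm -f /etc/init.d/k3s /etc/runlevels/default/k3s

# Remove k3s config, server/agent state, kubelet data, and the binary
rm -rf /etc/rancher/k3s /var/lib/rancher/k3s /var/lib/kubelet
rm -f /usr/local/bin/k3s
```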
If you need any other info, let me know; I know it's hard to debug these issues 😅
I forgot to mention that in the latest logs I ran the server with the extra args --disable-agent --snapshotter native and the agent with --snapshotter native, but I had the same issue even before; these args were added after #2755. I can run again without these args if it's of any help.
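For completeness, this is roughly how those args end up on the command line when using the official install script (INSTALL_K3S_VERSION, INSTALL_K3S_EXEC, K3S_URL, and K3S_TOKEN are the script's standard variables; my actual setup passes the equivalent through the k3s-ansible playbook instead):

```sh
# Server (master): agent disabled, native snapshotter instead of overlayfs
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.20.0+k3s2" \
  INSTALL_K3S_EXEC="server --disable-agent --snapshotter native" sh -

# Agent: join the master (token from /var/lib/rancher/k3s/server/token)
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.20.0+k3s2" \
  K3S_URL="https://192.168.1.143:6443" K3S_TOKEN="${NODE_TOKEN}" \
  INSTALL_K3S_EXEC="agent --snapshotter native" sh -
```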
The pods from the previous installs not being removed made me suspicious, and I discovered that files and folders weren't actually deleted by containerd from the overlay fs. I'm not sure where exactly the issue lies, whether it's containerd with the native snapshotter or just how overlay fs works on Alpine. Nevertheless, actually resetting the /var/lib/rancher folder and reinstalling v1.20.0+k3s2 works, although the TLS handshake errors remain at server startup. After that, agents are able to join the cluster and there are no more TLS errors.
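In case anyone else hits this, this is roughly how I spotted the leftovers and what "actually resetting" means here; the paths are the defaults from the logs above, so adjust if you use a different data dir:

```sh
# Leftover snapshots from previous installs survive the reset and fill the SD card
du -sh /var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.*/

# Check remaining space; this was the "no space left on device" culprit
df -h /var/lib/rancher

# Full reset: stop k3s, then remove the whole rancher state dir, not just k3s/
rc-service k3s stop || true
rm -rf /var/lib/rancher
```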
Updated log from the server:
time="2020-12-28T16:43:19Z" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock" time="2020-12-28T16:43:19Z" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/477a397d5ed8de53c8cee46d5f565f81e93de8dadbff870dc27452b47238f6f5" time="2020-12-28T16:43:52.184972493Z" level=info msg="Starting k3s v1.20.0+k3s2 (2ea6b163)" time="2020-12-28T16:43:52.193725120Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" time="2020-12-28T16:43:52.194542717Z" level=info msg="Configuring database table schema and indexes, this may take a moment..." time="2020-12-28T16:43:52.201426872Z" level=info msg="Database tables and indexes are up to date" time="2020-12-28T16:43:52.210362415Z" level=info msg="Kine listening on unix://kine.sock" time="2020-12-28T16:43:52.362548250Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1609173832: notBefore=2020-12-28 16:43:52 +0000 UTC notAfter=2021-12-28 16:43:52 +0000 UTC" time="2020-12-28T16:43:52.368474237Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1609173832: notBefore=2020-12-28 16:43:52 +0000 UTC notAfter=2021-12-28 16:43:52 +0000 UTC" time="2020-12-28T16:43:52.374500015Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1609173832: notBefore=2020-12-28 16:43:52 +0000 UTC notAfter=2021-12-28 16:43:52 +0000 UTC" time="2020-12-28T16:43:52.380407512Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1609173832: notBefore=2020-12-28 16:43:52 +0000 UTC notAfter=2021-12-28 16:43:52 +0000 UTC" time="2020-12-28T16:43:52.386324176Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1609173832: notBefore=2020-12-28 16:43:52 +0000 UTC notAfter=2021-12-28 16:43:52 +0000 UTC" time="2020-12-28T16:43:52.391998291Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1609173832: notBefore=2020-12-28 16:43:52 +0000 UTC notAfter=2021-12-28 16:43:52 +0000 UTC" time="2020-12-28T16:43:52.398313909Z" level=info msg="certificate CN=cloud-controller-manager signed by CN=k3s-client-ca@1609173832: notBefore=2020-12-28 16:43:52 +0000 UTC notAfter=2021-12-28 16:43:52 +0000 UTC" time="2020-12-28T16:43:52.408482722Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1609173832: notBefore=2020-12-28 16:43:52 +0000 UTC notAfter=2021-12-28 16:43:52 +0000 UTC" time="2020-12-28T16:43:52.418334454Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1609173832: notBefore=2020-12-28 16:43:52 +0000 UTC notAfter=2021-12-28 16:43:52 +0000 UTC" time="2020-12-28T16:43:52.427866293Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1609173832: notBefore=2020-12-28 16:43:52 +0000 UTC notAfter=2021-12-28 16:43:52 +0000 UTC" time="2020-12-28T16:43:52.433624052Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1609173832: notBefore=2020-12-28 16:43:52 +0000 UTC notAfter=2021-12-28 16:43:52 +0000 UTC" time="2020-12-28T16:43:52.443421462Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1609173832: notBefore=2020-12-28 16:43:52 +0000 UTC notAfter=2021-12-28 16:43:52 +0000 UTC" time="2020-12-28T16:43:53.550722390Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1609173832: notBefore=2020-12-28 16:43:52 +0000 UTC notAfter=2021-12-28 16:43:53 +0000 UTC" time="2020-12-28T16:43:53.553166742Z" 
level=info msg="Active TLS secret (ver=) (count 7): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.1.143:192.168.1.143 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=4F104FF2B5BF9444A7A539873D471E752AE34EF1]" time="2020-12-28T16:43:53.668766144Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --feature-gates=ServiceAccountIssuerDiscovery=false --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24. I1228 16:43:53.679718 2850 server.go:659] external host was not specified, using 192.168.1.143 I1228 16:43:53.681905 2850 server.go:196] Version: v1.20.0+k3s2 I1228 16:44:02.595366 2850 shared_informer.go:240] Waiting for caches to sync for node_authorizer I1228 16:44:02.614949 2850 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I1228 16:44:02.615095 2850 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. I1228 16:44:02.636012 2850 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. 
I1228 16:44:02.636176 2850 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. I1228 16:44:02.860701 2850 instance.go:289] Using reconciler: lease I1228 16:44:03.079532 2850 rest.go:131] the default service ipfamily for this cluster is: IPv4 W1228 16:44:06.291970 2850 genericapiserver.go:419] Skipping API batch/v2alpha1 because it has no resources. W1228 16:44:06.432728 2850 genericapiserver.go:419] Skipping API discovery.k8s.io/v1alpha1 because it has no resources. W1228 16:44:06.595535 2850 genericapiserver.go:419] Skipping API node.k8s.io/v1alpha1 because it has no resources. W1228 16:44:06.719304 2850 genericapiserver.go:419] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W1228 16:44:06.774107 2850 genericapiserver.go:419] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. W1228 16:44:06.861311 2850 genericapiserver.go:419] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W1228 16:44:06.899279 2850 genericapiserver.go:419] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources. W1228 16:44:06.974540 2850 genericapiserver.go:419] Skipping API apps/v1beta2 because it has no resources. W1228 16:44:06.974673 2850 genericapiserver.go:419] Skipping API apps/v1beta1 because it has no resources. I1228 16:44:07.108177 2850 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I1228 16:44:07.108354 2850 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. 
time="2020-12-28T16:44:07.193713527Z" level=info msg="Waiting for API server to become available" time="2020-12-28T16:44:07.193737485Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0" time="2020-12-28T16:44:07.199448682Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true" time="2020-12-28T16:44:07.289482278Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token" time="2020-12-28T16:44:07.289806025Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.1.143:6443 -t ${NODE_TOKEN}" time="2020-12-28T16:44:07.301596489Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml" time="2020-12-28T16:44:07.301760863Z" level=info msg="Run: k3s kubectl" I1228 16:44:17.286651 2850 trace.go:205] Trace[1449785991]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 16:44:07.282) (total time: 10004ms): Trace[1449785991]: [10.004079352s] [10.004079352s] END E1228 16:44:17.286813 2850 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1:6444/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout time="2020-12-28T16:44:17.311481985Z" level=info msg="Waiting for cloudcontroller rbac role to be created" I1228 16:44:28.227608 2850 trace.go:205] Trace[865489794]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (28-Dec-2020 16:44:18.221) (total time: 10005ms): Trace[865489794]: [10.005997042s] [10.005997042s] END E1228 16:44:28.227765 2850 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1:6444/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout time="2020-12-28T16:44:28.337757268Z" level=info msg="Waiting for cloudcontroller rbac role to be created" I1228 16:44:35.382037 2850 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt I1228 16:44:35.382045 2850 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt I1228 16:44:35.384146 2850 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key I1228 16:44:35.389921 2850 secure_serving.go:197] Serving securely on 127.0.0.1:6444 I1228 16:44:35.390046 2850 tlsconfig.go:240] Starting DynamicServingCertificateController I1228 16:44:35.390437 2850 apiservice_controller.go:97] Starting APIServiceRegistrationController I1228 16:44:35.390513 2850 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I1228 
16:44:35.395183 2850 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I1228 16:44:35.395368 2850 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller I1228 16:44:35.395537 2850 available_controller.go:475] Starting AvailableConditionController I1228 16:44:35.395587 2850 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I1228 16:44:35.395754 2850 autoregister_controller.go:141] Starting autoregister controller I1228 16:44:35.395814 2850 cache.go:32] Waiting for caches to sync for autoregister controller I1228 16:44:35.395972 2850 controller.go:83] Starting OpenAPI AggregationController I1228 16:44:35.396381 2850 apf_controller.go:249] Starting API Priority and Fairness config controller I1228 16:44:35.399780 2850 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt I1228 16:44:35.400100 2850 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt I1228 16:44:35.403301 2850 crdregistration_controller.go:111] Starting crd-autoregister controller I1228 16:44:35.403406 2850 shared_informer.go:240] Waiting for caches to sync for crd-autoregister I1228 16:44:35.411536 2850 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key I1228 16:44:35.411975 2850 customresource_discovery_controller.go:209] Starting DiscoveryController I1228 16:44:35.437826 2850 controller.go:86] Starting OpenAPI controller I1228 16:44:35.438052 2850 naming_controller.go:291] Starting NamingConditionController I1228 16:44:35.438194 2850 establishing_controller.go:76] Starting EstablishingController I1228 16:44:35.438447 2850 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I1228 16:44:35.438572 2850 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I1228 16:44:35.438701 2850 crd_finalizer.go:266] Starting CRDFinalizer I1228 16:44:36.381821 2850 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I1228 16:44:36.381992 2850 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). 
E1228 16:44:36.807636 2850 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "cloud-controller-manager" cannot list resource "nodes" in API group "" at the cluster scope time="2020-12-28T16:44:36.851418602Z" level=info msg="Waiting for cloudcontroller rbac role to be created" I1228 16:44:36.898551 2850 cache.go:39] Caches are synced for APIServiceRegistrationController controller I1228 16:44:36.898789 2850 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I1228 16:44:36.899022 2850 cache.go:39] Caches are synced for autoregister controller I1228 16:44:36.906058 2850 shared_informer.go:247] Caches are synced for crd-autoregister I1228 16:44:36.996034 2850 cache.go:39] Caches are synced for AvailableConditionController controller I1228 16:44:36.996881 2850 apf_controller.go:253] Running API Priority and Fairness config worker I1228 16:44:37.036810 2850 controller.go:606] quota admission added evaluator for: namespaces I1228 16:44:37.109943 2850 shared_informer.go:247] Caches are synced for node_authorizer I1228 16:44:37.152970 2850 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000 I1228 16:44:37.425660 2850 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000 I1228 16:44:37.425797 2850 storage_scheduling.go:148] all system priority classes are created successfully or already exist. E1228 16:44:37.578674 2850 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.43.0.1"}: failed to allocated ip:10.43.0.1 with error:cannot allocate resources of type serviceipallocations at this time E1228 16:44:37.653600 2850 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.1.143, ResourceVersion: 0, AdditionalErrorMsg: time="2020-12-28T16:44:37.982043690Z" level=info msg="Waiting for cloudcontroller rbac role to be created" time="2020-12-28T16:44:39.023532777Z" level=info msg="Waiting for cloudcontroller rbac role to be created" time="2020-12-28T16:44:40.067774183Z" level=info msg="Waiting for cloudcontroller rbac role to be created" E1228 16:44:40.938911 2850 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "cloud-controller-manager" cannot list resource "nodes" in API group "" at the cluster scope time="2020-12-28T16:44:41.110720288Z" level=info msg="Waiting for cloudcontroller rbac role to be created" I1228 16:44:41.339523 2850 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io I1228 16:44:41.649257 2850 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io W1228 16:44:42.114005 2850 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.1.143] I1228 16:44:42.123988 2850 controller.go:606] quota admission added evaluator for: endpoints time="2020-12-28T16:44:42.157808907Z" level=info msg="Waiting for cloudcontroller rbac role to be created" I1228 16:44:42.165198 2850 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io Flag --address has been deprecated, see --bind-address instead. 
time="2020-12-28T16:44:42.218783709Z" level=info msg="Kube API server is now running" time="2020-12-28T16:44:42.219050737Z" level=info msg="k3s is up and running" I1228 16:44:42.268767 2850 controllermanager.go:176] Version: v1.20.0+k3s2 I1228 16:44:42.272446 2850 deprecated_insecure_serving.go:53] Serving insecurely on 127.0.0.1:10252 time="2020-12-28T16:44:42.294749883Z" level=info msg="Creating CRD addons.k3s.cattle.io" time="2020-12-28T16:44:42.337996671Z" level=info msg="Creating CRD helmcharts.helm.cattle.io" time="2020-12-28T16:44:42.441602486Z" level=info msg="Creating CRD helmchartconfigs.helm.cattle.io" W1228 16:44:42.465150 2850 authorization.go:47] Authorization is disabled W1228 16:44:42.465298 2850 authentication.go:40] Authentication is disabled I1228 16:44:42.465374 2850 deprecated_insecure_serving.go:51] Serving healthz insecurely on 127.0.0.1:10251 time="2020-12-28T16:44:42.589782941Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available" time="2020-12-28T16:44:43.111743073Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available" time="2020-12-28T16:44:43.111909582Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available" time="2020-12-28T16:44:43.198727322Z" level=info msg="Waiting for cloudcontroller rbac role to be created" time="2020-12-28T16:44:43.633056856Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available" time="2020-12-28T16:44:43.633281802Z" level=info msg="Waiting for CRD helmchartconfigs.helm.cattle.io to become available" I1228 16:44:44.183518 2850 request.go:655] Throttling request took 1.002821267s, request: GET:https://127.0.0.1:6444/apis/networking.k8s.io/v1beta1?timeout=32s time="2020-12-28T16:44:44.186916120Z" level=info msg="Done waiting for CRD helmchartconfigs.helm.cattle.io to become available" time="2020-12-28T16:44:44.279143029Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.81.0.tgz" time="2020-12-28T16:44:44.280503798Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml" time="2020-12-28T16:44:44.281417383Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml" time="2020-12-28T16:44:44.282197168Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml" time="2020-12-28T16:44:44.282991119Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml" time="2020-12-28T16:44:44.283734081Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml" time="2020-12-28T16:44:44.284753707Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml" time="2020-12-28T16:44:44.285665521Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml" time="2020-12-28T16:44:44.286524107Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml" time="2020-12-28T16:44:44.287298058Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml" time="2020-12-28T16:44:44.287921438Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml" time="2020-12-28T16:44:44.288596953Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml" 
time="2020-12-28T16:44:44.289112729Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml" time="2020-12-28T16:44:44.318290949Z" level=info msg="Waiting for cloudcontroller rbac role to be created" time="2020-12-28T16:44:44.694737684Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller" time="2020-12-28T16:44:44.696314180Z" level=info msg="Starting /v1, Kind=Secret controller" time="2020-12-28T16:44:44.799917703Z" level=info msg="Active TLS secret k3s-serving (ver=215) (count 7): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.1.143:192.168.1.143 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=4F104FF2B5BF9444A7A539873D471E752AE34EF1]" time="2020-12-28T16:44:45.031095153Z" level=info msg="Cluster dns configmap has been set successfully" time="2020-12-28T16:44:45.072432063Z" level=info msg="Starting /v1, Kind=Node controller" time="2020-12-28T16:44:45.072784039Z" level=info msg="Starting /v1, Kind=Service controller" time="2020-12-28T16:44:45.073338513Z" level=info msg="Starting /v1, Kind=Pod controller" time="2020-12-28T16:44:45.073629240Z" level=info msg="Starting /v1, Kind=Endpoints controller" I1228 16:44:45.102014 2850 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io I1228 16:44:45.167102 2850 shared_informer.go:240] Waiting for caches to sync for tokens I1228 16:44:45.268251 2850 shared_informer.go:247] Caches are synced for tokens I1228 16:44:45.362957 2850 controller.go:606] quota admission added evaluator for: serviceaccounts time="2020-12-28T16:44:45.369101565Z" level=info msg="Running cloud-controller-manager for provider k3s with args --profiling=false" I1228 16:44:45.369935 2850 controllermanager.go:127] Version: v1.20.0+k3s2 I1228 16:44:45.391793 2850 controllermanager.go:554] Started "csrcleaner" I1228 16:44:45.393015 2850 cleaner.go:82] Starting CSR cleaner controller time="2020-12-28T16:44:45.501069827Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller" time="2020-12-28T16:44:45.501197378Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller" time="2020-12-28T16:44:45.501321751Z" level=info msg="Starting batch/v1, Kind=Job controller" I1228 16:44:45.758556 2850 node_ipam_controller.go:91] Sending events to api server. I1228 16:44:45.996873 2850 controller.go:606] quota admission added evaluator for: deployments.apps I1228 16:44:46.149741 2850 node_controller.go:115] Sending events to api server. I1228 16:44:46.150478 2850 controllermanager.go:238] Started "cloud-node" I1228 16:44:46.150619 2850 node_controller.go:154] Waiting for informer caches to sync I1228 16:44:46.167115 2850 node_lifecycle_controller.go:77] Sending events to api server I1228 16:44:46.167495 2850 controllermanager.go:238] Started "cloud-node-lifecycle" I1228 16:44:48.608528 2850 controller.go:606] quota admission added evaluator for: helmcharts.helm.cattle.io I1228 16:44:48.859963 2850 controller.go:606] quota admission added evaluator for: jobs.batch I1228 16:44:56.320360 2850 range_allocator.go:82] Sending events to api server. I1228 16:44:56.321699 2850 range_allocator.go:110] No Service CIDR provided. Skipping filtering out service addresses. 
I1228 16:44:56.321807 2850 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses. I1228 16:44:56.321995 2850 controllermanager.go:554] Started "nodeipam" I1228 16:44:56.322128 2850 node_ipam_controller.go:159] Starting ipam controller I1228 16:44:56.322188 2850 shared_informer.go:240] Waiting for caches to sync for node I1228 16:44:56.392926 2850 controllermanager.go:554] Started "podgc" I1228 16:44:56.393505 2850 gc_controller.go:89] Starting GC controller I1228 16:44:56.393572 2850 shared_informer.go:240] Waiting for caches to sync for GC I1228 16:44:56.471116 2850 controllermanager.go:554] Started "serviceaccount" I1228 16:44:56.472386 2850 serviceaccounts_controller.go:117] Starting service account controller I1228 16:44:56.472485 2850 shared_informer.go:240] Waiting for caches to sync for service account I1228 16:44:56.558693 2850 controllermanager.go:554] Started "deployment" I1228 16:44:56.559816 2850 deployment_controller.go:153] Starting deployment controller I1228 16:44:56.559918 2850 shared_informer.go:240] Waiting for caches to sync for deployment I1228 16:44:56.637280 2850 controllermanager.go:554] Started "replicaset" I1228 16:44:56.638512 2850 replica_set.go:182] Starting replicaset controller I1228 16:44:56.638614 2850 shared_informer.go:240] Waiting for caches to sync for ReplicaSet I1228 16:44:56.717496 2850 controllermanager.go:554] Started "statefulset" I1228 16:44:56.718585 2850 stateful_set.go:146] Starting stateful set controller I1228 16:44:56.718685 2850 shared_informer.go:240] Waiting for caches to sync for stateful set I1228 16:44:56.748736 2850 controllermanager.go:554] Started "csrapproving" I1228 16:44:56.749738 2850 certificate_controller.go:118] Starting certificate controller "csrapproving" I1228 16:44:56.749846 2850 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving I1228 16:44:56.778075 2850 node_lifecycle_controller.go:380] Sending events to api server. I1228 16:44:56.780020 2850 taint_manager.go:163] Sending events to api server. I1228 16:44:56.780719 2850 node_lifecycle_controller.go:508] Controller will reconcile labels. 
I1228 16:44:56.781006 2850 controllermanager.go:554] Started "nodelifecycle"
I1228 16:44:56.781157 2850 node_lifecycle_controller.go:542] Starting node controller
I1228 16:44:56.781363 2850 shared_informer.go:240] Waiting for caches to sync for taint
I1228 16:44:56.856358 2850 controllermanager.go:554] Started "persistentvolume-binder"
I1228 16:44:56.856503 2850 pv_controller_base.go:307] Starting persistent volume controller
I1228 16:44:56.856636 2850 shared_informer.go:240] Waiting for caches to sync for persistent volume
I1228 16:44:56.947003 2850 controllermanager.go:554] Started "attachdetach"
I1228 16:44:56.948043 2850 attach_detach_controller.go:328] Starting attach detach controller
I1228 16:44:56.948145 2850 shared_informer.go:240] Waiting for caches to sync for attach detach
I1228 16:44:57.038569 2850 controllermanager.go:554] Started "endpointslicemirroring"
I1228 16:44:57.039741 2850 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
I1228 16:44:57.039850 2850 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
I1228 16:44:57.071260 2850 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
I1228 16:44:57.071444 2850 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
I1228 16:44:57.071604 2850 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
I1228 16:44:57.074840 2850 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
I1228 16:44:57.074979 2850 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
I1228 16:44:57.075115 2850 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
I1228 16:44:57.078601 2850 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
I1228 16:44:57.078730 2850 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
I1228 16:44:57.078870 2850 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
I1228 16:44:57.082268 2850 controllermanager.go:554] Started "csrsigning"
W1228 16:44:57.082400 2850 controllermanager.go:533] "bootstrapsigner" is disabled
I1228 16:44:57.082498 2850 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
I1228 16:44:57.082569 2850 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
I1228 16:44:57.082716 2850 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
I1228 16:44:57.174861 2850 controllermanager.go:554] Started "endpoint"
I1228 16:44:57.174994 2850 endpoints_controller.go:184] Starting endpoint controller
I1228 16:44:57.175059 2850 shared_informer.go:240] Waiting for caches to sync for endpoint
I1228 16:44:57.252059 2850 controllermanager.go:554] Started "endpointslice"
W1228 16:44:57.252280 2850 controllermanager.go:533] "tokencleaner" is disabled
I1228 16:44:57.253622 2850 endpointslice_controller.go:237] Starting endpoint slice controller
I1228 16:44:57.253738 2850 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
I1228 16:44:57.328167 2850 controllermanager.go:554] Started "persistentvolume-expander"
I1228 16:44:57.329360 2850 expand_controller.go:310] Starting expand controller
I1228 16:44:57.329479 2850 shared_informer.go:240] Waiting for caches to sync for expand
I1228 16:44:57.407430 2850 controllermanager.go:554] Started "clusterrole-aggregation"
I1228 16:44:57.409631 2850 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
I1228 16:44:57.409741 2850 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
I1228 16:44:57.489382 2850 controllermanager.go:554] Started "root-ca-cert-publisher"
I1228 16:44:57.490637 2850 publisher.go:98] Starting root CA certificate configmap publisher
I1228 16:44:57.490719 2850 shared_informer.go:240] Waiting for caches to sync for crt configmap
I1228 16:44:57.571565 2850 controllermanager.go:554] Started "daemonset"
I1228 16:44:57.572692 2850 daemon_controller.go:285] Starting daemon sets controller
I1228 16:44:57.572788 2850 shared_informer.go:240] Waiting for caches to sync for daemon sets
I1228 16:44:58.995033 2850 controllermanager.go:554] Started "horizontalpodautoscaling"
W1228 16:44:58.995179 2850 controllermanager.go:546] Skipping "ephemeral-volume"
I1228 16:44:58.995353 2850 horizontal.go:169] Starting HPA controller
I1228 16:44:58.995419 2850 shared_informer.go:240] Waiting for caches to sync for HPA
I1228 16:44:59.064529 2850 controllermanager.go:554] Started "job"
I1228 16:44:59.064670 2850 job_controller.go:148] Starting job controller
I1228 16:44:59.064733 2850 shared_informer.go:240] Waiting for caches to sync for job
I1228 16:44:59.156597 2850 controllermanager.go:554] Started "disruption"
I1228 16:44:59.157754 2850 disruption.go:331] Starting disruption controller
I1228 16:44:59.157842 2850 shared_informer.go:240] Waiting for caches to sync for disruption
I1228 16:44:59.227084 2850 controllermanager.go:554] Started "ttl"
W1228 16:44:59.227197 2850 controllermanager.go:533] "route" is disabled
I1228 16:44:59.228113 2850 ttl_controller.go:121] Starting TTL controller
I1228 16:44:59.228183 2850 shared_informer.go:240] Waiting for caches to sync for TTL
I1228 16:44:59.301313 2850 controllermanager.go:554] Started "cronjob"
I1228 16:44:59.302203 2850 cronjob_controller.go:96] Starting CronJob Manager
E1228 16:44:59.601144 2850 namespaced_resources_deleter.go:161] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1228 16:44:59.601832 2850 controllermanager.go:554] Started "namespace"
W1228 16:44:59.601906 2850 controllermanager.go:533] "cloud-node-lifecycle" is disabled
I1228 16:44:59.602006 2850 namespace_controller.go:200] Starting namespace controller
I1228 16:44:59.602064 2850 shared_informer.go:240] Waiting for caches to sync for namespace
I1228 16:44:59.670797 2850 controllermanager.go:554] Started "pv-protection"
W1228 16:44:59.670926 2850 controllermanager.go:546] Skipping "ttl-after-finished"
I1228 16:44:59.670963 2850 pv_protection_controller.go:83] Starting PV protection controller
I1228 16:44:59.671021 2850 shared_informer.go:240] Waiting for caches to sync for PV protection
I1228 16:44:59.742061 2850 controllermanager.go:554] Started "replicationcontroller"
I1228 16:44:59.743277 2850 replica_set.go:182] Starting replicationcontroller controller
I1228 16:44:59.743393 2850 shared_informer.go:240] Waiting for caches to sync for ReplicationController
E1228 16:45:00.470633 2850 resource_quota_controller.go:162] initial discovery check failure, continuing and counting on future sync update: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1228 16:45:00.471300 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I1228 16:45:00.471752 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I1228 16:45:00.472175 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for helmchartconfigs.helm.cattle.io
I1228 16:45:00.472419 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I1228 16:45:00.472579 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I1228 16:45:00.472850 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps
I1228 16:45:00.473030 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps
I1228 16:45:00.473295 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io
I1228 16:45:00.474291 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.extensions
I1228 16:45:00.474470 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps
I1228 16:45:00.474668 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps
I1228 16:45:00.474822 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch
I1228 16:45:00.475089 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges
I1228 16:45:00.475339 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates
I1228 16:45:00.475618 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints
I1228 16:45:00.475839 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I1228 16:45:00.476007 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
I1228 16:45:00.476858 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts
I1228 16:45:00.477050 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps
I1228 16:45:00.477432 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I1228 16:45:00.477723 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io
I1228 16:45:00.477981 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for addons.k3s.cattle.io
I1228 16:45:00.478194 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch
I1228 16:45:00.478448 2850 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I1228 16:45:00.478568 2850 controllermanager.go:554] Started "resourcequota"
I1228 16:45:00.478637 2850 resource_quota_controller.go:273] Starting resource quota controller
I1228 16:45:00.478708 2850 shared_informer.go:240] Waiting for caches to sync for resource quota
I1228 16:45:00.478892 2850 resource_quota_monitor.go:304] QuotaMonitor running
I1228 16:45:00.585891 2850 garbagecollector.go:142] Starting garbage collector controller
I1228 16:45:00.586007 2850 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1228 16:45:00.586144 2850 graph_builder.go:289] GraphBuilder running
I1228 16:45:00.587438 2850 controllermanager.go:554] Started "garbagecollector"
W1228 16:45:00.587570 2850 controllermanager.go:533] "service" is disabled
I1228 16:45:00.679350 2850 controllermanager.go:554] Started "pvc-protection"
I1228 16:45:00.681566 2850 pvc_protection_controller.go:110] Starting PVC protection controller
I1228 16:45:00.681693 2850 shared_informer.go:240] Waiting for caches to sync for PVC protection
I1228 16:45:00.832813 2850 shared_informer.go:247] Caches are synced for TTL
I1228 16:45:00.839990 2850 shared_informer.go:247] Caches are synced for node
I1228 16:45:00.840197 2850 range_allocator.go:172] Starting range CIDR allocator
I1228 16:45:00.840300 2850 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I1228 16:45:00.840349 2850 shared_informer.go:247] Caches are synced for cidrallocator
I1228 16:45:00.962391 2850 shared_informer.go:247] Caches are synced for certificate-csrapproving
I1228 16:45:00.972132 2850 shared_informer.go:247] Caches are synced for PV protection
I1228 16:45:00.972958 2850 shared_informer.go:247] Caches are synced for service account
I1228 16:45:00.982808 2850 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I1228 16:45:00.983030 2850 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I1228 16:45:00.989388 2850 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I1228 16:45:00.989608 2850 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I1228 16:45:01.004445 2850 shared_informer.go:247] Caches are synced for namespace
I1228 16:45:01.029877 2850 shared_informer.go:247] Caches are synced for expand
I1228 16:45:01.046933 2850 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I1228 16:45:01.073554 2850 shared_informer.go:247] Caches are synced for daemon sets
I1228 16:45:01.081384 2850 shared_informer.go:247] Caches are synced for endpoint
I1228 16:45:01.081922 2850 shared_informer.go:247] Caches are synced for taint
I1228 16:45:01.082764 2850 taint_manager.go:187] Starting NoExecuteTaintManager
I1228 16:45:01.084102 2850 shared_informer.go:247] Caches are synced for PVC protection
I1228 16:45:01.093769 2850 shared_informer.go:247] Caches are synced for GC
I1228 16:45:01.097708 2850 shared_informer.go:247] Caches are synced for HPA
I1228 16:45:01.110060 2850 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I1228 16:45:01.119138 2850 shared_informer.go:247] Caches are synced for stateful set
I1228 16:45:01.143642 2850 shared_informer.go:247] Caches are synced for ReplicationController
I1228 16:45:01.144105 2850 shared_informer.go:247] Caches are synced for ReplicaSet
I1228 16:45:01.149417 2850 shared_informer.go:247] Caches are synced for attach detach
I1228 16:45:01.154164 2850 shared_informer.go:247] Caches are synced for endpoint_slice
I1228 16:45:01.158141 2850 shared_informer.go:247] Caches are synced for disruption
I1228 16:45:01.158340 2850 disruption.go:339] Sending events to api server.
I1228 16:45:01.158711 2850 shared_informer.go:247] Caches are synced for persistent volume
I1228 16:45:01.166493 2850 shared_informer.go:247] Caches are synced for deployment
I1228 16:45:01.166980 2850 shared_informer.go:247] Caches are synced for job
I1228 16:45:01.186959 2850 shared_informer.go:247] Caches are synced for resource quota
I1228 16:45:01.197121 2850 shared_informer.go:247] Caches are synced for crt configmap
I1228 16:45:01.606139 2850 controller.go:606] quota admission added evaluator for: replicasets.apps
I1228 16:45:01.624312 2850 request.go:655] Throttling request took 1.047675124s, request: GET:https://127.0.0.1:6444/apis/extensions/v1beta1?timeout=32s
I1228 16:45:01.680492 2850 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-86cbb8457f to 1"
I1228 16:45:01.788733 2850 event.go:291] "Event occurred" object="kube-system/local-path-provisioner" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set local-path-provisioner-7c458769fb to 1"
I1228 16:45:01.788961 2850 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-854c77959c to 1"
I1228 16:45:01.790893 2850 event.go:291] "Event occurred" object="kube-system/helm-install-traefik" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: helm-install-traefik-cz6fm"
I1228 16:45:01.802639 2850 controller.go:606] quota admission added evaluator for: events.events.k8s.io
I1228 16:45:02.318926 2850 event.go:291] "Event occurred" object="kube-system/local-path-provisioner-7c458769fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: local-path-provisioner-7c458769fb-sssm6"
I1228 16:45:02.319131 2850 event.go:291] "Event occurred" object="kube-system/coredns-854c77959c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-854c77959c-hfs5g"
I1228 16:45:02.344526 2850 event.go:291] "Event occurred" object="kube-system/metrics-server-86cbb8457f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-86cbb8457f-9bs2q"
E1228 16:45:02.618291 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1228 16:45:02.618836 2850 shared_informer.go:240] Waiting for caches to sync for resource quota
I1228 16:45:02.618977 2850 shared_informer.go:247] Caches are synced for resource quota
W1228 16:45:03.349237 2850 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1228 16:45:03.474281 2850 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1228 16:45:04.014395 2850 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1228 16:45:04.043834 2850 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1228 16:45:04.144805 2850 shared_informer.go:247] Caches are synced for garbage collector
I1228 16:45:04.187172 2850 shared_informer.go:247] Caches are synced for garbage collector
I1228 16:45:04.187340 2850 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W1228 16:45:08.393310 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:45:08.393592 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:45:08.393671 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:45:33.758378 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1228 16:45:35.895377 2850 request.go:655] Throttling request took 1.041272973s, request: GET:https://127.0.0.1:6444/apis/scheduling.k8s.io/v1beta1?timeout=32s
W1228 16:45:36.853566 2850 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
time="2020-12-28T16:45:39.724343219Z" level=info msg="Cluster-Http-Server 2020/12/28 16:45:39 http: TLS handshake error from 192.168.1.120:44132: remote error: tls: bad certificate"
time="2020-12-28T16:45:39.834524127Z" level=info msg="Cluster-Http-Server 2020/12/28 16:45:39 http: TLS handshake error from 192.168.1.142:37410: remote error: tls: bad certificate"
time="2020-12-28T16:45:39.972116136Z" level=info msg="Cluster-Http-Server 2020/12/28 16:45:39 http: TLS handshake error from 192.168.1.120:44144: remote error: tls: bad certificate"
time="2020-12-28T16:45:40.055789372Z" level=info msg="Cluster-Http-Server 2020/12/28 16:45:40 http: TLS handshake error from 192.168.1.142:37422: remote error: tls: bad certificate"
time="2020-12-28T16:45:41.713707318Z" level=info msg="certificate CN=rasp-2 signed by CN=k3s-server-ca@1609173832: notBefore=2020-12-28 16:43:52 +0000 UTC notAfter=2021-12-28 16:45:41 +0000 UTC"
time="2020-12-28T16:45:41.743388450Z" level=info msg="certificate CN=rasp-3 signed by CN=k3s-server-ca@1609173832: notBefore=2020-12-28 16:43:52 +0000 UTC notAfter=2021-12-28 16:45:41 +0000 UTC"
time="2020-12-28T16:45:43.253503284Z" level=info msg="certificate CN=system:node:rasp-3,O=system:nodes signed by CN=k3s-client-ca@1609173832: notBefore=2020-12-28 16:43:52 +0000 UTC notAfter=2021-12-28 16:45:43 +0000 UTC"
time="2020-12-28T16:45:43.284712214Z" level=info msg="certificate CN=system:node:rasp-2,O=system:nodes signed by CN=k3s-client-ca@1609173832: notBefore=2020-12-28 16:43:52 +0000 UTC notAfter=2021-12-28 16:45:43 +0000 UTC"
time="2020-12-28T16:45:46.542859941Z" level=info msg="Handling backend connection request [rasp-2]"
time="2020-12-28T16:45:46.555078474Z" level=info msg="Handling backend connection request [rasp-3]"
I1228 16:45:52.245286 2850 node_controller.go:390] Initializing node rasp-3 with cloud provider
time="2020-12-28T16:45:52.247851025Z" level=info msg="Couldn't find node internal ip label on node rasp-3"
time="2020-12-28T16:45:52.248052742Z" level=info msg="Couldn't find node hostname label on node rasp-3"
W1228 16:45:52.248155 2850 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="rasp-3" does not exist
W1228 16:45:52.308414 2850 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="rasp-2" does not exist
I1228 16:45:52.603117 2850 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1228 16:45:52.638851 2850 range_allocator.go:373] Set node rasp-2 PodCIDR to [10.42.1.0/24]
I1228 16:45:52.644275 2850 node_controller.go:454] Successfully initialized node rasp-3 with cloud provider
I1228 16:45:52.644582 2850 node_controller.go:390] Initializing node rasp-2 with cloud provider
time="2020-12-28T16:45:52.644742823Z" level=info msg="Couldn't find node internal ip label on node rasp-2"
time="2020-12-28T16:45:52.644895686Z" level=info msg="Couldn't find node hostname label on node rasp-2"
I1228 16:45:52.650128 2850 event.go:291] "Event occurred" object="rasp-3" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
time="2020-12-28T16:45:52.693038886Z" level=info msg="Updated coredns node hosts entry [192.168.1.142 rasp-3]"
I1228 16:45:52.728874 2850 range_allocator.go:373] Set node rasp-3 PodCIDR to [10.42.0.0/24]
time="2020-12-28T16:45:52.994699586Z" level=info msg="Updated coredns node hosts entry [192.168.1.120 rasp-2]"
I1228 16:45:53.251592 2850 node_controller.go:454] Successfully initialized node rasp-2 with cloud provider
I1228 16:45:53.251934 2850 node_controller.go:390] Initializing node rasp-2 with cloud provider
time="2020-12-28T16:45:53.252142517Z" level=info msg="Couldn't find node internal ip label on node rasp-2"
time="2020-12-28T16:45:53.252361681Z" level=info msg="Couldn't find node hostname label on node rasp-2"
I1228 16:45:53.281934 2850 event.go:291] "Event occurred" object="rasp-2" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
I1228 16:45:53.634557 2850 node_controller.go:454] Successfully initialized node rasp-2 with cloud provider
I1228 16:45:53.635138 2850 event.go:291] "Event occurred" object="rasp-2" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
I1228 16:45:56.085596 2850 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
I1228 16:45:56.085887 2850 event.go:291] "Event occurred" object="rasp-3" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node rasp-3 event: Registered Node rasp-3 in Controller"
I1228 16:45:56.086049 2850 event.go:291] "Event occurred" object="rasp-2" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node rasp-2 event: Registered Node rasp-2 in Controller"
W1228 16:45:56.086105 2850 node_lifecycle_controller.go:1044] Missing timestamp for Node rasp-3. Assuming now as a timestamp.
W1228 16:45:56.086451 2850 node_lifecycle_controller.go:1044] Missing timestamp for Node rasp-2. Assuming now as a timestamp.
I1228 16:45:56.086662 2850 node_lifecycle_controller.go:1245] Controller detected that zone is now in state Normal.
E1228 16:46:04.369824 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1228 16:46:06.526308 2850 event.go:291] "Event occurred" object="kube-system/local-path-provisioner-7c458769fb-sssm6" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/local-path-provisioner-7c458769fb-sssm6"
W1228 16:46:08.394308 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:46:08.394568 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:46:08.394648 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1228 16:46:08.595841 2850 request.go:655] Throttling request took 1.041617032s, request: GET:https://127.0.0.1:6444/apis/batch/v1beta1?timeout=32s
W1228 16:46:09.553558 2850 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I1228 16:46:23.402139 2850 event.go:291] "Event occurred" object="kube-system/helm-install-traefik" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: helm-install-traefik-g8tqb"
E1228 16:46:23.530355 2850 job_controller.go:402] Error syncing job: failed pod(s) detected for job key "kube-system/helm-install-traefik"
E1228 16:46:23.583184 2850 job_controller.go:402] Error syncing job: failed pod(s) detected for job key "kube-system/helm-install-traefik"
E1228 16:46:35.094992 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1228 16:46:41.300241 2850 request.go:655] Throttling request took 1.04134417s, request: GET:https://127.0.0.1:6444/apis/batch/v1beta1?timeout=32s
W1228 16:46:42.258277 2850 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1228 16:46:55.320869 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1228 16:46:56.325444 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:46:56.325736 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:46:56.325847 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:47:00.324997 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1228 16:47:05.327161 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1228 16:47:05.822355 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1228 16:47:10.369491 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W1228 16:47:11.374817 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:47:11.375096 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:47:11.375175 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1228 16:47:14.225289 2850 request.go:655] Throttling request took 1.039999182s, request: GET:https://127.0.0.1:6444/apis/coordination.k8s.io/v1?timeout=32s
W1228 16:47:15.183770 2850 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1228 16:47:15.393973 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
E1228 16:47:20.426829 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1228 16:47:21.432909 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:47:21.433181 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:47:21.433302 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:47:25.462143 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
E1228 16:47:30.482756 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
W1228 16:47:31.487354 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:47:31.487648 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:47:31.487726 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:47:35.504666 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
E1228 16:47:36.432031 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1228 16:47:40.536754 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1228 16:47:41.541782 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:47:41.542149 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:47:41.542297 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:47:46.205334 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
I1228 16:47:47.892539 2850 request.go:655] Throttling request took 1.040489762s, request: GET:https://127.0.0.1:6444/apis/networking.k8s.io/v1beta1?timeout=32s
W1228 16:47:48.850844 2850 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1228 16:47:51.232627 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
W1228 16:47:52.237284 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:47:52.237693 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:47:52.237794 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:47:57.293522 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.43.107.192:443: i/o timeout
W1228 16:47:58.300617 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:47:58.300908 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:47:58.300988 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:48:02.298440 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
E1228 16:48:08.206608 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1228 16:48:08.266304 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1228 16:48:09.268943 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:48:09.269282 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:48:09.269364 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:48:13.270858 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1228 16:48:18.273467 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1228 16:48:21.210121 2850 request.go:655] Throttling request took 1.04156605s, request: GET:https://127.0.0.1:6444/apis/authorization.k8s.io/v1?timeout=32s
W1228 16:48:22.168387 2850 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1228 16:48:23.276417 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1228 16:48:38.997352 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1228 16:48:41.868589 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1228 16:48:53.300683 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1228 16:48:54.278816 2850 request.go:655] Throttling request took 1.041157878s, request: GET:https://127.0.0.1:6444/apis/apiextensions.k8s.io/v1?timeout=32s
W1228 16:48:54.304909 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:48:54.305172 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:48:54.305361 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W1228 16:48:55.236817 2850 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1228 16:48:58.337510 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
W1228 16:48:59.340316 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:48:59.340605 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:48:59.340691 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:49:03.739128 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W1228 16:49:04.743673 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:49:04.743957 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:49:04.744037 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:49:08.767047 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
E1228 16:49:09.694302 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W1228 16:49:09.771966 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:49:09.772281 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:49:09.772378 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:49:14.855511 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
E1228 16:49:19.885198 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1228 16:49:20.889387 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:49:20.889659 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:49:20.889750 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:49:24.889841 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1228 16:49:27.789667 2850 request.go:655] Throttling request took 1.041652563s, request: GET:https://127.0.0.1:6444/apis/autoscaling/v1?timeout=32s
W1228 16:49:28.748287 2850 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1228 16:49:29.891766 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1228 16:49:37.141319 2850 trace.go:205] Trace[188756419]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:k3s/v1.20.0+k3s2 (linux/arm64) kubernetes/2ea6b16/system:serviceaccount:kube-system:job-controller,client:127.0.0.1 (28-Dec-2020 16:49:36.043) (total time: 1097ms):
Trace[188756419]: ---"Object stored in database" 1096ms (16:49:00.140)
Trace[188756419]: [1.097486466s] [1.097486466s] END
I1228 16:49:37.159397 2850 event.go:291] "Event occurred" object="kube-system/helm-install-traefik" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: helm-install-traefik-g2kx8"
E1228 16:49:37.255845 2850 job_controller.go:402] Error syncing job: failed pod(s) detected for job key "kube-system/helm-install-traefik"
E1228 16:49:37.310069 2850 job_controller.go:402] Error syncing job: failed pod(s) detected for job key "kube-system/helm-install-traefik"
W1228 16:49:37.873106 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:49:37.873417 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:49:37.873508 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:49:40.326324 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1228 16:49:41.895968 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
W1228 16:49:42.918404 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:49:42.918728 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:49:42.918888 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:49:47.050963 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1228 16:49:48.056089 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:49:48.056509 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:49:48.056591 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:49:52.055750 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1228 16:49:59.918301 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
I1228 16:50:00.858756 2850 request.go:655] Throttling request took 1.041550638s, request: GET:https://127.0.0.1:6444/apis/batch/v1?timeout=32s
W1228 16:50:00.922272 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:50:00.922529 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:50:00.922608 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W1228 16:50:01.816750 2850 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1228 16:50:04.954813 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1228 16:50:05.959183 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:50:05.959510 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:50:05.959590 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:50:10.426901 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.43.107.192:443: i/o timeout (Client.Timeout exceeded while awaiting headers)
E1228 16:50:10.995645 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W1228 16:50:11.432606 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:50:11.432889 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:50:11.432973 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:50:15.429906 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1228 16:50:21.606909 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1228 16:50:22.608920 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:50:22.609253 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:50:22.609336 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:50:26.612564 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1228 16:50:34.569278 2850 request.go:655] Throttling request took 1.041575013s, request: GET:https://127.0.0.1:6444/apis/events.k8s.io/v1beta1?timeout=32s
E1228 16:50:34.961438 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W1228 16:50:35.527295 2850 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
W1228 16:50:35.965118 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:50:35.965448 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:50:35.965537 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:50:39.988548 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1228 16:50:40.994258 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:50:40.994553 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:50:40.994634 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:50:41.724514 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1228 16:50:45.017302 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
E1228 16:50:50.019152 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1228 16:51:07.993619 2850 request.go:655] Throttling request took 1.040743666s, request: GET:https://127.0.0.1:6444/apis/scheduling.k8s.io/v1beta1?timeout=32s
W1228 16:51:08.951630 2850 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1228 16:51:09.963675 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1228 16:51:12.377947 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1228 16:51:14.965966 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1228 16:51:40.995244 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:51:40.995523 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:51:40.995649 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1228 16:51:41.147547 2850 request.go:655] Throttling request took 1.041246718s, request: GET:https://127.0.0.1:6444/apis/policy/v1beta1?timeout=32s
E1228 16:51:41.872902 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1228 16:51:42.105137 2850 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1228 16:51:42.986664 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1228 16:51:46.874687 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1228 16:52:12.792334 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
E1228 16:52:13.594490 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W1228 16:52:13.797200 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:52:13.797645 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:52:13.797728 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1228 16:52:14.482616 2850 request.go:655] Throttling request took 1.040987189s, request: GET:https://127.0.0.1:6444/apis/scheduling.k8s.io/v1?timeout=32s
W1228 16:52:15.440794 2850 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1228 16:52:17.795823 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1228 16:52:22.798779 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
E1228 16:52:41.894686 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1228 16:52:42.898900 2850 handler_proxy.go:102] no RequestInfo found in the context
E1228 16:52:42.899173 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1228 16:52:42.899391 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1228 16:52:44.425169 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request E1228 16:52:46.898182 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) I1228 16:52:47.889147 2850 request.go:655] Throttling request took 1.041198541s, request: GET:https://127.0.0.1:6444/apis/extensions/v1beta1?timeout=32s W1228 16:52:48.847453 2850 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request] E1228 16:52:51.899834 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) E1228 16:53:11.875475 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) E1228 16:53:15.167150 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request I1228 16:53:21.207036 2850 request.go:655] Throttling request took 1.041486559s, request: GET:https://127.0.0.1:6444/apis/autoscaling/v1?timeout=32s E1228 16:53:21.921191 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.43.107.192:443: i/o timeout W1228 16:53:22.164937 2850 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request] W1228 16:53:22.925706 2850 handler_proxy.go:102] no RequestInfo found in the context E1228 16:53:22.925982 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] I1228 16:53:22.926106 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. 
E1228 16:53:26.953657 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) W1228 16:53:27.956495 2850 handler_proxy.go:102] no RequestInfo found in the context E1228 16:53:27.956780 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] I1228 16:53:27.956860 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. E1228 16:53:31.956820 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) E1228 16:53:42.023181 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded W1228 16:53:43.034060 2850 handler_proxy.go:102] no RequestInfo found in the context E1228 16:53:43.034399 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] I1228 16:53:43.034483 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. 
E1228 16:53:45.830133 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request E1228 16:53:47.033472 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded I1228 16:53:54.684589 2850 request.go:655] Throttling request took 1.041027137s, request: GET:https://127.0.0.1:6444/apis/authorization.k8s.io/v1?timeout=32s W1228 16:53:55.643069 2850 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request] E1228 16:53:56.960849 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) W1228 16:53:57.965540 2850 handler_proxy.go:102] no RequestInfo found in the context E1228 16:53:57.965951 2850 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] I1228 16:53:57.966032 2850 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. E1228 16:54:01.965006 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) E1228 16:54:11.875041 2850 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.107.192:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) E1228 16:54:16.440380 2850 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
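All of the repeated failures in the server log above are the apiserver timing out while trying to reach the metrics-server Service ClusterIP (10.43.107.192:443), which points at broken pod/service networking between the nodes rather than at metrics-server itself. A quick way to narrow it down, assuming kubectl is pointed at this cluster (the k8s-app=metrics-server label is what the bundled k3s manifest normally uses, so treat it as an assumption if the deployment has been customized):

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl -n kube-system get pods -o wide -l k8s-app=metrics-server
kubectl -n kube-system get svc metrics-server

If the pod is Running but the APIService stays unavailable, the overlay network (flannel/VXLAN) between master and agents is the usual suspect.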
Updated log from the agent:
time="2020-12-28T16:45:06Z" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock" time="2020-12-28T16:45:06Z" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/477a397d5ed8de53c8cee46d5f565f81e93de8dadbff870dc27452b47238f6f5" time="2020-12-28T16:45:39.153495442Z" level=info msg="Starting k3s agent v1.20.0+k3s2 (2ea6b163)" time="2020-12-28T16:45:39.154820371Z" level=info msg="Module overlay was already loaded" time="2020-12-28T16:45:39.444577277Z" level=info msg="Running load balancer 127.0.0.1:46415 -> [192.168.1.143:6443]" time="2020-12-28T16:45:43.369718337Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log" time="2020-12-28T16:45:43.370248695Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd" time="2020-12-28T16:45:44.373526761Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\"" time="2020-12-28T16:45:45.375559531Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\"" time="2020-12-28T16:45:46.387484303Z" level=info msg="Containerd is now running" time="2020-12-28T16:45:46.503215182Z" level=info msg="Connecting to proxy" url="wss://192.168.1.143:6443/v1-k3s/connect" time="2020-12-28T16:45:46.546842038Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us" time="2020-12-28T16:45:46.547443124Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/477a397d5ed8de53c8cee46d5f565f81e93de8dadbff870dc27452b47238f6f5/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=unix:///run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=rasp-2 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key" time="2020-12-28T16:45:46.553416220Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=rasp-2 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables" Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet. Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. 
Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed. W1228 16:45:46.556081 2869 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP. I1228 16:45:46.560361 2869 server.go:412] Version: v1.20.0+k3s2 time="2020-12-28T16:45:46.745465960Z" level=info msg="Waiting for node rasp-2: nodes \"rasp-2\" not found" I1228 16:45:46.934701 2869 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt E1228 16:45:47.012031 2869 node.go:161] Failed to retrieve node info: nodes "rasp-2" not found E1228 16:45:48.194400 2869 node.go:161] Failed to retrieve node info: nodes "rasp-2" not found time="2020-12-28T16:45:48.821689429Z" level=info msg="Waiting for node rasp-2: nodes \"rasp-2\" not found" E1228 16:45:50.427564 2869 node.go:161] Failed to retrieve node info: nodes "rasp-2" not found time="2020-12-28T16:45:50.922539647Z" level=info msg="Waiting for node rasp-2: nodes \"rasp-2\" not found" W1228 16:45:51.945475 2869 nvidia.go:81] Error reading "/sys/bus/pci/devices/": open /sys/bus/pci/devices/: no such file or directory W1228 16:45:51.951828 2869 sysinfo.go:203] Nodes topology is not available, providing CPU topology W1228 16:45:51.953267 2869 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory W1228 16:45:51.953495 2869 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu1/online: open /sys/devices/system/cpu/cpu1/online: no such file or directory W1228 16:45:51.953679 2869 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu2/online: open /sys/devices/system/cpu/cpu2/online: no such file or directory W1228 16:45:51.953882 2869 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu3/online: open /sys/devices/system/cpu/cpu3/online: no such file or directory E1228 16:45:51.954274 2869 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory W1228 16:45:51.955213 2869 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping W1228 16:45:51.955366 2869 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping W1228 16:45:51.955498 2869 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping W1228 16:45:51.955629 2869 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping E1228 16:45:51.955742 2869 machine.go:72] Cannot read number of physical cores correctly, number of cores set to 0 W1228 16:45:51.956476 2869 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping W1228 16:45:51.956610 2869 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping W1228 16:45:51.956736 2869 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping W1228 16:45:51.956872 2869 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping E1228 16:45:51.956916 2869 machine.go:86] Cannot read number of sockets correctly, number of sockets set to 0 W1228 16:45:51.957074 2869 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id" I1228 16:45:51.959966 2869 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to / I1228 16:45:51.961801 2869 container_manager_linux.go:287] container manager verified user specified cgroup-root exists: [] I1228 16:45:51.961917 2869 container_manager_linux.go:292] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:Percentage:0.05} GracePeriod:0s MinReclaim: } {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim: }]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false} I1228 16:45:51.962506 2869 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope I1228 16:45:51.962628 2869 container_manager_linux.go:323] [topologymanager] Initializing Topology Manager with none policy and container-level scope I1228 16:45:51.962686 2869 container_manager_linux.go:328] Creating device plugin manager: true I1228 16:45:51.964075 2869 kubelet.go:262] Adding pod path: /var/lib/rancher/k3s/agent/pod-manifests I1228 16:45:51.964405 2869 kubelet.go:273] Watching apiserver I1228 16:45:51.972725 2869 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.3-k3s1, apiVersion: v1alpha2 I1228 16:45:51.994305 2869 server.go:1177] Started kubelet E1228 16:45:52.005888 2869 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.native": unable to find data in memory cache. E1228 16:45:52.006059 2869 kubelet.go:1271] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem I1228 16:45:52.010871 2869 server.go:148] Starting to listen on 0.0.0.0:10250 I1228 16:45:52.019109 2869 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer I1228 16:45:52.019741 2869 server.go:409] Adding debug handlers to kubelet server. 
I1228 16:45:52.033730 2869 volume_manager.go:271] Starting Kubelet Volume Manager I1228 16:45:52.036948 2869 desired_state_of_world_populator.go:142] Desired state populator starts to run I1228 16:45:52.128235 2869 cpu_manager.go:193] [cpumanager] starting with none policy I1228 16:45:52.128341 2869 cpu_manager.go:194] [cpumanager] reconciling every 10s I1228 16:45:52.128558 2869 state_mem.go:36] [cpumanager] initializing new in-memory state store I1228 16:45:52.129738 2869 policy_none.go:43] [cpumanager] none policy: Start E1228 16:45:52.134391 2869 kubelet.go:2240] node "rasp-2" not found I1228 16:45:52.141248 2869 kubelet_node_status.go:71] Attempting to register node rasp-2 W1228 16:45:52.222537 2869 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found I1228 16:45:52.226421 2869 plugin_manager.go:114] Starting Kubelet Plugin Manager E1228 16:45:52.232730 2869 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "rasp-2" not found E1228 16:45:52.235188 2869 kubelet.go:2240] node "rasp-2" not found I1228 16:45:52.298046 2869 kubelet_node_status.go:74] Successfully registered node rasp-2 I1228 16:45:52.636653 2869 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.42.1.0/24 I1228 16:45:52.639242 2869 kubelet_network.go:77] Setting Pod CIDR: -> 10.42.1.0/24 I1228 16:45:52.774064 2869 kubelet_network_linux.go:56] Initialized IPv4 iptables rules. I1228 16:45:52.774391 2869 status_manager.go:158] Starting to sync pod status with apiserver I1228 16:45:52.774523 2869 kubelet.go:1799] Starting kubelet main sync loop. E1228 16:45:52.774760 2869 kubelet.go:1823] skipping pod synchronization - PLEG is not healthy: pleg has yet to be successful I1228 16:45:52.941504 2869 reconciler.go:157] Reconciler: start to sync state time="2020-12-28T16:45:53.049929297Z" level=info msg="Node CIDR assigned for: rasp-2" I1228 16:45:53.050403 2869 flannel.go:92] Determining IP address of default interface I1228 16:45:53.052907 2869 flannel.go:105] Using interface with name wlan0 and address 192.168.1.120 I1228 16:45:53.076282 2869 kube.go:117] Waiting 10m0s for node controller to sync I1228 16:45:53.076369 2869 kube.go:300] Starting kube subnet manager time="2020-12-28T16:45:53.349078159Z" level=info msg="Failed to update node rasp-2: nodes \"rasp-2\" is forbidden: node \"rasp-2\" is not allowed to modify taints" I1228 16:45:54.076779 2869 kube.go:124] Node controller sync successful I1228 16:45:54.077306 2869 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false I1228 16:45:54.361523 2869 flannel.go:78] Wrote subnet file to /run/flannel/subnet.env I1228 16:45:54.361631 2869 flannel.go:82] Running backend. I1228 16:45:54.361683 2869 vxlan_network.go:60] watching for new subnet leases I1228 16:45:54.370405 2869 iptables.go:145] Some iptables rules are missing; deleting and recreating rules I1228 16:45:54.370521 2869 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN I1228 16:45:54.372916 2869 iptables.go:145] Some iptables rules are missing; deleting and recreating rules I1228 16:45:54.373027 2869 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT I1228 16:45:54.374734 2869 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! 
-d 224.0.0.0/4 -j MASQUERADE --random-fully I1228 16:45:54.377400 2869 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT I1228 16:45:54.379739 2869 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.1.0/24 -j RETURN I1228 16:45:54.381810 2869 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT I1228 16:45:54.384358 2869 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully I1228 16:45:54.389062 2869 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN I1228 16:45:54.392483 2869 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT I1228 16:45:54.402623 2869 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully I1228 16:45:54.412253 2869 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.1.0/24 -j RETURN I1228 16:45:54.420667 2869 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully time="2020-12-28T16:45:54.460312574Z" level=info msg="labels have been set successfully on node: rasp-2" I1228 16:45:55.440974 2869 node.go:172] Successfully retrieved node IP: 192.168.1.120 I1228 16:45:55.441255 2869 server_others.go:143] kube-proxy node IP is an IPv4 address (192.168.1.120), assume IPv4 operation I1228 16:45:55.446258 2869 server_others.go:186] Using iptables Proxier. I1228 16:45:55.448384 2869 server.go:650] Version: v1.20.0+k3s2 I1228 16:45:55.452282 2869 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 I1228 16:45:55.452816 2869 conntrack.go:52] Setting nf_conntrack_max to 131072 I1228 16:45:55.454875 2869 conntrack.go:83] Setting conntrack hashsize to 32768 I1228 16:45:55.462752 2869 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I1228 16:45:55.463394 2869 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I1228 16:45:55.464437 2869 config.go:224] Starting endpoint slice config controller I1228 16:45:55.464540 2869 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I1228 16:45:55.464779 2869 config.go:315] Starting service config controller I1228 16:45:55.464828 2869 shared_informer.go:240] Waiting for caches to sync for service config I1228 16:45:55.564923 2869 shared_informer.go:247] Caches are synced for endpoint slice config I1228 16:45:55.565199 2869 shared_informer.go:247] Caches are synced for service config I1228 16:45:55.574420 2869 network_policy_controller.go:149] Starting network policy controller I1228 16:46:03.739720 2869 topology_manager.go:187] [topologymanager] Topology Admit Handler I1228 16:46:03.768288 2869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "local-path-provisioner-service-account-token-wnmth" (UniqueName: "kubernetes.io/secret/36077827-f878-46c6-9474-575c48df3f94-local-path-provisioner-service-account-token-wnmth") pod "local-path-provisioner-7c458769fb-sssm6" (UID: "36077827-f878-46c6-9474-575c48df3f94") I1228 16:46:03.768568 2869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/36077827-f878-46c6-9474-575c48df3f94-config-volume") pod "local-path-provisioner-7c458769fb-sssm6" (UID: "36077827-f878-46c6-9474-575c48df3f94") I1228 16:46:23.450374 2869 topology_manager.go:187] [topologymanager] Topology Admit Handler I1228 16:46:23.545229 2869 
reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "values" (UniqueName: "kubernetes.io/configmap/897177e6-03db-4a81-b7b1-177d6d949819-values") pod "helm-install-traefik-g8tqb" (UID: "897177e6-03db-4a81-b7b1-177d6d949819") I1228 16:46:23.546250 2869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "content" (UniqueName: "kubernetes.io/configmap/897177e6-03db-4a81-b7b1-177d6d949819-content") pod "helm-install-traefik-g8tqb" (UID: "897177e6-03db-4a81-b7b1-177d6d949819") I1228 16:46:23.547237 2869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "helm-traefik-token-fvjsz" (UniqueName: "kubernetes.io/secret/897177e6-03db-4a81-b7b1-177d6d949819-helm-traefik-token-fvjsz") pod "helm-install-traefik-g8tqb" (UID: "897177e6-03db-4a81-b7b1-177d6d949819") E1228 16:46:47.176310 2869 remote_image.go:113] PullImage "rancher/klipper-helm:v0.3.2" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/klipper-helm:v0.3.2": failed to extract layer sha256:d870250d9815ed31516d3a031055aa9a05349fe9f15a182dea54a59916ac563b: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount571813358: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount571813358/usr/bin/helm_v3: no space left on device: unknown E1228 16:46:47.176539 2869 kuberuntime_image.go:51] Pull image "rancher/klipper-helm:v0.3.2" failed: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/klipper-helm:v0.3.2": failed to extract layer sha256:d870250d9815ed31516d3a031055aa9a05349fe9f15a182dea54a59916ac563b: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount571813358: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount571813358/usr/bin/helm_v3: no space left on device: unknown E1228 16:46:47.177385 2869 kuberuntime_manager.go:829] container &Container{Name:helm,Image:rancher/klipper-helm:v0.3.2,Command:[],Args:[install],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NAME,Value:traefik,ValueFrom:nil,},EnvVar{Name:VERSION,Value:,ValueFrom:nil,},EnvVar{Name:REPO,Value:,ValueFrom:nil,},EnvVar{Name:HELM_DRIVER,Value:secret,ValueFrom:nil,},EnvVar{Name:CHART_NAMESPACE,Value:kube-system,ValueFrom:nil,},EnvVar{Name:CHART,Value:https://%{KUBERNETES_API}%/static/charts/traefik-1.81.0.tgz,ValueFrom:nil,},EnvVar{Name:HELM_VERSION,Value:,ValueFrom:nil,},EnvVar{Name:NO_PROXY,Value:.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:values,ReadOnly:false,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:content,ReadOnly:false,MountPath:/chart,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:helm-traefik-token-fvjsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod helm-install-traefik-g8tqb_kube-system(897177e6-03db-4a81-b7b1-177d6d949819): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack 
image "docker.io/rancher/klipper-helm:v0.3.2": failed to extract layer sha256:d870250d9815ed31516d3a031055aa9a05349fe9f15a182dea54a59916ac563b: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount57181E1228 16:46:48.064895 2869 pod_workers.go:191] Error syncing pod 897177e6-03db-4a81-b7b1-177d6d949819 ("helm-install-traefik-g8tqb_kube-system(897177e6-03db-4a81-b7b1-177d6d949819)"), skipping: failed to "StartContainer" for "helm" with ImagePullBackOff: "Back-off pulling image \"rancher/klipper-helm:v0.3.2\"" E1228 16:47:22.630019 2869 remote_image.go:113] PullImage "rancher/klipper-helm:v0.3.2" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/klipper-helm:v0.3.2": failed to extract layer sha256:d870250d9815ed31516d3a031055aa9a05349fe9f15a182dea54a59916ac563b: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount589282692: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount589282692/usr/bin/helm_v3: no space left on device: unknown E1228 16:47:22.630301 2869 kuberuntime_image.go:51] Pull image "rancher/klipper-helm:v0.3.2" failed: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/klipper-helm:v0.3.2": failed to extract layer sha256:d870250d9815ed31516d3a031055aa9a05349fe9f15a182dea54a59916ac563b: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount589282692: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount589282692/usr/bin/helm_v3: no space left on device: unknown E1228 16:47:22.631249 2869 kuberuntime_manager.go:829] container &Container{Name:helm,Image:rancher/klipper-helm:v0.3.2,Command:[],Args:[install],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NAME,Value:traefik,ValueFrom:nil,},EnvVar{Name:VERSION,Value:,ValueFrom:nil,},EnvVar{Name:REPO,Value:,ValueFrom:nil,},EnvVar{Name:HELM_DRIVER,Value:secret,ValueFrom:nil,},EnvVar{Name:CHART_NAMESPACE,Value:kube-system,ValueFrom:nil,},EnvVar{Name:CHART,Value:https://%{KUBERNETES_API}%/static/charts/traefik-1.81.0.tgz,ValueFrom:nil,},EnvVar{Name:HELM_VERSION,Value:,ValueFrom:nil,},EnvVar{Name:NO_PROXY,Value:.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:values,ReadOnly:false,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:content,ReadOnly:false,MountPath:/chart,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:helm-traefik-token-fvjsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod helm-install-traefik-g8tqb_kube-system(897177e6-03db-4a81-b7b1-177d6d949819): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/klipper-helm:v0.3.2": failed to extract layer sha256:d870250d9815ed31516d3a031055aa9a05349fe9f15a182dea54a59916ac563b: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount589282692: write 
/var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount589282692/usr/bin/helm_v3: no space left on device: unknown E1228 16:47:22.631588 2869 pod_workers.go:191] Error syncing pod 897177e6-03db-4a81-b7b1-177d6d949819 ("helm-install-traefik-g8tqb_kube-system(897177e6-03db-4a81-b7b1-177d6d949819)"), skipping: failed to "StartContainer" for "helm" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/rancher/klipper-helm:v0.3.2\": failed to extract layer sha256:d870250d9815ed31516d3a031055aa9a05349fe9f15a182dea54a59916ac563b: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount589282692: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount589282692/usr/bin/helm_v3: no space left on device: unknown" E1228 16:47:33.779641 2869 pod_workers.go:191] Error syncing pod 897177e6-03db-4a81-b7b1-177d6d949819 ("helm-install-traefik-g8tqb_kube-system(897177e6-03db-4a81-b7b1-177d6d949819)"), skipping: failed to "StartContainer" for "helm" with ImagePullBackOff: "Back-off pulling image \"rancher/klipper-helm:v0.3.2\"" E1228 16:48:09.073659 2869 remote_image.go:113] PullImage "rancher/klipper-helm:v0.3.2" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/klipper-helm:v0.3.2": failed to extract layer sha256:d870250d9815ed31516d3a031055aa9a05349fe9f15a182dea54a59916ac563b: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount670747882: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount670747882/usr/bin/helm_v3: no space left on device: unknown E1228 16:48:09.073859 2869 kuberuntime_image.go:51] Pull image "rancher/klipper-helm:v0.3.2" failed: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/klipper-helm:v0.3.2": failed to extract layer sha256:d870250d9815ed31516d3a031055aa9a05349fe9f15a182dea54a59916ac563b: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount670747882: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount670747882/usr/bin/helm_v3: no space left on device: unknown E1228 16:48:09.074612 2869 kuberuntime_manager.go:829] container 
&Container{Name:helm,Image:rancher/klipper-helm:v0.3.2,Command:[],Args:[install],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NAME,Value:traefik,ValueFrom:nil,},EnvVar{Name:VERSION,Value:,ValueFrom:nil,},EnvVar{Name:REPO,Value:,ValueFrom:nil,},EnvVar{Name:HELM_DRIVER,Value:secret,ValueFrom:nil,},EnvVar{Name:CHART_NAMESPACE,Value:kube-system,ValueFrom:nil,},EnvVar{Name:CHART,Value:https://%{KUBERNETES_API}%/static/charts/traefik-1.81.0.tgz,ValueFrom:nil,},EnvVar{Name:HELM_VERSION,Value:,ValueFrom:nil,},EnvVar{Name:NO_PROXY,Value:.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:values,ReadOnly:false,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:content,ReadOnly:false,MountPath:/chart,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:helm-traefik-token-fvjsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod helm-install-traefik-g8tqb_kube-system(897177e6-03db-4a81-b7b1-177d6d949819): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/klipper-helm:v0.3.2": failed to extract layer sha256:d870250d9815ed31516d3a031055aa9a05349fe9f15a182dea54a59916ac563b: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount670747882: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount670747882/usr/bin/helm_v3: no space left on device: unknown E1228 16:48:09.074907 2869 pod_workers.go:191] Error syncing pod 897177e6-03db-4a81-b7b1-177d6d949819 ("helm-install-traefik-g8tqb_kube-system(897177e6-03db-4a81-b7b1-177d6d949819)"), skipping: failed to "StartContainer" for "helm" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/rancher/klipper-helm:v0.3.2\": failed to extract layer sha256:d870250d9815ed31516d3a031055aa9a05349fe9f15a182dea54a59916ac563b: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount670747882: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount670747882/usr/bin/helm_v3: no space left on device: unknown" E1228 16:48:24.779749 2869 pod_workers.go:191] Error syncing pod 897177e6-03db-4a81-b7b1-177d6d949819 ("helm-install-traefik-g8tqb_kube-system(897177e6-03db-4a81-b7b1-177d6d949819)"), skipping: failed to "StartContainer" for "helm" with ImagePullBackOff: "Back-off pulling image \"rancher/klipper-helm:v0.3.2\"" E1228 16:48:38.779875 2869 pod_workers.go:191] Error syncing pod 897177e6-03db-4a81-b7b1-177d6d949819 ("helm-install-traefik-g8tqb_kube-system(897177e6-03db-4a81-b7b1-177d6d949819)"), skipping: failed to "StartContainer" for "helm" with ImagePullBackOff: "Back-off pulling image \"rancher/klipper-helm:v0.3.2\""
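The actual fatal error in the agent log is the image pull failing with "no space left on device" while extracting layers under /var/lib/rancher/k3s/agent/containerd. Before suspecting anything network-related it's worth checking both free space and free inodes on that filesystem (inode exhaustion also surfaces as ENOSPC):

df -h /var/lib/rancher/k3s
df -i /var/lib/rancher/k3s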
Most of these errors are very likely caused by the overlay filesystem I'm using, so I'm closing this issue. Thanks for the help @brandond
It is normal to see TLS handshake errors when adding new nodes; part of the node join code validates the server certificate in a couple of different ways, and one of those steps generates that message.
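If you want to sanity-check the certificate the supervisor is actually serving, something along these lines works from any machine that can reach the server (assuming openssl is installed; 192.168.1.143 is the master IP from this thread):

openssl s_client -connect 192.168.1.143:6443 -showcerts </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates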
Environmental Info:
K3s Version: v1.20.0+k3s2
Node(s) CPU architecture, OS, and Version: Alpine Linux ARM64
Cluster Configuration: 1 master, 2 workers running on Raspberry Pi 3B+ with Alpine Linux
Describe the bug:
I'm no longer able to run the cluster using the config from https://github.com/k3s-io/k3s-ansible after upgrading from v1.17.5+k3s1 to v1.20.0+k3s2.
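(In case it helps reproduce: if I remember the layout of that playbook correctly, the installed version is pinned by a k3s_version variable somewhere under the inventory group vars; the exact file and variable name are from memory, so verify with a search like the one below from the repo root.)

grep -Rn "k3s_version" inventory/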
Steps To Reproduce:
Expected behavior:
The worker nodes should be able to join the cluster using the node-token. I've already checked that the token is correct when running k3s agent.
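For reference, the standard join invocation looks like the sketch below; the token file path is the documented default location on the server, and the URL uses the master IP from this thread:

cat /var/lib/rancher/k3s/server/node-token        # on the master, prints the node-token
k3s agent --server https://192.168.1.143:6443 --token <node-token>   # on each worker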
Actual behavior:
The worker cannot join the cluster. Usually the TLS handshake error happens when I have the wrong .kube/config file, but AFAIK that file isn't needed for the workers.
Additional context / logs:
Logs from one of the nodes (hostname rasp-2, ip 192.168.1.120):
On the master: