k3d-io/k3d

Little helper to run CNCF's k3s in Docker
https://k3d.io/
MIT License

[BUG] Cluster create gets stuck while waiting for serverlb logs that exist already #621

Closed: jgresty closed this issue 3 years ago

jgresty commented 3 years ago

Possibly related to https://github.com/rancher/k3d/issues/592: the cluster create command produces the same output, but the container logs differ.

The cluster was created using the workarounds described in the FAQ and with Traefik skipped:

k3d cluster create --k3s-server-arg "--kube-proxy-arg=conntrack-max-per-core=0" --k3s-agent-arg "--kube-proxy-arg=conntrack-max-per-core=0" --image rancher/k3s:v1.20.6-k3s1 -v /dev/mapper:/dev/mapper --k3s-server-arg "--no-deploy=traefik" --trace

Cluster creation halted at:

INFO[0005] Starting Node 'k3d-k3s-default-serverlb'
DEBU[0006] Waiting for node k3d-k3s-default-serverlb to get ready (Log: 'start worker processes')
TRAC[0006] NodeWaitForLogMessage: Node 'k3d-k3s-default-serverlb' waiting for log message 'start worker processes' since '2021-06-03 13:20:56.510628562 +0000 UTC'
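To double-check whether the message k3d is waiting for is actually visible after that timestamp, the load balancer logs can be filtered manually (container name and since-timestamp taken from the trace above):

docker logs --since "2021-06-03T13:20:56.510628562Z" k3d-k3s-default-serverlb 2>&1 | grep 'start worker processes'

If this prints nothing, the line was logged before the instant k3d started waiting from; if it matches, the message should be discoverable through the Docker API.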

docker logs k3d-k3s-default-serverlb

2021-06-03T13:20:56Z k3d-k3s-default-serverlb confd[8]: INFO Backend set to env
2021-06-03T13:20:56Z k3d-k3s-default-serverlb confd[8]: INFO Starting confd
2021-06-03T13:20:56Z k3d-k3s-default-serverlb confd[8]: INFO Backend source(s) set to
2021-06-03T13:20:56Z k3d-k3s-default-serverlb confd[8]: INFO /etc/nginx/nginx.conf has md5sum 9c9065b8e74f4b01f6eed5a7af1141b6 should be ab00b91435ed084fe99763a1cc04db57
2021-06-03T13:20:56Z k3d-k3s-default-serverlb confd[8]: INFO Target config /etc/nginx/nginx.conf out of sync
2021-06-03T13:20:56Z k3d-k3s-default-serverlb confd[8]: INFO Target config /etc/nginx/nginx.conf has been updated
===== Initial nginx configuration =====
error_log stderr notice;

worker_processes auto;
events {
  multi_accept on;
  use epoll;
  worker_connections 1025;
}

stream {

  #######
  # TCP #
  #######
  upstream server_nodes_6443 {
    server k3d-k3s-default-server-0:6443 max_fails=1 fail_timeout=10s;
  }

  server {
    listen        6443;
    proxy_pass    server_nodes_6443;
    proxy_timeout 600;
    proxy_connect_timeout 2s;
  }

  #######
  # UDP #
  #######
}
=======================================
2021/06/03 13:20:56 [notice] 14#14: using the "epoll" event method
2021/06/03 13:20:56 [notice] 14#14: nginx/1.19.10
2021/06/03 13:20:56 [notice] 14#14: built by gcc 10.2.1 20201203 (Alpine 10.2.1_pre1)
2021/06/03 13:20:56 [notice] 14#14: OS: Linux 5.12.8-arch1-1
2021/06/03 13:20:56 [notice] 14#14: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2021/06/03 13:20:56 [notice] 14#14: start worker processes
2021/06/03 13:20:56 [notice] 14#14: start worker process 15
2021/06/03 13:20:56 [notice] 14#14: start worker process 16
2021/06/03 13:20:56 [notice] 14#14: start worker process 17
2021/06/03 13:20:56 [notice] 14#14: start worker process 18
2021/06/03 13:20:56 [notice] 14#14: start worker process 19
2021/06/03 13:20:56 [notice] 14#14: start worker process 20
2021/06/03 13:20:56 [notice] 14#14: start worker process 21
2021/06/03 13:20:56 [notice] 14#14: start worker process 22
2021/06/03 13:20:56 [notice] 14#14: start worker process 23
2021/06/03 13:20:56 [notice] 14#14: start worker process 24
2021/06/03 13:20:56 [notice] 14#14: start worker process 25
2021/06/03 13:20:56 [notice] 14#14: start worker process 26
2021/06/03 13:20:56 [notice] 14#14: start worker process 27
2021/06/03 13:20:56 [notice] 14#14: start worker process 28
2021/06/03 13:20:56 [notice] 14#14: start worker process 29
2021/06/03 13:20:56 [notice] 14#14: start worker process 30
2021/06/03 13:20:56 [notice] 14#14: start worker process 31
2021/06/03 13:20:56 [notice] 14#14: start worker process 32
2021/06/03 13:20:56 [notice] 14#14: start worker process 33
2021/06/03 13:20:56 [notice] 14#14: start worker process 34
2021/06/03 13:20:56 [notice] 14#14: start worker process 35
2021/06/03 13:20:56 [notice] 14#14: start worker process 36
2021/06/03 13:20:56 [notice] 14#14: start worker process 37
2021/06/03 13:20:56 [notice] 14#14: start worker process 38

docker logs k3d-k3s-default-server-0

time="2021-06-03T13:20:51.891944843Z" level=info msg="Starting k3s v1.20.6+k3s1 (8d043282)"
time="2021-06-03T13:20:51.900352324Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2021-06-03T13:20:51.900377103Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2021-06-03T13:20:51.902372372Z" level=info msg="Database tables and indexes are up to date"
time="2021-06-03T13:20:51.902918141Z" level=info msg="Kine listening on unix://kine.sock"
time="2021-06-03T13:20:51.911833526Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1622726451: notBefore=2021-06-03 13:20:51 +0000 UTC notAfter=2022-06-03 13:20:51 +0000 UTC"
time="2021-06-03T13:20:51.912226744Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1622726451: notBefore=2021-06-03 13:20:51 +0000 UTC notAfter=2022-06-03 13:20:51 +0000 UTC"
time="2021-06-03T13:20:51.912600203Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1622726451: notBefore=2021-06-03 13:20:51 +0000 UTC notAfter=2022-06-03 13:20:51 +0000 UTC"
time="2021-06-03T13:20:51.912966073Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1622726451: notBefore=2021-06-03 13:20:51 +0000 UTC notAfter=2022-06-03 13:20:51 +0000 UTC"
time="2021-06-03T13:20:51.913344172Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1622726451: notBefore=2021-06-03 13:20:51 +0000 UTC notAfter=2022-06-03 13:20:51 +0000 UTC"
time="2021-06-03T13:20:51.913690782Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1622726451: notBefore=2021-06-03 13:20:51 +0000 UTC notAfter=2022-06-03 13:20:51 +0000 UTC"
time="2021-06-03T13:20:51.914093930Z" level=info msg="certificate CN=cloud-controller-manager signed by CN=k3s-client-ca@1622726451: notBefore=2021-06-03 13:20:51 +0000 UTC notAfter=2022-06-03 13:20:51 +0000 UTC"
time="2021-06-03T13:20:51.914746983Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1622726451: notBefore=2021-06-03 13:20:51 +0000 UTC notAfter=2022-06-03 13:20:51 +0000 UTC"
time="2021-06-03T13:20:51.915291753Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1622726451: notBefore=2021-06-03 13:20:51 +0000 UTC notAfter=2022-06-03 13:20:51 +0000 UTC"
time="2021-06-03T13:20:51.915725889Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1622726451: notBefore=2021-06-03 13:20:51 +0000 UTC notAfter=2022-06-03 13:20:51 +0000 UTC"
time="2021-06-03T13:20:51.915979945Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1622726451: notBefore=2021-06-03 13:20:51 +0000 UTC notAfter=2022-06-03 13:20:51 +0000 UTC"
time="2021-06-03T13:20:51.916409591Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1622726451: notBefore=2021-06-03 13:20:51 +0000 UTC notAfter=2022-06-03 13:20:51 +0000 UTC"
time="2021-06-03T13:20:52.105637463Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1622726451: notBefore=2021-06-03 13:20:51 +0000 UTC notAfter=2022-06-03 13:20:52 +0000 UTC"
time="2021-06-03T13:20:52.105839402Z" level=info msg="Active TLS secret  (ver=) (count 8): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.21.0.2:172.21.0.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=D03B17AFDE02673826FBE68A6911078E08F08374]"
time="2021-06-03T13:20:52.108488405Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --feature-gates=ServiceAccountIssuerDiscovery=false --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I0603 13:20:52.109270       7 server.go:659] external host was not specified, using 172.21.0.2
I0603 13:20:52.109435       7 server.go:196] Version: v1.20.6+k3s1
I0603 13:20:52.241259       7 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I0603 13:20:52.241768       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0603 13:20:52.241775       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0603 13:20:52.242254       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0603 13:20:52.242259       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0603 13:20:52.254401       7 instance.go:289] Using reconciler: lease
I0603 13:20:52.301841       7 rest.go:131] the default service ipfamily for this cluster is: IPv4
W0603 13:20:52.477289       7 genericapiserver.go:425] Skipping API batch/v2alpha1 because it has no resources.
W0603 13:20:52.486153       7 genericapiserver.go:425] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0603 13:20:52.491440       7 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0603 13:20:52.495341       7 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0603 13:20:52.496967       7 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0603 13:20:52.499499       7 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0603 13:20:52.500617       7 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0603 13:20:52.502898       7 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
W0603 13:20:52.502904       7 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
I0603 13:20:52.506875       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0603 13:20:52.506884       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
time="2021-06-03T13:20:52.512496089Z" level=info msg="Waiting for API server to become available"
time="2021-06-03T13:20:52.512498899Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
time="2021-06-03T13:20:52.512902466Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2021-06-03T13:20:52.513660874Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2021-06-03T13:20:52.513704402Z" level=info msg="To join node to cluster: k3s agent -s https://172.21.0.2:6443 -t ${NODE_TOKEN}"
time="2021-06-03T13:20:52.514583893Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2021-06-03T13:20:52.514659858Z" level=info msg="Run: k3s kubectl"
time="2021-06-03T13:20:52.514743124Z" level=info msg="Module overlay was already loaded"
time="2021-06-03T13:20:52.514758503Z" level=info msg="Module nf_conntrack was already loaded"
time="2021-06-03T13:20:52.514768262Z" level=info msg="Module br_netfilter was already loaded"
time="2021-06-03T13:20:52.514778062Z" level=info msg="Module iptable_nat was already loaded"
time="2021-06-03T13:20:52.529427977Z" level=info msg="Cluster-Http-Server 2021/06/03 13:20:52 http: TLS handshake error from 127.0.0.1:44928: remote error: tls: bad certificate"
time="2021-06-03T13:20:52.531497531Z" level=info msg="Cluster-Http-Server 2021/06/03 13:20:52 http: TLS handshake error from 127.0.0.1:44934: remote error: tls: bad certificate"
time="2021-06-03T13:20:52.536702973Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1622726451: notBefore=2021-06-03 13:20:51 +0000 UTC notAfter=2022-06-03 13:20:52 +0000 UTC"
time="2021-06-03T13:20:52.538087536Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1622726451: notBefore=2021-06-03 13:20:51 +0000 UTC notAfter=2022-06-03 13:20:52 +0000 UTC"
time="2021-06-03T13:20:52.574689769Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2021-06-03T13:20:52.574800802Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
I0603 13:20:53.547422       7 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
I0603 13:20:53.547438       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I0603 13:20:53.547531       7 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
I0603 13:20:53.547726       7 secure_serving.go:197] Serving securely on 127.0.0.1:6444
I0603 13:20:53.547808       7 naming_controller.go:291] Starting NamingConditionController
I0603 13:20:53.547814       7 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0603 13:20:53.547823       7 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0603 13:20:53.547832       7 establishing_controller.go:76] Starting EstablishingController
I0603 13:20:53.547823       7 tlsconfig.go:240] Starting DynamicServingCertificateController
I0603 13:20:53.547847       7 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key
I0603 13:20:53.547863       7 crd_finalizer.go:266] Starting CRDFinalizer
I0603 13:20:53.547877       7 controller.go:86] Starting OpenAPI controller
I0603 13:20:53.547877       7 apf_controller.go:261] Starting API Priority and Fairness config controller
I0603 13:20:53.547809       7 customresource_discovery_controller.go:209] Starting DiscoveryController
I0603 13:20:53.547904       7 autoregister_controller.go:141] Starting autoregister controller
I0603 13:20:53.547910       7 cache.go:32] Waiting for caches to sync for autoregister controller
I0603 13:20:53.547903       7 controller.go:83] Starting OpenAPI AggregationController
I0603 13:20:53.547922       7 crdregistration_controller.go:111] Starting crd-autoregister controller
I0603 13:20:53.547927       7 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I0603 13:20:53.548159       7 available_controller.go:475] Starting AvailableConditionController
I0603 13:20:53.548167       7 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0603 13:20:53.548185       7 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0603 13:20:53.548187       7 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0603 13:20:53.548374       7 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0603 13:20:53.548405       7 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0603 13:20:53.548422       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I0603 13:20:53.548454       7 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
time="2021-06-03T13:20:53.552191727Z" level=info msg="Waiting for cloudcontroller rbac role to be created"
I0603 13:20:53.552935       7 controller.go:609] quota admission added evaluator for: namespaces
E0603 13:20:53.557481       7 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.43.0.1"}: failed to allocated ip:10.43.0.1 with error:cannot allocate resources of type serviceipallocations at this time
E0603 13:20:53.557890       7 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.21.0.2, ResourceVersion: 0, AdditionalErrorMsg:
time="2021-06-03T13:20:53.576052359Z" level=info msg="Containerd is now running"
time="2021-06-03T13:20:53.579371515Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-06-03T13:20:53.580747409Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
time="2021-06-03T13:20:53.581156326Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2021-06-03T13:20:53.581649019Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
W0603 13:20:53.581942       7 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I0603 13:20:53.582132       7 server.go:412] Version: v1.20.6+k3s1
W0603 13:20:53.582367       7 proxier.go:651] Failed to read file /lib/modules/5.12.8-arch1-1/modules.builtin with error open /lib/modules/5.12.8-arch1-1/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0603 13:20:53.582760       7 proxier.go:661] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0603 13:20:53.583081       7 proxier.go:661] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0603 13:20:53.583428       7 proxier.go:661] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0603 13:20:53.583689       7 proxier.go:661] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0603 13:20:53.583998       7 proxier.go:661] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
time="2021-06-03T13:20:53.584704658Z" level=info msg="Waiting for node k3d-k3s-default-server-0: nodes \"k3d-k3s-default-server-0\" not found"
E0603 13:20:53.586983       7 node.go:161] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
I0603 13:20:53.593724       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
W0603 13:20:53.593732       7 manager.go:159] Cannot detect current cgroup on cgroup v2
I0603 13:20:53.641350       7 shared_informer.go:247] Caches are synced for node_authorizer
I0603 13:20:53.647943       7 apf_controller.go:266] Running API Priority and Fairness config worker
I0603 13:20:53.647956       7 shared_informer.go:247] Caches are synced for crd-autoregister
I0603 13:20:53.647965       7 cache.go:39] Caches are synced for autoregister controller
I0603 13:20:53.648215       7 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0603 13:20:53.648215       7 cache.go:39] Caches are synced for AvailableConditionController controller
I0603 13:20:53.648435       7 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0603 13:20:54.547388       7 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0603 13:20:54.547401       7 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0603 13:20:54.550274       7 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0603 13:20:54.551947       7 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0603 13:20:54.551960       7 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
E0603 13:20:54.603079       7 node.go:161] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
I0603 13:20:54.738061       7 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0603 13:20:54.756764       7 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0603 13:20:54.868979       7 lease.go:233] Resetting endpoints for master service "kubernetes" to [172.21.0.2]
I0603 13:20:54.869351       7 controller.go:609] quota admission added evaluator for: endpoints
I0603 13:20:54.870714       7 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
time="2021-06-03T13:20:55.553198816Z" level=info msg="Kube API server is now running"
time="2021-06-03T13:20:55.553226854Z" level=info msg="k3s is up and running"
Flag --address has been deprecated, see --bind-address instead.
I0603 13:20:55.556099       7 controllermanager.go:176] Version: v1.20.6+k3s1
I0603 13:20:55.556325       7 deprecated_insecure_serving.go:53] Serving insecurely on 127.0.0.1:10252
time="2021-06-03T13:20:55.562066401Z" level=info msg="Creating CRD addons.k3s.cattle.io"
W0603 13:20:55.563204       7 authorization.go:47] Authorization is disabled
W0603 13:20:55.563218       7 authentication.go:40] Authentication is disabled
I0603 13:20:55.563225       7 deprecated_insecure_serving.go:51] Serving healthz insecurely on 127.0.0.1:10251
time="2021-06-03T13:20:55.564766081Z" level=info msg="Creating CRD helmcharts.helm.cattle.io"
time="2021-06-03T13:20:55.565905678Z" level=info msg="Creating CRD helmchartconfigs.helm.cattle.io"
time="2021-06-03T13:20:55.569639571Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available"
time="2021-06-03T13:20:55.588658382Z" level=info msg="Waiting for node k3d-k3s-default-server-0: nodes \"k3d-k3s-default-server-0\" not found"
time="2021-06-03T13:20:56.071209557Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available"
time="2021-06-03T13:20:56.071225786Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available"
time="2021-06-03T13:20:56.573844524Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available"
time="2021-06-03T13:20:56.573864543Z" level=info msg="Waiting for CRD helmchartconfigs.helm.cattle.io to become available"
E0603 13:20:56.806159       7 node.go:161] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" not found
time="2021-06-03T13:20:57.075874834Z" level=info msg="Done waiting for CRD helmchartconfigs.helm.cattle.io to become available"
time="2021-06-03T13:20:57.080608161Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.81.0.tgz"
time="2021-06-03T13:20:57.080808570Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"
time="2021-06-03T13:20:57.080868527Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
time="2021-06-03T13:20:57.080915254Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
time="2021-06-03T13:20:57.080966451Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
time="2021-06-03T13:20:57.081016799Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
time="2021-06-03T13:20:57.081068596Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
time="2021-06-03T13:20:57.081142182Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
time="2021-06-03T13:20:57.081188569Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
time="2021-06-03T13:20:57.081232327Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
time="2021-06-03T13:20:57.081300433Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
time="2021-06-03T13:20:57.081346730Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
time="2021-06-03T13:20:57.181571904Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
time="2021-06-03T13:20:57.181578573Z" level=info msg="Starting /v1, Kind=Secret controller"
time="2021-06-03T13:20:57.183382754Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2021-06-03T13:20:57.183859697Z" level=info msg="Active TLS secret k3s-serving (ver=207) (count 8): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.21.0.2:172.21.0.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=D03B17AFDE02673826FBE68A6911078E08F08374]"
I0603 13:20:57.196651       7 controller.go:609] quota admission added evaluator for: addons.k3s.cattle.io
I0603 13:20:57.205456       7 controller.go:609] quota admission added evaluator for: serviceaccounts
I0603 13:20:57.215017       7 request.go:655] Throttling request took 1.049045199s, request: GET:https://127.0.0.1:6444/apis/coordination.k8s.io/v1beta1?timeout=32s
I0603 13:20:57.225760       7 controller.go:609] quota admission added evaluator for: deployments.apps
time="2021-06-03T13:20:57.511513708Z" level=info msg="Starting /v1, Kind=Endpoints controller"
time="2021-06-03T13:20:57.511518658Z" level=info msg="Starting /v1, Kind=Node controller"
time="2021-06-03T13:20:57.511524138Z" level=info msg="Starting /v1, Kind=Service controller"
time="2021-06-03T13:20:57.511526928Z" level=info msg="Starting /v1, Kind=Pod controller"
time="2021-06-03T13:20:57.582562076Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller"
time="2021-06-03T13:20:57.582563196Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
time="2021-06-03T13:20:57.582574525Z" level=info msg="Starting batch/v1, Kind=Job controller"
time="2021-06-03T13:20:57.592778996Z" level=info msg="Waiting for node k3d-k3s-default-server-0: nodes \"k3d-k3s-default-server-0\" not found"
time="2021-06-03T13:20:57.708945284Z" level=info msg="Cluster dns configmap has been set successfully"
W0603 13:20:58.116910       7 controllermanager.go:546] Skipping "ttl-after-finished"
W0603 13:20:58.116926       7 controllermanager.go:546] Skipping "ephemeral-volume"
I0603 13:20:58.116914       7 shared_informer.go:240] Waiting for caches to sync for tokens
I0603 13:20:58.125053       7 node_lifecycle_controller.go:380] Sending events to api server.
I0603 13:20:58.125156       7 taint_manager.go:163] Sending events to api server.
I0603 13:20:58.125198       7 node_lifecycle_controller.go:508] Controller will reconcile labels.
I0603 13:20:58.125219       7 controllermanager.go:554] Started "nodelifecycle"
W0603 13:20:58.125225       7 controllermanager.go:533] "bootstrapsigner" is disabled
W0603 13:20:58.125228       7 controllermanager.go:533] "cloud-node-lifecycle" is disabled
I0603 13:20:58.125306       7 node_lifecycle_controller.go:542] Starting node controller
I0603 13:20:58.125315       7 shared_informer.go:240] Waiting for caches to sync for taint
I0603 13:20:58.129314       7 controllermanager.go:554] Started "persistentvolume-binder"
I0603 13:20:58.129381       7 pv_controller_base.go:307] Starting persistent volume controller
I0603 13:20:58.129386       7 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0603 13:20:58.132298       7 controllermanager.go:554] Started "csrcleaner"
I0603 13:20:58.132348       7 cleaner.go:82] Starting CSR cleaner controller
I0603 13:20:58.135721       7 controllermanager.go:554] Started "serviceaccount"
I0603 13:20:58.135767       7 serviceaccounts_controller.go:117] Starting service account controller
I0603 13:20:58.135773       7 shared_informer.go:240] Waiting for caches to sync for service account
I0603 13:20:58.140303       7 controllermanager.go:554] Started "disruption"
I0603 13:20:58.140355       7 disruption.go:331] Starting disruption controller
I0603 13:20:58.140359       7 shared_informer.go:240] Waiting for caches to sync for disruption
I0603 13:20:58.143742       7 controllermanager.go:554] Started "cronjob"
W0603 13:20:58.143750       7 controllermanager.go:533] "route" is disabled
I0603 13:20:58.143805       7 cronjob_controller.go:96] Starting CronJob Manager
I0603 13:20:58.147428       7 controllermanager.go:554] Started "pvc-protection"
I0603 13:20:58.147523       7 pvc_protection_controller.go:110] Starting PVC protection controller
I0603 13:20:58.147530       7 shared_informer.go:240] Waiting for caches to sync for PVC protection
I0603 13:20:58.217083       7 shared_informer.go:247] Caches are synced for tokens
time="2021-06-03T13:20:58.308308089Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
W0603 13:20:58.549833       7 handler_proxy.go:102] no RequestInfo found in the context
E0603 13:20:58.549870       7 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0603 13:20:58.549876       7 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
time="2021-06-03T13:20:58.554478954Z" level=info msg="Running cloud-controller-manager with args --profiling=false"
I0603 13:20:58.557003       7 controllermanager.go:141] Version: v1.20.6+k3s1
time="2021-06-03T13:20:58.580579762Z" level=info msg="Stopped tunnel to 127.0.0.1:6443"
time="2021-06-03T13:20:58.580600581Z" level=info msg="Connecting to proxy" url="wss://172.21.0.2:6443/v1-k3s/connect"
time="2021-06-03T13:20:58.580649148Z" level=info msg="Proxy done" err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-06-03T13:20:58.580702475Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
time="2021-06-03T13:20:58.582394471Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
W0603 13:20:58.616580       7 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I0603 13:20:58.616808       7 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I0603 13:20:58.616938       7 container_manager_linux.go:287] container manager verified user specified cgroup-root exists: []
I0603 13:20:58.616952       7 container_manager_linux.go:292] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/k3s SystemCgroupsName: KubeletCgroupsName:/k3s ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
I0603 13:20:58.617027       7 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
I0603 13:20:58.617039       7 container_manager_linux.go:323] [topologymanager] Initializing Topology Manager with none policy and container-level scope
I0603 13:20:58.617046       7 container_manager_linux.go:328] Creating device plugin manager: true
I0603 13:20:58.617218       7 kubelet.go:265] Adding pod path: /var/lib/rancher/k3s/agent/pod-manifests
I0603 13:20:58.617243       7 kubelet.go:276] Watching apiserver
I0603 13:20:58.617295       7 kubelet.go:453] Kubelet client is not nil
I0603 13:20:58.617760       7 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.4-k3s1, apiVersion: v1alpha2
W0603 13:20:58.617844       7 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I0603 13:20:58.618063       7 server.go:1177] Started kubelet
I0603 13:20:58.618118       7 server.go:148] Starting to listen on 0.0.0.0:10250
E0603 13:20:58.618308       7 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
E0603 13:20:58.618325       7 kubelet.go:1296] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
I0603 13:20:58.618615       7 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
I0603 13:20:58.618701       7 volume_manager.go:271] Starting Kubelet Volume Manager
I0603 13:20:58.618724       7 desired_state_of_world_populator.go:142] Desired state populator starts to run
I0603 13:20:58.618823       7 server.go:410] Adding debug handlers to kubelet server.
I0603 13:20:58.623258       7 cpu_manager.go:193] [cpumanager] starting with none policy
I0603 13:20:58.623269       7 cpu_manager.go:194] [cpumanager] reconciling every 10s
I0603 13:20:58.623282       7 state_mem.go:36] [cpumanager] initializing new in-memory state store
E0603 13:20:58.624001       7 nodelease.go:49] failed to get node "k3d-k3s-default-server-0" when trying to set owner ref to the node lease: nodes "k3d-k3s-default-server-0" not found
I0603 13:20:58.624492       7 policy_none.go:43] [cpumanager] none policy: Start
I0603 13:20:58.635277       7 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
I0603 13:20:58.635294       7 status_manager.go:158] Starting to sync pod status with apiserver
I0603 13:20:58.635302       7 kubelet.go:1833] Starting kubelet main sync loop.
E0603 13:20:58.635320       7 kubelet.go:1857] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0603 13:20:58.654589       7 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
I0603 13:20:58.654700       7 plugin_manager.go:114] Starting Kubelet Plugin Manager
E0603 13:20:58.654854       7 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "k3d-k3s-default-server-0" not found
E0603 13:20:58.718804       7 kubelet.go:2268] node "k3d-k3s-default-server-0" not found
I0603 13:20:58.719166       7 kubelet_node_status.go:71] Attempting to register node k3d-k3s-default-server-0
E0603 13:20:58.752215       7 resource_quota_controller.go:162] initial discovery check failure, continuing and counting on future sync update: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0603 13:20:58.752243       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps
I0603 13:20:58.752262       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps
I0603 13:20:58.752286       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch
I0603 13:20:58.752316       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I0603 13:20:58.752334       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I0603 13:20:58.752362       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for helmchartconfigs.helm.cattle.io
I0603 13:20:58.752412       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps
I0603 13:20:58.752427       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps
I0603 13:20:58.752447       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0603 13:20:58.752532       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch
I0603 13:20:58.752579       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges
I0603 13:20:58.752601       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I0603 13:20:58.752622       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.extensions
I0603 13:20:58.752659       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io
I0603 13:20:58.752684       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0603 13:20:58.752706       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for addons.k3s.cattle.io
I0603 13:20:58.752751       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts
I0603 13:20:58.752771       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints
I0603 13:20:58.752788       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I0603 13:20:58.752808       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates
I0603 13:20:58.752822       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I0603 13:20:58.752836       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I0603 13:20:58.752852       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
I0603 13:20:58.752868       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io
I0603 13:20:58.752892       7 controllermanager.go:554] Started "resourcequota"
I0603 13:20:58.752903       7 resource_quota_controller.go:273] Starting resource quota controller
I0603 13:20:58.752909       7 shared_informer.go:240] Waiting for caches to sync for resource quota
I0603 13:20:58.752921       7 resource_quota_monitor.go:304] QuotaMonitor running
I0603 13:20:58.756170       7 controllermanager.go:554] Started "job"
I0603 13:20:58.756216       7 job_controller.go:148] Starting job controller
I0603 13:20:58.756219       7 shared_informer.go:240] Waiting for caches to sync for job
I0603 13:20:58.758980       7 controllermanager.go:554] Started "deployment"
I0603 13:20:58.759035       7 deployment_controller.go:153] Starting deployment controller
I0603 13:20:58.759039       7 shared_informer.go:240] Waiting for caches to sync for deployment
I0603 13:20:58.767454       7 controllermanager.go:554] Started "horizontalpodautoscaling"
I0603 13:20:58.767536       7 horizontal.go:169] Starting HPA controller
I0603 13:20:58.767545       7 shared_informer.go:240] Waiting for caches to sync for HPA
I0603 13:20:58.818625       7 controllermanager.go:554] Started "statefulset"
I0603 13:20:58.818647       7 stateful_set.go:146] Starting stateful set controller
I0603 13:20:58.818652       7 shared_informer.go:240] Waiting for caches to sync for stateful set
I0603 13:20:58.818763       7 reconciler.go:157] Reconciler: start to sync state
E0603 13:20:58.818880       7 kubelet.go:2268] node "k3d-k3s-default-server-0" not found
E0603 13:20:58.918927       7 kubelet.go:2268] node "k3d-k3s-default-server-0" not found
I0603 13:20:58.969215       7 controllermanager.go:554] Started "ttl"
W0603 13:20:58.969225       7 controllermanager.go:533] "tokencleaner" is disabled
W0603 13:20:58.969228       7 controllermanager.go:533] "service" is disabled
I0603 13:20:58.969252       7 ttl_controller.go:121] Starting TTL controller
I0603 13:20:58.969257       7 shared_informer.go:240] Waiting for caches to sync for TTL
E0603 13:20:59.018950       7 kubelet.go:2268] node "k3d-k3s-default-server-0" not found
I0603 13:20:59.019362       7 kubelet_node_status.go:74] Successfully registered node k3d-k3s-default-server-0
time="2021-06-03T13:20:59.022179434Z" level=info msg="Updated coredns node hosts entry [172.21.0.2 k3d-k3s-default-server-0]"
I0603 13:20:59.168279       7 garbagecollector.go:142] Starting garbage collector controller
I0603 13:20:59.168289       7 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0603 13:20:59.168301       7 graph_builder.go:289] GraphBuilder running
I0603 13:20:59.168349       7 controllermanager.go:554] Started "garbagecollector"
time="2021-06-03T13:20:59.311788821Z" level=info msg="Control-plane role label has been set successfully on node: k3d-k3s-default-server-0"
I0603 13:20:59.419209       7 controllermanager.go:554] Started "root-ca-cert-publisher"
I0603 13:20:59.419247       7 publisher.go:98] Starting root CA certificate configmap publisher
I0603 13:20:59.419254       7 shared_informer.go:240] Waiting for caches to sync for crt configmap
I0603 13:20:59.468967       7 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
I0603 13:20:59.468980       7 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
I0603 13:20:59.468994       7 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
I0603 13:20:59.469195       7 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
I0603 13:20:59.469204       7 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
I0603 13:20:59.469215       7 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
I0603 13:20:59.469380       7 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
I0603 13:20:59.469387       7 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
I0603 13:20:59.469396       7 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
I0603 13:20:59.469586       7 controllermanager.go:554] Started "csrsigning"
I0603 13:20:59.469617       7 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
I0603 13:20:59.469623       7 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
I0603 13:20:59.469638       7 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
I0603 13:20:59.518490       7 controllermanager.go:554] Started "csrapproving"
I0603 13:20:59.518513       7 certificate_controller.go:118] Starting certificate controller "csrapproving"
I0603 13:20:59.518518       7 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
time="2021-06-03T13:20:59.594443997Z" level=info msg="Waiting for node k3d-k3s-default-server-0 CIDR not assigned yet"
I0603 13:20:59.669018       7 controllermanager.go:554] Started "pv-protection"
I0603 13:20:59.669046       7 pv_protection_controller.go:83] Starting PV protection controller
I0603 13:20:59.669051       7 shared_informer.go:240] Waiting for caches to sync for PV protection
I0603 13:20:59.819283       7 controllermanager.go:554] Started "replicaset"
I0603 13:20:59.819327       7 replica_set.go:182] Starting replicaset controller
I0603 13:20:59.819336       7 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
I0603 13:20:59.969105       7 controllermanager.go:554] Started "daemonset"
I0603 13:20:59.969130       7 daemon_controller.go:285] Starting daemon sets controller
I0603 13:20:59.969134       7 shared_informer.go:240] Waiting for caches to sync for daemon sets
I0603 13:21:00.018510       7 node_ipam_controller.go:91] Sending events to api server.
E0603 13:21:00.851800       7 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0603 13:21:01.265056       7 controllermanager.go:391] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0603 13:21:01.266038       7 node_lifecycle_controller.go:77] Sending events to api server
I0603 13:21:01.266059       7 controllermanager.go:258] Started "cloud-node-lifecycle"
I0603 13:21:01.266888       7 node_controller.go:115] Sending events to api server.
I0603 13:21:01.266941       7 controllermanager.go:258] Started "cloud-node"
I0603 13:21:01.267021       7 node_controller.go:154] Waiting for informer caches to sync
I0603 13:21:01.367090       7 node_controller.go:390] Initializing node k3d-k3s-default-server-0 with cloud provider
time="2021-06-03T13:21:01.367105887Z" level=info msg="Couldn't find node internal ip label on node k3d-k3s-default-server-0"
time="2021-06-03T13:21:01.367126156Z" level=info msg="Couldn't find node hostname label on node k3d-k3s-default-server-0"
time="2021-06-03T13:21:01.368462592Z" level=info msg="Couldn't find node internal ip label on node k3d-k3s-default-server-0"
time="2021-06-03T13:21:01.368471441Z" level=info msg="Couldn't find node hostname label on node k3d-k3s-default-server-0"
I0603 13:21:01.370800       7 node_controller.go:454] Successfully initialized node k3d-k3s-default-server-0 with cloud provider
I0603 13:21:01.370849       7 event.go:291] "Event occurred" object="k3d-k3s-default-server-0" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
I0603 13:21:01.588414       7 node.go:172] Successfully retrieved node IP: 172.21.0.2
I0603 13:21:01.588434       7 server_others.go:143] kube-proxy node IP is an IPv4 address (172.21.0.2), assume IPv4 operation
I0603 13:21:01.588855       7 server_others.go:186] Using iptables Proxier.
I0603 13:21:01.588987       7 server.go:650] Version: v1.20.6+k3s1
I0603 13:21:01.589179       7 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0603 13:21:01.589205       7 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0603 13:21:01.589271       7 config.go:315] Starting service config controller
I0603 13:21:01.589276       7 shared_informer.go:240] Waiting for caches to sync for service config
I0603 13:21:01.589286       7 config.go:224] Starting endpoint slice config controller
I0603 13:21:01.589289       7 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
time="2021-06-03T13:21:01.597650132Z" level=info msg="Waiting for node k3d-k3s-default-server-0 CIDR not assigned yet"
I0603 13:21:01.689362       7 shared_informer.go:247] Caches are synced for endpoint slice config
I0603 13:21:01.689365       7 shared_informer.go:247] Caches are synced for service config
W0603 13:21:01.969481       7 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
time="2021-06-03T13:21:03.599267197Z" level=info msg="Waiting for node k3d-k3s-default-server-0 CIDR not assigned yet"
time="2021-06-03T13:21:05.603313305Z" level=info msg="Waiting for node k3d-k3s-default-server-0 CIDR not assigned yet"
time="2021-06-03T13:21:07.608492992Z" level=info msg="Waiting for node k3d-k3s-default-server-0 CIDR not assigned yet"
I0603 13:21:08.634614       7 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
time="2021-06-03T13:21:09.611049139Z" level=info msg="Waiting for node k3d-k3s-default-server-0 CIDR not assigned yet"
I0603 13:21:10.028617       7 range_allocator.go:82] Sending events to api server.
I0603 13:21:10.028744       7 range_allocator.go:110] No Service CIDR provided. Skipping filtering out service addresses.
I0603 13:21:10.028754       7 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
I0603 13:21:10.028771       7 controllermanager.go:554] Started "nodeipam"
I0603 13:21:10.028787       7 node_ipam_controller.go:159] Starting ipam controller
I0603 13:21:10.028796       7 shared_informer.go:240] Waiting for caches to sync for node
I0603 13:21:10.033167       7 controllermanager.go:554] Started "replicationcontroller"
I0603 13:21:10.033186       7 replica_set.go:182] Starting replicationcontroller controller
I0603 13:21:10.033198       7 shared_informer.go:240] Waiting for caches to sync for ReplicationController
I0603 13:21:10.037221       7 controllermanager.go:554] Started "attachdetach"
I0603 13:21:10.037277       7 attach_detach_controller.go:328] Starting attach detach controller
I0603 13:21:10.037281       7 shared_informer.go:240] Waiting for caches to sync for attach detach
I0603 13:21:10.040312       7 controllermanager.go:554] Started "endpointslice"
I0603 13:21:10.040362       7 endpointslice_controller.go:237] Starting endpoint slice controller
I0603 13:21:10.040369       7 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
I0603 13:21:10.043354       7 controllermanager.go:554] Started "endpointslicemirroring"
I0603 13:21:10.043407       7 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
I0603 13:21:10.043411       7 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
I0603 13:21:10.046837       7 controllermanager.go:554] Started "podgc"
I0603 13:21:10.046847       7 gc_controller.go:89] Starting GC controller
I0603 13:21:10.046852       7 shared_informer.go:240] Waiting for caches to sync for GC
E0603 13:21:10.056214       7 namespaced_resources_deleter.go:161] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0603 13:21:10.056252       7 controllermanager.go:554] Started "namespace"
I0603 13:21:10.056260       7 namespace_controller.go:200] Starting namespace controller
I0603 13:21:10.056265       7 shared_informer.go:240] Waiting for caches to sync for namespace
I0603 13:21:10.059422       7 controllermanager.go:554] Started "persistentvolume-expander"
I0603 13:21:10.059466       7 expand_controller.go:310] Starting expand controller
I0603 13:21:10.059469       7 shared_informer.go:240] Waiting for caches to sync for expand
I0603 13:21:10.062789       7 controllermanager.go:554] Started "clusterrole-aggregation"
I0603 13:21:10.062844       7 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
I0603 13:21:10.062849       7 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
I0603 13:21:10.070443       7 controllermanager.go:554] Started "endpoint"
I0603 13:21:10.070527       7 endpoints_controller.go:184] Starting endpoint controller
I0603 13:21:10.070532       7 shared_informer.go:240] Waiting for caches to sync for endpoint
I0603 13:21:10.070591       7 shared_informer.go:240] Waiting for caches to sync for resource quota
E0603 13:21:10.076360       7 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0603 13:21:10.118603       7 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0603 13:21:10.118754       7 shared_informer.go:247] Caches are synced for stateful set
I0603 13:21:10.119346       7 shared_informer.go:247] Caches are synced for crt configmap
I0603 13:21:10.119357       7 shared_informer.go:247] Caches are synced for ReplicaSet
W0603 13:21:10.121796       7 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k3d-k3s-default-server-0" does not exist
I0603 13:21:10.125341       7 shared_informer.go:247] Caches are synced for taint
I0603 13:21:10.125382       7 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
W0603 13:21:10.125414       7 node_lifecycle_controller.go:1044] Missing timestamp for Node k3d-k3s-default-server-0. Assuming now as a timestamp.
I0603 13:21:10.125429       7 taint_manager.go:187] Starting NoExecuteTaintManager
I0603 13:21:10.125443       7 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
I0603 13:21:10.125490       7 event.go:291] "Event occurred" object="k3d-k3s-default-server-0" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node k3d-k3s-default-server-0 event: Registered Node k3d-k3s-default-server-0 in Controller"
I0603 13:21:10.128826       7 shared_informer.go:247] Caches are synced for node
I0603 13:21:10.128843       7 range_allocator.go:172] Starting range CIDR allocator
I0603 13:21:10.128847       7 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0603 13:21:10.128850       7 shared_informer.go:247] Caches are synced for cidrallocator
I0603 13:21:10.132394       7 range_allocator.go:373] Set node k3d-k3s-default-server-0 PodCIDR to [10.42.0.0/24]
I0603 13:21:10.133733       7 shared_informer.go:247] Caches are synced for ReplicationController
I0603 13:21:10.135848       7 shared_informer.go:247] Caches are synced for service account
I0603 13:21:10.140399       7 shared_informer.go:247] Caches are synced for endpoint_slice
I0603 13:21:10.140432       7 shared_informer.go:247] Caches are synced for disruption
I0603 13:21:10.140439       7 disruption.go:339] Sending events to api server.
I0603 13:21:10.143492       7 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0603 13:21:10.146932       7 shared_informer.go:247] Caches are synced for GC
I0603 13:21:10.147617       7 shared_informer.go:247] Caches are synced for PVC protection
I0603 13:21:10.156297       7 shared_informer.go:247] Caches are synced for job
I0603 13:21:10.156310       7 shared_informer.go:247] Caches are synced for namespace
I0603 13:21:10.159116       7 shared_informer.go:247] Caches are synced for deployment
I0603 13:21:10.159513       7 shared_informer.go:247] Caches are synced for expand
I0603 13:21:10.162933       7 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0603 13:21:10.167626       7 shared_informer.go:247] Caches are synced for HPA
I0603 13:21:10.169036       7 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0603 13:21:10.169111       7 shared_informer.go:247] Caches are synced for PV protection
I0603 13:21:10.169189       7 shared_informer.go:247] Caches are synced for daemon sets
I0603 13:21:10.169267       7 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0603 13:21:10.169283       7 shared_informer.go:247] Caches are synced for TTL
I0603 13:21:10.169409       7 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0603 13:21:10.169651       7 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0603 13:21:10.170553       7 shared_informer.go:247] Caches are synced for endpoint
I0603 13:21:10.226583       7 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.42.0.0/24
I0603 13:21:10.226803       7 kubelet_network.go:77] Setting Pod CIDR:  -> 10.42.0.0/24
I0603 13:21:10.229461       7 shared_informer.go:247] Caches are synced for persistent volume
I0603 13:21:10.337359       7 shared_informer.go:247] Caches are synced for attach detach
I0603 13:21:10.352993       7 shared_informer.go:247] Caches are synced for resource quota
I0603 13:21:10.370642       7 shared_informer.go:247] Caches are synced for resource quota
I0603 13:21:10.623643       7 controller.go:609] quota admission added evaluator for: replicasets.apps
I0603 13:21:10.625645       7 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-86cbb8457f to 1"
I0603 13:21:10.625976       7 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-854c77959c to 1"
I0603 13:21:10.626690       7 event.go:291] "Event occurred" object="kube-system/local-path-provisioner" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set local-path-provisioner-5ff76fc89d to 1"
E0603 13:21:10.674839       7 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
E0603 13:21:10.674916       7 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
E0603 13:21:10.676961       7 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
E0603 13:21:10.721347       7 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0603 13:21:10.723514       7 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0603 13:21:10.823626       7 shared_informer.go:247] Caches are synced for garbage collector
I0603 13:21:10.868381       7 shared_informer.go:247] Caches are synced for garbage collector
I0603 13:21:10.868393       7 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0603 13:21:10.876028       7 event.go:291] "Event occurred" object="kube-system/metrics-server-86cbb8457f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-86cbb8457f-2p82z"
I0603 13:21:10.876727       7 event.go:291] "Event occurred" object="kube-system/coredns-854c77959c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-854c77959c-b6gsv"
I0603 13:21:10.877348       7 event.go:291] "Event occurred" object="kube-system/local-path-provisioner-5ff76fc89d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: local-path-provisioner-5ff76fc89d-tlv4q"
I0603 13:21:10.879289       7 controller.go:609] quota admission added evaluator for: events.events.k8s.io
I0603 13:21:10.879573       7 topology_manager.go:187] [topologymanager] Topology Admit Handler
I0603 13:21:10.882905       7 topology_manager.go:187] [topologymanager] Topology Admit Handler
I0603 13:21:10.886243       7 topology_manager.go:187] [topologymanager] Topology Admit Handler
I0603 13:21:11.028064       7 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-dir" (UniqueName: "kubernetes.io/empty-dir/a72fc48a-ec24-4f5e-a1b2-a2e13dc4bed1-tmp-dir") pod "metrics-server-86cbb8457f-2p82z" (UID: "a72fc48a-ec24-4f5e-a1b2-a2e13dc4bed1")
I0603 13:21:11.028087       7 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "metrics-server-token-tqks8" (UniqueName: "kubernetes.io/secret/a72fc48a-ec24-4f5e-a1b2-a2e13dc4bed1-metrics-server-token-tqks8") pod "metrics-server-86cbb8457f-2p82z" (UID: "a72fc48a-ec24-4f5e-a1b2-a2e13dc4bed1")
I0603 13:21:11.028102       7 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a656cd2d-6086-4c4b-a4e2-e8d96615b26e-config-volume") pod "coredns-854c77959c-b6gsv" (UID: "a656cd2d-6086-4c4b-a4e2-e8d96615b26e")
I0603 13:21:11.028114       7 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-jwj9j" (UniqueName: "kubernetes.io/secret/a656cd2d-6086-4c4b-a4e2-e8d96615b26e-coredns-token-jwj9j") pod "coredns-854c77959c-b6gsv" (UID: "a656cd2d-6086-4c4b-a4e2-e8d96615b26e")
I0603 13:21:11.028127       7 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/10f5b1e0-2a8d-4041-9a38-4bda46be92c2-config-volume") pod "local-path-provisioner-5ff76fc89d-tlv4q" (UID: "10f5b1e0-2a8d-4041-9a38-4bda46be92c2")
I0603 13:21:11.028141       7 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "local-path-provisioner-service-account-token-xpvz5" (UniqueName: "kubernetes.io/secret/10f5b1e0-2a8d-4041-9a38-4bda46be92c2-local-path-provisioner-service-account-token-xpvz5") pod "local-path-provisioner-5ff76fc89d-tlv4q" (UID: "10f5b1e0-2a8d-4041-9a38-4bda46be92c2")
time="2021-06-03T13:21:11.613742723Z" level=info msg="Node CIDR assigned for: k3d-k3s-default-server-0"
I0603 13:21:11.613821       7 flannel.go:92] Determining IP address of default interface
I0603 13:21:11.613965       7 flannel.go:105] Using interface with name eth0 and address 172.21.0.2
I0603 13:21:11.615073       7 kube.go:117] Waiting 10m0s for node controller to sync
I0603 13:21:11.615088       7 kube.go:300] Starting kube subnet manager
time="2021-06-03T13:21:11.619557149Z" level=info msg="labels have been set successfully on node: k3d-k3s-default-server-0"
I0603 13:21:11.728638       7 network_policy_controller.go:138] Starting network policy controller
I0603 13:21:11.733506       7 network_policy_controller.go:145] Starting network policy controller full sync goroutine
W0603 13:21:11.775891       7 handler_proxy.go:102] no RequestInfo found in the context
E0603 13:21:11.775919       7 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0603 13:21:11.775927       7 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0603 13:21:12.615159       7 kube.go:124] Node controller sync successful
I0603 13:21:12.615192       7 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
I0603 13:21:12.623422       7 flannel.go:78] Wrote subnet file to /run/flannel/subnet.env
I0603 13:21:12.623431       7 flannel.go:82] Running backend.
I0603 13:21:12.623435       7 vxlan_network.go:60] watching for new subnet leases
I0603 13:21:12.624006       7 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
I0603 13:21:12.624013       7 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0603 13:21:12.624127       7 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
I0603 13:21:12.624133       7 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0603 13:21:12.624312       7 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
I0603 13:21:12.624374       7 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0603 13:21:12.624601       7 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN
I0603 13:21:12.624688       7 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0603 13:21:12.624861       7 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
I0603 13:21:12.625070       7 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0603 13:21:12.625332       7 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0603 13:21:12.625696       7 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
I0603 13:21:12.626202       7 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN
I0603 13:21:12.626708       7 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
E0603 13:21:23.653267       7 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.53.226:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.53.226:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.43.53.226:443: connect: connection refused
E0603 13:21:23.653604       7 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.53.226:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.53.226:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.43.53.226:443: connect: connection refused
E0603 13:21:23.658446       7 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.53.226:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.53.226:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.43.53.226:443: connect: connection refused
E0603 13:21:23.678690       7 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.53.226:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.53.226:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.43.53.226:443: connect: connection refused
E0603 13:21:23.718927       7 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.53.226:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.53.226:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.43.53.226:443: connect: connection refused
E0603 13:21:23.799205       7 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.53.226:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.53.226:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.43.53.226:443: connect: connection refused
W0603 13:25:58.642742       7 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"

Versions

docker

Client:
 Version:           20.10.6
 API version:       1.41
 Go version:        go1.16.3
 Git commit:        370c28948e
 Built:             Mon Apr 12 14:10:41 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          20.10.6
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.3
  Git commit:       8728dd246c
  Built:            Mon Apr 12 14:10:25 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.5.2
  GitCommit:        36cc874494a56a253cd181a1a685b44b58a2e34a.m
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-tp-docker)

Server:
 Containers: 2
  Running: 2
  Paused: 0
  Stopped: 0
 Images: 11
 Server Version: 20.10.6
 Storage Driver: btrfs
  Build Version: Btrfs v5.11.1
  Library Version: 102
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 36cc874494a56a253cd181a1a685b44b58a2e34a.m
 runc version: b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.12.8-arch1-1
 Operating System: Arch Linux
 OSType: linux
 Architecture: x86_64
 CPUs: 24
 Total Memory: 31.36GiB
 Name: bamboo
 ID: COO7:XUNU:RFDR:ABK6:UHSJ:VPA5:C6UV:JTUX:FVBB:LQVB:QRJR:IFTI
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No kernel memory limit support
WARNING: No oom kill disable support
nguyenvulong commented 3 years ago

Also check mine: https://github.com/rancher/k3d/issues/619

I guess we have the same problem.

If you try to do this on another server, does it work? Mine worked fine on two other servers, though; I am not sure what made this one go wrong.

iwilltry42 commented 3 years ago

Hi @jgresty, thanks for opening this issue! I couldn't do any real research right now, but could this be related to the issue with newer kernel versions, as in https://github.com/k3s-io/k3s/pull/3337? :thinking: Can you give it a try with a newer version of k3s (--image flag)?

iwilltry42 commented 3 years ago

Hi @jgresty, any update on this? Looking through the server logs, it seems like my previous comment can be disregarded (which makes sense, since you're already using the workaround flags). I was also thinking of clock skew between Docker and your host machine (which would explain why k3d doesn't catch the serverlb logs properly), but that looks OK as well. We could make the log message we're waiting for a little less specific by only using start worker process (dropping the es suffix), as those lines only follow after the original log line :thinking: But then the time difference may still be too small.
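
For readers following along, the wait logic under discussion boils down to tailing the serverlb container's logs from a recorded start time and scanning for a marker string. Below is a minimal sketch of that idea using the Docker Go SDK; it is not k3d's actual implementation, and the container name, marker string, and timestamp handling are illustrative assumptions only.

package main

import (
	"bufio"
	"context"
	"fmt"
	"log"
	"strings"
	"time"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// waitForLogMessage tails a container's logs starting at `since` and returns once a
// line containing `needle` appears, or the context is cancelled. For non-TTY containers
// the stream is multiplexed with stdcopy headers; a plain substring search is good
// enough for a sketch.
func waitForLogMessage(ctx context.Context, cli *client.Client, containerID, needle string, since time.Time) error {
	rc, err := cli.ContainerLogs(ctx, containerID, types.ContainerLogsOptions{
		ShowStdout: true,
		ShowStderr: true,
		Follow:     true,
		Since:      since.Format(time.RFC3339Nano), // subsecond precision: the suspected culprit
	})
	if err != nil {
		return err
	}
	defer rc.Close()
	scanner := bufio.NewScanner(rc)
	for scanner.Scan() {
		if strings.Contains(scanner.Text(), needle) {
			return nil
		}
	}
	if err := scanner.Err(); err != nil {
		return err
	}
	return fmt.Errorf("log stream ended before %q appeared", needle)
}

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	// Container name taken from the original report; adjust for your cluster name.
	if err := waitForLogMessage(ctx, cli, "k3d-k3s-default-serverlb", "start worker processes", time.Now()); err != nil {
		log.Fatal(err)
	}
	fmt.Println("load balancer is ready")
}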

FWIW: Can you connect to the cluster while k3d is still waiting? E.g. run k3d kubeconfig get $CLUSTER or k3d kubeconfig merge $CLUSTER to get the kubeconfig in a different terminal and then try to connect with kubectl.

jgresty commented 3 years ago

Sorry, I forgot about this issue; I worked around it by running in a Fedora VM, but obviously that isn't ideal...

I tried specifying the latest image anyway (v1.21.1-k3s1), but that didn't help.

I can connect to the cluster with kubectl after running k3d kubeconfig merge k3s-default --kubeconfig-switch-context --kubeconfig-merge-default

➤ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"archive", BuildDate:"2021-05-14T14:09:09Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1+k3s1", GitCommit:"75dba57f9b1de3ec0403b148c52c348e1dee2a5e", GitTreeState:"clean", BuildDate:"2021-05-21T16:12:29Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
iwilltry42 commented 3 years ago

That's just such a weird issue :thinking: Would you be open to testing a debug release for this? I'd just add some more logs etc. to it to see where k3d fails to read the proper log messages :thinking:

jgresty commented 3 years ago

Sure, I'll be happy to assist

iwilltry42 commented 3 years ago

Before we dig deeper: can you still reproduce this with the latest k3d release, v4.4.5? (I guess so, but it's worth a try :grimacing:)

jgresty commented 3 years ago

I can still reproduce it with:

k3d version v4.4.5
k3s version v1.21.1-k3s1 (default)
tglunde commented 3 years ago

I can also reproduce this on v4.4.5 with k3s v1.21.1-k3s1. How can I assist?

tglunde commented 3 years ago

Name resolution is not an issue anymore, but I am getting this error now, every 30 seconds:

proxier.go:1612] "Failed to execute iptables-restore" err="exit status 2 (ip6tables-restore v1.8.5 (nf_tables): unknown option \"--random-fully\"\nError occurred at line: 18\nTry `ip6tables-restore -h' or 'ip6tables-restore --help' for more information.\n)" I0614 12:39:41.039647 8 proxier.go:859] "Sync failed" retryingTime="30s"

tglunde commented 3 years ago

https://github.com/kubernetes/kubeadm/issues/784 ?

iwilltry42 commented 3 years ago

@tglunde, do you have exactly the same issue? What are your system specs?

Unfortunately, I have no way of debugging this properly, as I don't have problems on any of my 5 test servers (all with different OS/kernel/Docker version combinations). :confounded: But I also cannot exactly replicate your environment, @jgresty :thinking: Is there anyone here who could provide me with a VM/Vagrant image to reproduce a failing system?

iwilltry42 commented 3 years ago

@jgresty & @tglunde So I created a debug release to at least try to get a clue about what's happening here. It would be great if you could give it a try: https://drive.google.com/drive/folders/1JLlq6IUUn3OV_Mm7sQMjx8kg7epW-wxv?usp=sharing Download the release matching your setup and try it with the following command: K3D_LOG_NODE_WAIT_LOGS=loadbalancer k3d cluster create debug --trace

This should generate logs where the loadbalancer logs are included in the output, to ensure that there's no clock skew involved (though, to be safe, I also dropped subsecond precision from the wait command).

tglunde commented 3 years ago

Not sure whether this is exactly the same issue, but it's similar. When I start the debug version you provided, everything seems to go smoothly, without errors. Please find the log file attached. Anything else I can do? If you like, we could also do a Zoom session. log.txt

iwilltry42 commented 3 years ago

@tglunde, if it's working now, then I suspect a rounding issue when determining the starting point from which Docker should watch the logs... Let's wait and see what others observe with the debug build. By the way, I mistyped the env var in my previous comment and have fixed that now.

tglunde commented 3 years ago

Can I just use the debug version in the meantime, or is there another workaround I could apply?

iwilltry42 commented 3 years ago

@tglunde, feel free to use the debug version in the meantime, until there's another positive reply for it (then I'll create a real release from it). The debug version is not much different from a normal release; in fact, I only changed about 5 lines (https://github.com/rancher/k3d/commit/97451e11cb6444c6a38174d5a6e6e6ef5286aaf9).

tglunde commented 3 years ago

@iwilltry42 Cool, thank you so much for your efforts; highly appreciated!

jgresty commented 3 years ago

Let's wait and see what others observe with the debug build.

I can also confirm that the debug build posted worked perfectly on my setup (5.12.10-arch1-1)

nguyenvulong commented 3 years ago

Well, this is strange... I read the code change, and only this line matters, doesn't it?

startTime = ts.Truncate(time.Second)

iwilltry42 commented 3 years ago

Let's wait and see what others observe with the debug build.

I can also confirm that the debug build posted worked perfectly on my setup (5.12.10-arch1-1)

@jgresty Awesome!

Well, this is strange... I read the code change, and only this line matters, doesn't it?

startTime = ts.Truncate(time.Second)

@nguyenvulong Exactly. The fact that it works now confirms my assumption that this is caused by a rounding issue in the "since" timestamp used to start checking the logs. E.g. the original post shows this log line from k3d: TRAC[0006] NodeWaitForLogMessage: Node 'k3d-k3s-default-serverlb' waiting for log message 'start worker processes' since '2021-06-03 13:20:56.510628562 +0000 UTC'. That timestamp, 2021-06-03 13:20:56.510628562 +0000 UTC, would effectively be rounded to '2021-06-03 13:20:57 +0000 UTC', so the process would only start looking for the log line after it had already appeared: 2021/06/03 13:20:56 [notice] 14#14: start worker processes. The "fix" startTime = ts.Truncate(time.Second) simply drops the subsecond precision, so watching cannot start later than the beginning of that second.

This could introduce an edge case where the loadbalancer is in a restart loop and emits that log line in exactly those few extra milliseconds that are now included in the log check. However, this shouldn't happen, as the log won't show up if the setup is in a broken state that causes the restart loop, and, additionally, we check for the Restarting status.
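
To make the suspected race concrete, here is a small, self-contained Go snippet (not taken from k3d) that replays the comparison using the start timestamp from the original report. The exact instant at which nginx emitted the marker line is unknown, so the value below is a hypothetical one that falls within the same second but before k3d's recorded start time.

package main

import (
	"fmt"
	"time"
)

func main() {
	// The timestamp k3d recorded before it started watching (from the original report).
	start, _ := time.Parse(time.RFC3339Nano, "2021-06-03T13:20:56.510628562Z")

	// Hypothetical instant at which nginx emitted "start worker processes",
	// inside the same second but a few hundred milliseconds earlier.
	marker, _ := time.Parse(time.RFC3339Nano, "2021-06-03T13:20:56.300000000Z")

	// Filtering logs with the raw, subsecond start time excludes the marker line,
	// so the watcher waits forever.
	fmt.Println("raw since sees marker:      ", !marker.Before(start)) // false

	// The fix drops the subsecond precision, so anything logged within that
	// second is back inside the window.
	truncated := start.Truncate(time.Second) // 2021-06-03 13:20:56 +0000 UTC
	fmt.Println("truncated since sees marker:", !marker.Before(truncated)) // true
}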

prein commented 2 years ago

Is this considered resolved? Stale? I think I'm running into that issue.

Linux 5.13.0-39-generic x86_64
Docker version 20.10.14
k3d version v5.4.0
k3s version v1.22.7-k3s1 (default)

Running the following command

k3d cluster create mycluster -p "8081:80@loadbalancer" --agents 2 --trace 

Output (last lines before it hangs forever)

TRAC[0006] Starting node 'k3d-mycluster3-agent-1'       
TRAC[0006] Starting node 'k3d-mycluster3-agent-0'       
INFO[0006] Starting Node 'k3d-mycluster3-agent-1'       
INFO[0006] Starting Node 'k3d-mycluster3-agent-0'       
DEBU[0006] Truncated 2022-03-30 11:53:23.010548487 +0000 UTC to 2022-03-30 11:53:23 +0000 UTC 
DEBU[0006] Waiting for node k3d-mycluster3-agent-0 to get ready (Log: 'Successfully registered node') 
TRAC[0006] NodeWaitForLogMessage: Node 'k3d-mycluster3-agent-0' waiting for log message 'Successfully registered node' since '2022-03-30 11:53:23 +0000 UTC' 
DEBU[0006] Truncated 2022-03-30 11:53:23.115008808 +0000 UTC to 2022-03-30 11:53:23 +0000 UTC 
DEBU[0006] Waiting for node k3d-mycluster3-agent-1 to get ready (Log: 'Successfully registered node') 
TRAC[0006] NodeWaitForLogMessage: Node 'k3d-mycluster3-agent-1' waiting for log message 'Successfully registered node' since '2022-03-30 11:53:23 +0000 UTC' 

docker ps

$ docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED          STATUS          PORTS                    NAMES
e671a034e568   rancher/k3s:v1.22.7-k3s1   "/bin/k3s agent"         46 seconds ago   Up 38 seconds                            k3d-mycluster-agent-1
894448f38f42   rancher/k3s:v1.22.7-k3s1   "/bin/k3s agent"         46 seconds ago   Up 38 seconds                            k3d-mycluster-agent-0
1e2edaf1319e   rancher/k3s:v1.22.7-k3s1   "/bin/k3s server --t…"   47 seconds ago   Up 44 seconds                            k3d-mycluster-server-0
c6a3f5dc38f5   registry:2                 "/entrypoint.sh /etc…"   23 hours ago     Up 2 hours      0.0.0.0:5000->5000/tcp   k3d-registry.localhost

k3d node list

$ k3d node list
NAME                     ROLE           CLUSTER     STATUS
k3d-mycluster-agent-0    agent          mycluster   running
k3d-mycluster-agent-1    agent          mycluster   running
k3d-mycluster-server-0   server         mycluster   running
k3d-mycluster-serverlb   loadbalancer   mycluster   created
k3d-registry.localhost   registry                   running

server container error logs

$ docker logs k3d-mycluster-server-0 2>&1 | grep -E '^E'
E0330 12:07:45.339923       7 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.43.0.1"}: failed to allocate IP 10.43.0.1: cannot allocate resources of type serviceipallocations at this time
E0330 12:07:45.341161       7 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.22.0.2, ResourceVersion: 0, AdditionalErrorMsg: 
E0330 12:07:51.639144       7 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
E0330 12:07:53.355881       7 controllermanager.go:419] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0330 12:07:59.876489       7 namespaced_resources_deleter.go:161] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0330 12:08:01.230181       7 resource_quota_controller.go:162] initial discovery check failure, continuing and counting on future sync update: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0330 12:08:01.236734       7 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0330 12:08:02.340606       7 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0330 12:08:02.347614       7 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0330 12:08:04.032992       7 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
E0330 12:08:32.639175       7 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0330 12:09:02.652890       7 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0330 12:09:04.034107       7 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
E0330 12:09:32.665580       7 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0330 12:10:02.683043       7 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0330 12:10:32.697176       7 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0330 12:11:02.708975       7 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0330 12:11:04.034324       7 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable

Server logs go like this in an endless loop:

time="2022-03-30T12:17:08Z" level=info msg="Waiting for control-plane node k3d-mycluster-server-0 startup: nodes \"k3d-mycluster-server-0\" not found"
time="2022-03-30T12:17:09Z" level=info msg="Waiting for control-plane node k3d-mycluster-server-0 startup: nodes \"k3d-mycluster-server-0\" not found"
time="2022-03-30T12:17:09Z" level=info msg="certificate CN=k3d-mycluster-agent-0 signed by CN=k3s-server-ca@1648642062: notBefore=2022-03-30 12:07:42 +0000 UTC notAfter=2023-03-30 12:17:09 +0000 UTC"
time="2022-03-30T12:17:09Z" level=info msg="certificate CN=k3d-mycluster-agent-1 signed by CN=k3s-server-ca@1648642062: notBefore=2022-03-30 12:07:42 +0000 UTC notAfter=2023-03-30 12:17:09 +0000 UTC"
time="2022-03-30T12:17:09Z" level=info msg="certificate CN=system:node:k3d-mycluster-agent-0,O=system:nodes signed by CN=k3s-client-ca@1648642062: notBefore=2022-03-30 12:07:42 +0000 UTC notAfter=2023-03-30 12:17:09 +0000 UTC"
time="2022-03-30T12:17:09Z" level=info msg="certificate CN=system:node:k3d-mycluster-agent-1,O=system:nodes signed by CN=k3s-client-ca@1648642062: notBefore=2022-03-30 12:07:42 +0000 UTC notAfter=2023-03-30 12:17:09 +0000 UTC"
time="2022-03-30T12:17:10Z" level=info msg="Waiting for control-plane node k3d-mycluster-server-0 startup: nodes \"k3d-mycluster-server-0\" not found"
time="2022-03-30T12:17:11Z" level=info msg="Waiting for control-plane node k3d-mycluster-server-0 startup: nodes \"k3d-mycluster-server-0\" not found"
time="2022-03-30T12:17:12Z" level=info msg="Waiting for control-plane node k3d-mycluster-server-0 startup: nodes \"k3d-mycluster-server-0\" not found"
time="2022-03-30T12:17:12Z" level=info msg="certificate CN=k3d-mycluster-server-0 signed by CN=k3s-server-ca@1648642062: notBefore=2022-03-30 12:07:42 +0000 UTC notAfter=2023-03-30 12:17:12 +0000 UTC"
time="2022-03-30T12:17:12Z" level=info msg="certificate CN=system:node:k3d-mycluster-server-0,O=system:nodes signed by CN=k3s-client-ca@1648642062: notBefore=2022-03-30 12:07:42 +0000 UTC notAfter=2023-03-30 12:17:12 +0000 UTC"
time="2022-03-30T12:17:12Z" level=info msg="Waiting to retrieve agent configuration; server is not ready: \"overlayfs\" snapshotter cannot be enabled for \"/var/lib/rancher/k3s/agent/containerd\", try using \"fuse-overlayfs\" or \"native\": failed to mount overlay: invalid argument"

Interestingly, it seems to run without issues if I don't create agents (i.e., run the k3d cluster create mycluster command without the --agents flag). After that I can communicate with the cluster using kubectl. Attempting to add agent nodes with the k3d node create agent -c mycluster --role agent command leads to the command hanging forever.

prein commented 2 years ago

It seems that my issue may not have been related after all, despite the similarity; judging by the logs, for me the problem was that k3s won't run on ZFS without workarounds.