k3d-io / k3d

Little helper to run CNCF's k3s in Docker
https://k3d.io/
MIT License

[BUG] k3d-podiyumm-server-0 constantly restarting after cluster creation #722

Closed. typekpb closed this issue 3 years ago.

typekpb commented 3 years ago

What did you do

Created a k3d cluster (podiyumm).

What did you expect to happen

To be able to list all the pods.
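The expectation above can be checked with a standard kubectl query (cluster and context names are assumptions based on the container name in the logs):

```shell
# Point kubectl at the k3d-managed context and list pods in all namespaces.
# With the server container crash-looping, this call times out or errors out.
kubectl --context k3d-podiyumm get pods --all-namespaces
```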

Screenshots or terminal output

The Docker container k3d-podiyumm-server-0 (image: docker.io/rancher/k3s:latest) keeps restarting, with the following output in its logs:

I0828 04:53:07.187440       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:07.187526       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

time="2021-08-28T04:53:07.208827200Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"

time="2021-08-28T04:53:07.210072200Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"

time="2021-08-28T04:53:07.211942000Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"

time="2021-08-28T04:53:07.211984900Z" level=info msg="To join node to cluster: k3s agent -s https://172.29.0.2:6443 -t ${NODE_TOKEN}"

time="2021-08-28T04:53:07.213462700Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"

time="2021-08-28T04:53:07.213667900Z" level=info msg="Run: k3s kubectl"

time="2021-08-28T04:53:07.215033600Z" level=info msg="Waiting for API server to become available"

time="2021-08-28T04:53:07.237411100Z" level=info msg="Cluster-Http-Server 2021/08/28 04:53:07 http: TLS handshake error from 127.0.0.1:54500: remote error: tls: bad certificate"

time="2021-08-28T04:53:07.244905900Z" level=info msg="Cluster-Http-Server 2021/08/28 04:53:07 http: TLS handshake error from 127.0.0.1:54510: remote error: tls: bad certificate"

time="2021-08-28T04:53:07.256227600Z" level=info msg="certificate CN=k3d-podiyumm-server-0 signed by CN=k3s-server-ca@1630126376: notBefore=2021-08-28 04:52:56 +0000 UTC notAfter=2022-08-28 04:53:07 +0000 UTC"

time="2021-08-28T04:53:07.259408300Z" level=info msg="certificate CN=system:node:k3d-podiyumm-server-0,O=system:nodes signed by CN=k3s-client-ca@1630126376: notBefore=2021-08-28 04:52:56 +0000 UTC notAfter=2022-08-28 04:53:07 +0000 UTC"

time="2021-08-28T04:53:07.273607000Z" level=info msg="Module overlay was already loaded"

time="2021-08-28T04:53:07.273694600Z" level=info msg="Module nf_conntrack was already loaded"

time="2021-08-28T04:53:07.274459200Z" level=warning msg="Failed to start br_netfilter module"

time="2021-08-28T04:53:07.274897900Z" level=warning msg="Failed to start iptable_nat module"

time="2021-08-28T04:53:07.279166600Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"

time="2021-08-28T04:53:07.279777100Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"

time="2021-08-28T04:53:08.286996300Z" level=info msg="Containerd is now running"

time="2021-08-28T04:53:08.293548000Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"

time="2021-08-28T04:53:08.295386500Z" level=info msg="Handling backend connection request [k3d-podiyumm-server-0]"

time="2021-08-28T04:53:08.296430500Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"

time="2021-08-28T04:53:08.296713300Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-podiyumm-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"

time="2021-08-28T04:53:08.297515200Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-podiyumm-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"

Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.

Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.

Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.

Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.

I0828 04:53:08.300508       7 server.go:436] "Kubelet version" kubeletVersion="v1.21.0+k3s1"

W0828 04:53:08.317515       7 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.

W0828 04:53:08.317781       7 proxier.go:653] Failed to read file /lib/modules/5.10.47-linuxkit/modules.builtin with error open /lib/modules/5.10.47-linuxkit/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:08.318457       7 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

I0828 04:53:08.319397       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt

W0828 04:53:08.327873       7 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:08.328574       7 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:08.329313       7 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:08.329669       7 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

E0828 04:53:08.341441       7 node.go:161] Failed to retrieve node info: nodes "k3d-podiyumm-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope

I0828 04:53:09.022772       7 secure_serving.go:197] Serving securely on 127.0.0.1:6444

I0828 04:53:09.022924       7 autoregister_controller.go:141] Starting autoregister controller

I0828 04:53:09.022961       7 cache.go:32] Waiting for caches to sync for autoregister controller

I0828 04:53:09.023214       7 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt

I0828 04:53:09.023329       7 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key

I0828 04:53:09.023452       7 tlsconfig.go:240] Starting DynamicServingCertificateController

I0828 04:53:09.026899       7 customresource_discovery_controller.go:209] Starting DiscoveryController

I0828 04:53:09.027014       7 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key

I0828 04:53:09.027131       7 controller.go:83] Starting OpenAPI AggregationController

I0828 04:53:09.027254       7 apf_controller.go:294] Starting API Priority and Fairness config controller

I0828 04:53:09.027745       7 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller

I0828 04:53:09.027799       7 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller

I0828 04:53:09.027894       7 available_controller.go:475] Starting AvailableConditionController

I0828 04:53:09.027955       7 cache.go:32] Waiting for caches to sync for AvailableConditionController controller

I0828 04:53:09.028336       7 apiservice_controller.go:97] Starting APIServiceRegistrationController

I0828 04:53:09.028389       7 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller

I0828 04:53:09.028607       7 crdregistration_controller.go:111] Starting crd-autoregister controller

I0828 04:53:09.028660       7 shared_informer.go:240] Waiting for caches to sync for crd-autoregister

I0828 04:53:09.028814       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt

I0828 04:53:09.036211       7 controller.go:86] Starting OpenAPI controller

I0828 04:53:09.036306       7 naming_controller.go:291] Starting NamingConditionController

I0828 04:53:09.036418       7 establishing_controller.go:76] Starting EstablishingController

I0828 04:53:09.036553       7 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController

I0828 04:53:09.036663       7 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController

I0828 04:53:09.036747       7 crd_finalizer.go:266] Starting CRDFinalizer

I0828 04:53:09.038412       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt

I0828 04:53:09.038571       7 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt

time="2021-08-28T04:53:09.090540000Z" level=info msg="Running cloud-controller-manager with args --profiling=false"

I0828 04:53:09.092670       7 controllermanager.go:142] Version: v1.21.0+k3s1

E0828 04:53:09.125735       7 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service

I0828 04:53:09.132685       7 cache.go:39] Caches are synced for autoregister controller

I0828 04:53:09.133878       7 shared_informer.go:247] Caches are synced for node_authorizer 

I0828 04:53:09.134071       7 apf_controller.go:299] Running API Priority and Fairness config worker

I0828 04:53:09.134695       7 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 

I0828 04:53:09.134746       7 cache.go:39] Caches are synced for AvailableConditionController controller

I0828 04:53:09.136929       7 shared_informer.go:247] Caches are synced for crd-autoregister 

I0828 04:53:09.153702       7 cache.go:39] Caches are synced for APIServiceRegistrationController controller

I0828 04:53:09.173217       7 controller.go:611] quota admission added evaluator for: namespaces

I0828 04:53:09.190923       7 trace.go:205] Trace[413468906]: "Create" url:/api/v1/namespaces,user-agent:kubectl/v1.21.3 (darwin/amd64) kubernetes/ca643a4,client:172.29.0.3,accept:application/json, */*,protocol:HTTP/1.1 (28-Aug-2021 04:53:07.299) (total time: 1891ms):

Trace[413468906]: ---"Object stored in database" 1889ms (04:53:00.190)

Trace[413468906]: [1.8910987s] [1.8910987s] END

I0828 04:53:09.473527       7 node.go:172] Successfully retrieved node IP: 172.29.0.2

I0828 04:53:09.473783       7 server_others.go:141] Detected node IP 172.29.0.2

I0828 04:53:09.475295       7 server_others.go:207] kube-proxy running in dual-stack mode, IPv4-primary

I0828 04:53:09.475482       7 server_others.go:213] Using iptables Proxier.

I0828 04:53:09.475657       7 server_others.go:220] creating dualStackProxier for iptables.

W0828 04:53:09.475834       7 server_others.go:513] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6

I0828 04:53:09.476458       7 server.go:643] Version: v1.21.0+k3s1

W0828 04:53:09.476853       7 sysinfo.go:203] Nodes topology is not available, providing CPU topology

I0828 04:53:09.477687       7 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072

F0828 04:53:09.477866       7 server.go:489] open /proc/sys/net/netfilter/nf_conntrack_max: permission denied

time="2021-08-28T04:53:10.208465600Z" level=info msg="Starting k3s v1.21.0+k3s1 (2705431d)"

time="2021-08-28T04:53:10.208945200Z" level=info msg="Cluster bootstrap already complete"

time="2021-08-28T04:53:10.220938500Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"

time="2021-08-28T04:53:10.220980300Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."

time="2021-08-28T04:53:10.221231300Z" level=info msg="Database tables and indexes are up to date"

time="2021-08-28T04:53:10.222071400Z" level=info msg="Kine listening on unix://kine.sock"

time="2021-08-28T04:53:10.222362500Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.

I0828 04:53:10.223327       6 server.go:656] external host was not specified, using 172.29.0.2

I0828 04:53:10.223563       6 server.go:195] Version: v1.21.0+k3s1

I0828 04:53:10.226816       6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:10.226845       6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

I0828 04:53:10.228020       6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:10.228051       6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

I0828 04:53:10.229814       6 shared_informer.go:240] Waiting for caches to sync for node_authorizer

I0828 04:53:10.257413       6 instance.go:283] Using reconciler: lease

I0828 04:53:10.296323       6 rest.go:130] the default service ipfamily for this cluster is: IPv4

W0828 04:53:10.611388       6 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:10.618608       6 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:10.621019       6 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:10.625078       6 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:10.626859       6 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:10.630697       6 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.

W0828 04:53:10.630806       6 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.

I0828 04:53:10.637649       6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:10.638022       6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

time="2021-08-28T04:53:10.654617400Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"

time="2021-08-28T04:53:10.655196200Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"

time="2021-08-28T04:53:10.656512400Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"

time="2021-08-28T04:53:10.656551100Z" level=info msg="To join node to cluster: k3s agent -s https://172.29.0.2:6443 -t ${NODE_TOKEN}"

time="2021-08-28T04:53:10.657538300Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"

time="2021-08-28T04:53:10.657600300Z" level=info msg="Run: k3s kubectl"

time="2021-08-28T04:53:10.658623800Z" level=info msg="Waiting for API server to become available"

time="2021-08-28T04:53:10.680968300Z" level=info msg="Cluster-Http-Server 2021/08/28 04:53:10 http: TLS handshake error from 127.0.0.1:54600: remote error: tls: bad certificate"

time="2021-08-28T04:53:10.686070200Z" level=info msg="Cluster-Http-Server 2021/08/28 04:53:10 http: TLS handshake error from 127.0.0.1:54610: remote error: tls: bad certificate"

time="2021-08-28T04:53:10.698435300Z" level=info msg="certificate CN=k3d-podiyumm-server-0 signed by CN=k3s-server-ca@1630126376: notBefore=2021-08-28 04:52:56 +0000 UTC notAfter=2022-08-28 04:53:10 +0000 UTC"

time="2021-08-28T04:53:10.701849800Z" level=info msg="certificate CN=system:node:k3d-podiyumm-server-0,O=system:nodes signed by CN=k3s-client-ca@1630126376: notBefore=2021-08-28 04:52:56 +0000 UTC notAfter=2022-08-28 04:53:10 +0000 UTC"

time="2021-08-28T04:53:10.713815400Z" level=info msg="Module overlay was already loaded"

time="2021-08-28T04:53:10.713953600Z" level=info msg="Module nf_conntrack was already loaded"

time="2021-08-28T04:53:10.714647100Z" level=warning msg="Failed to start br_netfilter module"

time="2021-08-28T04:53:10.715079500Z" level=warning msg="Failed to start iptable_nat module"

time="2021-08-28T04:53:10.718905500Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"

time="2021-08-28T04:53:10.719483200Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"

time="2021-08-28T04:53:11.725323500Z" level=info msg="Containerd is now running"

time="2021-08-28T04:53:11.731109600Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"

time="2021-08-28T04:53:11.732788100Z" level=info msg="Handling backend connection request [k3d-podiyumm-server-0]"

time="2021-08-28T04:53:11.733333800Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"

time="2021-08-28T04:53:11.733385900Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-podiyumm-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"

time="2021-08-28T04:53:11.734039900Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-podiyumm-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"

Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.

Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.

Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.

Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.

I0828 04:53:11.736262       6 server.go:436] "Kubelet version" kubeletVersion="v1.21.0+k3s1"

W0828 04:53:11.753013       6 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.

W0828 04:53:11.753454       6 proxier.go:653] Failed to read file /lib/modules/5.10.47-linuxkit/modules.builtin with error open /lib/modules/5.10.47-linuxkit/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:11.753950       6 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:11.754388       6 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:11.754774       6 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

I0828 04:53:11.754960       6 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt

W0828 04:53:11.755327       6 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:11.755719       6 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

E0828 04:53:11.764381       6 node.go:161] Failed to retrieve node info: nodes "k3d-podiyumm-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope

I0828 04:53:12.496300       6 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt

I0828 04:53:12.496707       6 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt

I0828 04:53:12.497103       6 secure_serving.go:197] Serving securely on 127.0.0.1:6444

I0828 04:53:12.497473       6 apiservice_controller.go:97] Starting APIServiceRegistrationController

I0828 04:53:12.497516       6 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller

I0828 04:53:12.497588       6 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key

I0828 04:53:12.498048       6 tlsconfig.go:240] Starting DynamicServingCertificateController

I0828 04:53:12.498998       6 apf_controller.go:294] Starting API Priority and Fairness config controller

I0828 04:53:12.499258       6 customresource_discovery_controller.go:209] Starting DiscoveryController

I0828 04:53:12.499949       6 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key

I0828 04:53:12.500206       6 controller.go:83] Starting OpenAPI AggregationController

I0828 04:53:12.500964       6 autoregister_controller.go:141] Starting autoregister controller

I0828 04:53:12.501072       6 cache.go:32] Waiting for caches to sync for autoregister controller

I0828 04:53:12.502882       6 controller.go:86] Starting OpenAPI controller

I0828 04:53:12.502924       6 naming_controller.go:291] Starting NamingConditionController

I0828 04:53:12.503045       6 establishing_controller.go:76] Starting EstablishingController

I0828 04:53:12.503077       6 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController

I0828 04:53:12.503219       6 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController

I0828 04:53:12.503260       6 crd_finalizer.go:266] Starting CRDFinalizer

I0828 04:53:12.503915       6 available_controller.go:475] Starting AvailableConditionController

I0828 04:53:12.504152       6 cache.go:32] Waiting for caches to sync for AvailableConditionController controller

I0828 04:53:12.504386       6 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt

I0828 04:53:12.504523       6 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt

I0828 04:53:12.504665       6 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller

I0828 04:53:12.504783       6 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller

I0828 04:53:12.504974       6 crdregistration_controller.go:111] Starting crd-autoregister controller

I0828 04:53:12.505091       6 shared_informer.go:240] Waiting for caches to sync for crd-autoregister

time="2021-08-28T04:53:12.548167200Z" level=info msg="Running cloud-controller-manager with args --profiling=false"

I0828 04:53:12.550339       6 controllermanager.go:142] Version: v1.21.0+k3s1

E0828 04:53:12.591514       6 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service

I0828 04:53:12.597576       6 cache.go:39] Caches are synced for APIServiceRegistrationController controller

I0828 04:53:12.601451       6 cache.go:39] Caches are synced for autoregister controller

I0828 04:53:12.604492       6 cache.go:39] Caches are synced for AvailableConditionController controller

I0828 04:53:12.607529       6 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 

I0828 04:53:12.607668       6 shared_informer.go:247] Caches are synced for crd-autoregister 

I0828 04:53:12.609993       6 apf_controller.go:299] Running API Priority and Fairness config worker

I0828 04:53:12.630478       6 shared_informer.go:247] Caches are synced for node_authorizer 

I0828 04:53:12.875744       6 node.go:172] Successfully retrieved node IP: 172.29.0.2

I0828 04:53:12.875777       6 server_others.go:141] Detected node IP 172.29.0.2

I0828 04:53:12.880656       6 server_others.go:207] kube-proxy running in dual-stack mode, IPv4-primary

I0828 04:53:12.880688       6 server_others.go:213] Using iptables Proxier.

I0828 04:53:12.880703       6 server_others.go:220] creating dualStackProxier for iptables.

W0828 04:53:12.880716       6 server_others.go:513] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6

I0828 04:53:12.881117       6 server.go:643] Version: v1.21.0+k3s1

W0828 04:53:12.881487       6 sysinfo.go:203] Nodes topology is not available, providing CPU topology

I0828 04:53:12.882174       6 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072

F0828 04:53:12.882212       6 server.go:489] open /proc/sys/net/netfilter/nf_conntrack_max: permission denied

time="2021-08-28T04:53:13.821231900Z" level=info msg="Starting k3s v1.21.0+k3s1 (2705431d)"

time="2021-08-28T04:53:13.821903900Z" level=info msg="Cluster bootstrap already complete"

time="2021-08-28T04:53:13.833174900Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"

time="2021-08-28T04:53:13.833213900Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."

time="2021-08-28T04:53:13.833485300Z" level=info msg="Database tables and indexes are up to date"

time="2021-08-28T04:53:13.834304900Z" level=info msg="Kine listening on unix://kine.sock"

time="2021-08-28T04:53:13.834466300Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.

I0828 04:53:13.835437       9 server.go:656] external host was not specified, using 172.29.0.2

I0828 04:53:13.835785       9 server.go:195] Version: v1.21.0+k3s1

I0828 04:53:13.838702       9 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:13.838730       9 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

I0828 04:53:13.839618       9 shared_informer.go:240] Waiting for caches to sync for node_authorizer

I0828 04:53:13.840062       9 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:13.840090       9 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

I0828 04:53:13.865253       9 instance.go:283] Using reconciler: lease

I0828 04:53:13.906932       9 rest.go:130] the default service ipfamily for this cluster is: IPv4

W0828 04:53:14.236887       9 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:14.244567       9 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:14.248111       9 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:14.252567       9 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:14.254421       9 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:14.258966       9 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.

W0828 04:53:14.259119       9 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.

I0828 04:53:14.265381       9 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:14.265603       9 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

time="2021-08-28T04:53:14.283938600Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"

time="2021-08-28T04:53:14.284719300Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"

time="2021-08-28T04:53:14.285821100Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"

time="2021-08-28T04:53:14.286207300Z" level=info msg="To join node to cluster: k3s agent -s https://172.29.0.2:6443 -t ${NODE_TOKEN}"

time="2021-08-28T04:53:14.287148200Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"

time="2021-08-28T04:53:14.287378600Z" level=info msg="Run: k3s kubectl"

time="2021-08-28T04:53:14.288025200Z" level=info msg="Waiting for API server to become available"

time="2021-08-28T04:53:14.302619100Z" level=info msg="Cluster-Http-Server 2021/08/28 04:53:14 http: TLS handshake error from 127.0.0.1:54712: remote error: tls: bad certificate"

time="2021-08-28T04:53:14.307711000Z" level=info msg="Cluster-Http-Server 2021/08/28 04:53:14 http: TLS handshake error from 127.0.0.1:54722: remote error: tls: bad certificate"

time="2021-08-28T04:53:14.317750900Z" level=info msg="certificate CN=k3d-podiyumm-server-0 signed by CN=k3s-server-ca@1630126376: notBefore=2021-08-28 04:52:56 +0000 UTC notAfter=2022-08-28 04:53:14 +0000 UTC"

time="2021-08-28T04:53:14.321511900Z" level=info msg="certificate CN=system:node:k3d-podiyumm-server-0,O=system:nodes signed by CN=k3s-client-ca@1630126376: notBefore=2021-08-28 04:52:56 +0000 UTC notAfter=2022-08-28 04:53:14 +0000 UTC"

time="2021-08-28T04:53:14.334580500Z" level=info msg="Module overlay was already loaded"

time="2021-08-28T04:53:14.334635100Z" level=info msg="Module nf_conntrack was already loaded"

time="2021-08-28T04:53:14.335047400Z" level=warning msg="Failed to start br_netfilter module"

time="2021-08-28T04:53:14.335455800Z" level=warning msg="Failed to start iptable_nat module"

time="2021-08-28T04:53:14.340213000Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"

time="2021-08-28T04:53:14.340785500Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"

time="2021-08-28T04:53:15.351655900Z" level=info msg="Containerd is now running"

time="2021-08-28T04:53:15.358925900Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"

time="2021-08-28T04:53:15.360752400Z" level=info msg="Handling backend connection request [k3d-podiyumm-server-0]"

time="2021-08-28T04:53:15.361701600Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"

time="2021-08-28T04:53:15.361876500Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-podiyumm-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"

time="2021-08-28T04:53:15.362954900Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-podiyumm-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"

Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.

Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.

Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.

Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.

I0828 04:53:15.366368       9 server.go:436] "Kubelet version" kubeletVersion="v1.21.0+k3s1"

W0828 04:53:15.375639       9 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.

W0828 04:53:15.376096       9 proxier.go:653] Failed to read file /lib/modules/5.10.47-linuxkit/modules.builtin with error open /lib/modules/5.10.47-linuxkit/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:15.376985       9 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:15.377496       9 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:15.378056       9 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:15.378567       9 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:15.379279       9 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

I0828 04:53:15.400106       9 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt

E0828 04:53:15.403823       9 node.go:161] Failed to retrieve node info: nodes "k3d-podiyumm-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope

I0828 04:53:16.066636       9 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt

I0828 04:53:16.066703       9 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt

I0828 04:53:16.067108       9 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key

I0828 04:53:16.067747       9 secure_serving.go:197] Serving securely on 127.0.0.1:6444

I0828 04:53:16.068431       9 tlsconfig.go:240] Starting DynamicServingCertificateController

I0828 04:53:16.069162       9 apf_controller.go:294] Starting API Priority and Fairness config controller

I0828 04:53:16.069748       9 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller

I0828 04:53:16.069775       9 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller

I0828 04:53:16.067826       9 controller.go:83] Starting OpenAPI AggregationController

I0828 04:53:16.074778       9 apiservice_controller.go:97] Starting APIServiceRegistrationController

I0828 04:53:16.074942       9 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller

I0828 04:53:16.075341       9 autoregister_controller.go:141] Starting autoregister controller

I0828 04:53:16.075444       9 cache.go:32] Waiting for caches to sync for autoregister controller

I0828 04:53:16.079054       9 customresource_discovery_controller.go:209] Starting DiscoveryController

I0828 04:53:16.079432       9 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key

I0828 04:53:16.079647       9 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt

I0828 04:53:16.079771       9 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt

I0828 04:53:16.079994       9 available_controller.go:475] Starting AvailableConditionController

I0828 04:53:16.080149       9 cache.go:32] Waiting for caches to sync for AvailableConditionController controller

I0828 04:53:16.080433       9 crdregistration_controller.go:111] Starting crd-autoregister controller

I0828 04:53:16.080548       9 shared_informer.go:240] Waiting for caches to sync for crd-autoregister

I0828 04:53:16.082193       9 controller.go:86] Starting OpenAPI controller

I0828 04:53:16.082314       9 naming_controller.go:291] Starting NamingConditionController

I0828 04:53:16.082488       9 establishing_controller.go:76] Starting EstablishingController

I0828 04:53:16.082631       9 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController

I0828 04:53:16.082770       9 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController

I0828 04:53:16.082904       9 crd_finalizer.go:266] Starting CRDFinalizer

time="2021-08-28T04:53:16.088060700Z" level=info msg="Running cloud-controller-manager with args --profiling=false"

I0828 04:53:16.090276       9 controllermanager.go:142] Version: v1.21.0+k3s1

I0828 04:53:16.140860       9 shared_informer.go:247] Caches are synced for node_authorizer 

E0828 04:53:16.164131       9 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service

I0828 04:53:16.169242       9 apf_controller.go:299] Running API Priority and Fairness config worker

I0828 04:53:16.170848       9 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 

I0828 04:53:16.175241       9 cache.go:39] Caches are synced for APIServiceRegistrationController controller

I0828 04:53:16.179658       9 cache.go:39] Caches are synced for autoregister controller

I0828 04:53:16.187683       9 cache.go:39] Caches are synced for AvailableConditionController controller

I0828 04:53:16.189931       9 shared_informer.go:247] Caches are synced for crd-autoregister 

I0828 04:53:16.478792       9 node.go:172] Successfully retrieved node IP: 172.29.0.2

I0828 04:53:16.478828       9 server_others.go:141] Detected node IP 172.29.0.2

I0828 04:53:16.482405       9 server_others.go:207] kube-proxy running in dual-stack mode, IPv4-primary

I0828 04:53:16.482438       9 server_others.go:213] Using iptables Proxier.

I0828 04:53:16.482454       9 server_others.go:220] creating dualStackProxier for iptables.

W0828 04:53:16.482578       9 server_others.go:513] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6

I0828 04:53:16.482964       9 server.go:643] Version: v1.21.0+k3s1

W0828 04:53:16.483309       9 sysinfo.go:203] Nodes topology is not available, providing CPU topology

I0828 04:53:16.483965       9 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072

F0828 04:53:16.484005       9 server.go:489] open /proc/sys/net/netfilter/nf_conntrack_max: permission denied

time="2021-08-28T04:53:17.784419500Z" level=info msg="Starting k3s v1.21.0+k3s1 (2705431d)"

time="2021-08-28T04:53:17.784999700Z" level=info msg="Cluster bootstrap already complete"

time="2021-08-28T04:53:17.794987400Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"

time="2021-08-28T04:53:17.795025700Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."

time="2021-08-28T04:53:17.795441000Z" level=info msg="Database tables and indexes are up to date"

time="2021-08-28T04:53:17.796610700Z" level=info msg="Kine listening on unix://kine.sock"

time="2021-08-28T04:53:17.796835100Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.

I0828 04:53:17.797805       6 server.go:656] external host was not specified, using 172.29.0.2

I0828 04:53:17.798068       6 server.go:195] Version: v1.21.0+k3s1

I0828 04:53:17.801553       6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:17.801582       6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

I0828 04:53:17.803043       6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:17.803071       6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

I0828 04:53:17.809789       6 shared_informer.go:240] Waiting for caches to sync for node_authorizer

I0828 04:53:17.829824       6 instance.go:283] Using reconciler: lease

I0828 04:53:17.873489       6 rest.go:130] the default service ipfamily for this cluster is: IPv4

W0828 04:53:18.169391       6 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:18.178688       6 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:18.181678       6 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:18.186798       6 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:18.188981       6 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:18.194376       6 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.

W0828 04:53:18.194410       6 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.

I0828 04:53:18.201696       6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:18.201714       6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

time="2021-08-28T04:53:18.217922500Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"

time="2021-08-28T04:53:18.218581800Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"

time="2021-08-28T04:53:18.219744700Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"

time="2021-08-28T04:53:18.219779500Z" level=info msg="To join node to cluster: k3s agent -s https://172.29.0.2:6443 -t ${NODE_TOKEN}"

time="2021-08-28T04:53:18.220571100Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"

time="2021-08-28T04:53:18.220698600Z" level=info msg="Run: k3s kubectl"

time="2021-08-28T04:53:18.221208500Z" level=info msg="Waiting for API server to become available"

time="2021-08-28T04:53:18.243404600Z" level=info msg="Cluster-Http-Server 2021/08/28 04:53:18 http: TLS handshake error from 127.0.0.1:54806: remote error: tls: bad certificate"

time="2021-08-28T04:53:18.252838500Z" level=info msg="Cluster-Http-Server 2021/08/28 04:53:18 http: TLS handshake error from 127.0.0.1:54816: remote error: tls: bad certificate"

time="2021-08-28T04:53:18.290805600Z" level=info msg="certificate CN=k3d-podiyumm-server-0 signed by CN=k3s-server-ca@1630126376: notBefore=2021-08-28 04:52:56 +0000 UTC notAfter=2022-08-28 04:53:18 +0000 UTC"

time="2021-08-28T04:53:18.296074300Z" level=info msg="certificate CN=system:node:k3d-podiyumm-server-0,O=system:nodes signed by CN=k3s-client-ca@1630126376: notBefore=2021-08-28 04:52:56 +0000 UTC notAfter=2022-08-28 04:53:18 +0000 UTC"

time="2021-08-28T04:53:18.370389400Z" level=info msg="Module overlay was already loaded"

time="2021-08-28T04:53:18.370642300Z" level=info msg="Module nf_conntrack was already loaded"

time="2021-08-28T04:53:18.371447200Z" level=warning msg="Failed to start br_netfilter module"

time="2021-08-28T04:53:18.372326300Z" level=warning msg="Failed to start iptable_nat module"

time="2021-08-28T04:53:18.377334500Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"

time="2021-08-28T04:53:18.377839000Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"

time="2021-08-28T04:53:19.394194600Z" level=info msg="Containerd is now running"

time="2021-08-28T04:53:19.400410000Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"

time="2021-08-28T04:53:19.402042900Z" level=info msg="Handling backend connection request [k3d-podiyumm-server-0]"

time="2021-08-28T04:53:19.402600400Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"

time="2021-08-28T04:53:19.402841200Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-podiyumm-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"

time="2021-08-28T04:53:19.403332100Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-podiyumm-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"

Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.

Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.

Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.

Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.

I0828 04:53:19.405677       6 server.go:436] "Kubelet version" kubeletVersion="v1.21.0+k3s1"

W0828 04:53:19.422382       6 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.

W0828 04:53:19.422731       6 proxier.go:653] Failed to read file /lib/modules/5.10.47-linuxkit/modules.builtin with error open /lib/modules/5.10.47-linuxkit/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:19.423197       6 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:19.423562       6 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:19.423959       6 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:19.424367       6 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:19.424908       6 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

I0828 04:53:19.429370       6 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt

E0828 04:53:19.438984       6 node.go:161] Failed to retrieve node info: nodes "k3d-podiyumm-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope

I0828 04:53:20.015933       6 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt

I0828 04:53:20.016109       6 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt

I0828 04:53:20.016561       6 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key

I0828 04:53:20.016909       6 secure_serving.go:197] Serving securely on 127.0.0.1:6444

I0828 04:53:20.017196       6 tlsconfig.go:240] Starting DynamicServingCertificateController

I0828 04:53:20.017428       6 controller.go:83] Starting OpenAPI AggregationController

I0828 04:53:20.017677       6 apf_controller.go:294] Starting API Priority and Fairness config controller

I0828 04:53:20.018463       6 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller

I0828 04:53:20.018488       6 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller

I0828 04:53:20.018517       6 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key

I0828 04:53:20.018611       6 available_controller.go:475] Starting AvailableConditionController

I0828 04:53:20.018700       6 cache.go:32] Waiting for caches to sync for AvailableConditionController controller

I0828 04:53:20.018818       6 customresource_discovery_controller.go:209] Starting DiscoveryController

I0828 04:53:20.019214       6 apiservice_controller.go:97] Starting APIServiceRegistrationController

I0828 04:53:20.019288       6 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller

I0828 04:53:20.021218       6 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt

I0828 04:53:20.021395       6 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt

I0828 04:53:20.022122       6 crdregistration_controller.go:111] Starting crd-autoregister controller

I0828 04:53:20.022146       6 shared_informer.go:240] Waiting for caches to sync for crd-autoregister

I0828 04:53:20.022318       6 controller.go:86] Starting OpenAPI controller

I0828 04:53:20.022513       6 naming_controller.go:291] Starting NamingConditionController

I0828 04:53:20.022869       6 establishing_controller.go:76] Starting EstablishingController

I0828 04:53:20.023031       6 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController

I0828 04:53:20.023182       6 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController

I0828 04:53:20.023223       6 crd_finalizer.go:266] Starting CRDFinalizer

I0828 04:53:20.018716       6 autoregister_controller.go:141] Starting autoregister controller

I0828 04:53:20.030223       6 cache.go:32] Waiting for caches to sync for autoregister controller

time="2021-08-28T04:53:20.087472400Z" level=info msg="Running cloud-controller-manager with args --profiling=false"

I0828 04:53:20.089742       6 controllermanager.go:142] Version: v1.21.0+k3s1

E0828 04:53:20.105955       6 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service

I0828 04:53:20.110050       6 shared_informer.go:247] Caches are synced for node_authorizer 

I0828 04:53:20.117898       6 apf_controller.go:299] Running API Priority and Fairness config worker

I0828 04:53:20.118549       6 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 

I0828 04:53:20.118868       6 cache.go:39] Caches are synced for AvailableConditionController controller

I0828 04:53:20.120739       6 cache.go:39] Caches are synced for APIServiceRegistrationController controller

I0828 04:53:20.136043       6 cache.go:39] Caches are synced for autoregister controller

I0828 04:53:20.136159       6 shared_informer.go:247] Caches are synced for crd-autoregister 

I0828 04:53:20.487587       6 node.go:172] Successfully retrieved node IP: 172.29.0.2

I0828 04:53:20.487771       6 server_others.go:141] Detected node IP 172.29.0.2

I0828 04:53:20.489038       6 server_others.go:207] kube-proxy running in dual-stack mode, IPv4-primary

I0828 04:53:20.489173       6 server_others.go:213] Using iptables Proxier.

I0828 04:53:20.489229       6 server_others.go:220] creating dualStackProxier for iptables.

W0828 04:53:20.489312       6 server_others.go:513] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6

I0828 04:53:20.489677       6 server.go:643] Version: v1.21.0+k3s1

W0828 04:53:20.490192       6 sysinfo.go:203] Nodes topology is not available, providing CPU topology

I0828 04:53:20.490925       6 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072

F0828 04:53:20.490994       6 server.go:489] open /proc/sys/net/netfilter/nf_conntrack_max: permission denied

time="2021-08-28T04:53:22.642544100Z" level=info msg="Starting k3s v1.21.0+k3s1 (2705431d)"

time="2021-08-28T04:53:22.643058800Z" level=info msg="Cluster bootstrap already complete"

time="2021-08-28T04:53:22.654214100Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"

time="2021-08-28T04:53:22.654253500Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."

time="2021-08-28T04:53:22.654549700Z" level=info msg="Database tables and indexes are up to date"

time="2021-08-28T04:53:22.655447100Z" level=info msg="Kine listening on unix://kine.sock"

time="2021-08-28T04:53:22.655613700Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.

I0828 04:53:22.656825       7 server.go:656] external host was not specified, using 172.29.0.2

I0828 04:53:22.657051       7 server.go:195] Version: v1.21.0+k3s1

I0828 04:53:22.660477       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:22.660504       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

I0828 04:53:22.661911       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:22.661939       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

I0828 04:53:22.663716       7 shared_informer.go:240] Waiting for caches to sync for node_authorizer

I0828 04:53:22.689712       7 instance.go:283] Using reconciler: lease

I0828 04:53:22.731935       7 rest.go:130] the default service ipfamily for this cluster is: IPv4

W0828 04:53:23.045386       7 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:23.052455       7 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:23.054999       7 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:23.059022       7 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:23.060857       7 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:23.065615       7 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.

W0828 04:53:23.065647       7 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.

I0828 04:53:23.071351       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:23.071368       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

time="2021-08-28T04:53:23.088840900Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"

time="2021-08-28T04:53:23.089503100Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"

time="2021-08-28T04:53:23.090587700Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"

time="2021-08-28T04:53:23.090624200Z" level=info msg="To join node to cluster: k3s agent -s https://172.29.0.2:6443 -t ${NODE_TOKEN}"

time="2021-08-28T04:53:23.091427500Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"

time="2021-08-28T04:53:23.091626800Z" level=info msg="Run: k3s kubectl"

time="2021-08-28T04:53:23.092187900Z" level=info msg="Waiting for API server to become available"

time="2021-08-28T04:53:23.106522500Z" level=info msg="Cluster-Http-Server 2021/08/28 04:53:23 http: TLS handshake error from 127.0.0.1:54900: remote error: tls: bad certificate"

time="2021-08-28T04:53:23.111863200Z" level=info msg="Cluster-Http-Server 2021/08/28 04:53:23 http: TLS handshake error from 127.0.0.1:54910: remote error: tls: bad certificate"

time="2021-08-28T04:53:23.123261100Z" level=info msg="certificate CN=k3d-podiyumm-server-0 signed by CN=k3s-server-ca@1630126376: notBefore=2021-08-28 04:52:56 +0000 UTC notAfter=2022-08-28 04:53:23 +0000 UTC"

time="2021-08-28T04:53:23.126627500Z" level=info msg="certificate CN=system:node:k3d-podiyumm-server-0,O=system:nodes signed by CN=k3s-client-ca@1630126376: notBefore=2021-08-28 04:52:56 +0000 UTC notAfter=2022-08-28 04:53:23 +0000 UTC"

time="2021-08-28T04:53:23.138140500Z" level=info msg="Module overlay was already loaded"

time="2021-08-28T04:53:23.138177400Z" level=info msg="Module nf_conntrack was already loaded"

time="2021-08-28T04:53:23.138638000Z" level=warning msg="Failed to start br_netfilter module"

time="2021-08-28T04:53:23.138953400Z" level=warning msg="Failed to start iptable_nat module"

time="2021-08-28T04:53:23.142833100Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"

time="2021-08-28T04:53:23.143340100Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"

time="2021-08-28T04:53:24.154340700Z" level=info msg="Containerd is now running"

time="2021-08-28T04:53:24.163255400Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"

time="2021-08-28T04:53:24.165929700Z" level=info msg="Handling backend connection request [k3d-podiyumm-server-0]"

time="2021-08-28T04:53:24.166501100Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"

time="2021-08-28T04:53:24.166554700Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-podiyumm-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"

time="2021-08-28T04:53:24.167174400Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-podiyumm-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"

Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.

Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.

Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.

Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.

I0828 04:53:24.169390       7 server.go:436] "Kubelet version" kubeletVersion="v1.21.0+k3s1"

W0828 04:53:24.187997       7 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.

W0828 04:53:24.188486       7 proxier.go:653] Failed to read file /lib/modules/5.10.47-linuxkit/modules.builtin with error open /lib/modules/5.10.47-linuxkit/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:24.188994       7 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:24.189401       7 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

I0828 04:53:24.189698       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt

W0828 04:53:24.189978       7 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:24.190347       7 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:24.190775       7 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

E0828 04:53:24.199970       7 node.go:161] Failed to retrieve node info: nodes "k3d-podiyumm-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope

I0828 04:53:24.933533       7 secure_serving.go:197] Serving securely on 127.0.0.1:6444

I0828 04:53:24.933732       7 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key

I0828 04:53:24.934012       7 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt

I0828 04:53:24.934184       7 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key

I0828 04:53:24.934311       7 tlsconfig.go:240] Starting DynamicServingCertificateController

I0828 04:53:24.935102       7 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller

I0828 04:53:24.935231       7 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller

I0828 04:53:24.935680       7 apiservice_controller.go:97] Starting APIServiceRegistrationController

I0828 04:53:24.935849       7 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller

I0828 04:53:24.936003       7 available_controller.go:475] Starting AvailableConditionController

I0828 04:53:24.936122       7 cache.go:32] Waiting for caches to sync for AvailableConditionController controller

I0828 04:53:24.936242       7 controller.go:83] Starting OpenAPI AggregationController

I0828 04:53:24.938898       7 customresource_discovery_controller.go:209] Starting DiscoveryController

I0828 04:53:24.939717       7 autoregister_controller.go:141] Starting autoregister controller

I0828 04:53:24.939893       7 cache.go:32] Waiting for caches to sync for autoregister controller

I0828 04:53:24.940021       7 apf_controller.go:294] Starting API Priority and Fairness config controller

I0828 04:53:24.940309       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt

I0828 04:53:24.941011       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt

I0828 04:53:24.941156       7 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt

I0828 04:53:24.943406       7 controller.go:86] Starting OpenAPI controller

I0828 04:53:24.943528       7 naming_controller.go:291] Starting NamingConditionController

I0828 04:53:24.943707       7 establishing_controller.go:76] Starting EstablishingController

I0828 04:53:24.943878       7 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController

I0828 04:53:24.944000       7 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController

I0828 04:53:24.944157       7 crd_finalizer.go:266] Starting CRDFinalizer

I0828 04:53:24.950216       7 crdregistration_controller.go:111] Starting crd-autoregister controller

I0828 04:53:24.950322       7 shared_informer.go:240] Waiting for caches to sync for crd-autoregister

time="2021-08-28T04:53:25.004176600Z" level=info msg="Running cloud-controller-manager with args --profiling=false"

I0828 04:53:25.006273       7 controllermanager.go:142] Version: v1.21.0+k3s1

E0828 04:53:25.027187       7 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service

I0828 04:53:25.038817       7 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 

I0828 04:53:25.038971       7 cache.go:39] Caches are synced for APIServiceRegistrationController controller

I0828 04:53:25.050756       7 cache.go:39] Caches are synced for AvailableConditionController controller

I0828 04:53:25.053356       7 cache.go:39] Caches are synced for autoregister controller

I0828 04:53:25.053581       7 apf_controller.go:299] Running API Priority and Fairness config worker

I0828 04:53:25.074057       7 shared_informer.go:247] Caches are synced for node_authorizer 

I0828 04:53:25.074565       7 shared_informer.go:247] Caches are synced for crd-autoregister 

I0828 04:53:25.364344       7 node.go:172] Successfully retrieved node IP: 172.29.0.2

I0828 04:53:25.364814       7 server_others.go:141] Detected node IP 172.29.0.2

I0828 04:53:25.366782       7 server_others.go:207] kube-proxy running in dual-stack mode, IPv4-primary

I0828 04:53:25.366923       7 server_others.go:213] Using iptables Proxier.

I0828 04:53:25.367061       7 server_others.go:220] creating dualStackProxier for iptables.

W0828 04:53:25.367199       7 server_others.go:513] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6

I0828 04:53:25.367646       7 server.go:643] Version: v1.21.0+k3s1

W0828 04:53:25.368157       7 sysinfo.go:203] Nodes topology is not available, providing CPU topology

I0828 04:53:25.369146       7 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072

F0828 04:53:25.369330       7 server.go:489] open /proc/sys/net/netfilter/nf_conntrack_max: permission denied

time="2021-08-28T04:53:29.127426600Z" level=info msg="Starting k3s v1.21.0+k3s1 (2705431d)"

time="2021-08-28T04:53:29.127906200Z" level=info msg="Cluster bootstrap already complete"

time="2021-08-28T04:53:29.137938000Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"

time="2021-08-28T04:53:29.137975100Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."

time="2021-08-28T04:53:29.138227800Z" level=info msg="Database tables and indexes are up to date"

time="2021-08-28T04:53:29.139035300Z" level=info msg="Kine listening on unix://kine.sock"

time="2021-08-28T04:53:29.139208000Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.

I0828 04:53:29.140417       7 server.go:656] external host was not specified, using 172.29.0.2

I0828 04:53:29.140769       7 server.go:195] Version: v1.21.0+k3s1

I0828 04:53:29.145697       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:29.145742       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

I0828 04:53:29.147588       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:29.147616       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

I0828 04:53:29.149520       7 shared_informer.go:240] Waiting for caches to sync for node_authorizer

I0828 04:53:29.178000       7 instance.go:283] Using reconciler: lease

I0828 04:53:29.219099       7 rest.go:130] the default service ipfamily for this cluster is: IPv4

W0828 04:53:29.535799       7 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:29.544342       7 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:29.547516       7 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:29.552869       7 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:29.555106       7 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:29.561770       7 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.

W0828 04:53:29.561861       7 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.

I0828 04:53:29.569271       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:29.569413       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

time="2021-08-28T04:53:29.582922600Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"

time="2021-08-28T04:53:29.583558600Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"

time="2021-08-28T04:53:29.584623500Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"

time="2021-08-28T04:53:29.584658800Z" level=info msg="To join node to cluster: k3s agent -s https://172.29.0.2:6443 -t ${NODE_TOKEN}"

time="2021-08-28T04:53:29.585573700Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"

time="2021-08-28T04:53:29.585635200Z" level=info msg="Run: k3s kubectl"

time="2021-08-28T04:53:29.586293100Z" level=info msg="Waiting for API server to become available"

time="2021-08-28T04:53:29.600990000Z" level=info msg="Cluster-Http-Server 2021/08/28 04:53:29 http: TLS handshake error from 127.0.0.1:54994: remote error: tls: bad certificate"

time="2021-08-28T04:53:29.605299300Z" level=info msg="Cluster-Http-Server 2021/08/28 04:53:29 http: TLS handshake error from 127.0.0.1:55004: remote error: tls: bad certificate"

time="2021-08-28T04:53:29.620769000Z" level=info msg="certificate CN=k3d-podiyumm-server-0 signed by CN=k3s-server-ca@1630126376: notBefore=2021-08-28 04:52:56 +0000 UTC notAfter=2022-08-28 04:53:29 +0000 UTC"

time="2021-08-28T04:53:29.627369400Z" level=info msg="certificate CN=system:node:k3d-podiyumm-server-0,O=system:nodes signed by CN=k3s-client-ca@1630126376: notBefore=2021-08-28 04:52:56 +0000 UTC notAfter=2022-08-28 04:53:29 +0000 UTC"

time="2021-08-28T04:53:29.652059800Z" level=info msg="Module overlay was already loaded"

time="2021-08-28T04:53:29.652110100Z" level=info msg="Module nf_conntrack was already loaded"

time="2021-08-28T04:53:29.652930200Z" level=warning msg="Failed to start br_netfilter module"

time="2021-08-28T04:53:29.653494000Z" level=warning msg="Failed to start iptable_nat module"

time="2021-08-28T04:53:29.663553100Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"

time="2021-08-28T04:53:29.664149700Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"

time="2021-08-28T04:53:30.676013500Z" level=info msg="Containerd is now running"

time="2021-08-28T04:53:30.685038600Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"

time="2021-08-28T04:53:30.686748500Z" level=info msg="Handling backend connection request [k3d-podiyumm-server-0]"

time="2021-08-28T04:53:30.687325400Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"

time="2021-08-28T04:53:30.687375400Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-podiyumm-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"

time="2021-08-28T04:53:30.688005000Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-podiyumm-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"

Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.

Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.

Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.

Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.

I0828 04:53:30.690534       7 server.go:436] "Kubelet version" kubeletVersion="v1.21.0+k3s1"

W0828 04:53:30.703569       7 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.

W0828 04:53:30.704061       7 proxier.go:653] Failed to read file /lib/modules/5.10.47-linuxkit/modules.builtin with error open /lib/modules/5.10.47-linuxkit/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:30.704649       7 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:30.715413       7 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:30.716091       7 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:30.716467       7 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:30.716850       7 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

I0828 04:53:30.722571       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt

E0828 04:53:30.727770       7 node.go:161] Failed to retrieve node info: nodes "k3d-podiyumm-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope

I0828 04:53:31.409914       7 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt

I0828 04:53:31.410086       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt

I0828 04:53:31.410283       7 secure_serving.go:197] Serving securely on 127.0.0.1:6444

I0828 04:53:31.410430       7 apf_controller.go:294] Starting API Priority and Fairness config controller

I0828 04:53:31.410473       7 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key

I0828 04:53:31.410628       7 customresource_discovery_controller.go:209] Starting DiscoveryController

I0828 04:53:31.410801       7 available_controller.go:475] Starting AvailableConditionController

I0828 04:53:31.410817       7 cache.go:32] Waiting for caches to sync for AvailableConditionController controller

I0828 04:53:31.410858       7 autoregister_controller.go:141] Starting autoregister controller

I0828 04:53:31.410869       7 cache.go:32] Waiting for caches to sync for autoregister controller

I0828 04:53:31.411195       7 tlsconfig.go:240] Starting DynamicServingCertificateController

I0828 04:53:31.412142       7 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller

I0828 04:53:31.412253       7 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller

I0828 04:53:31.413327       7 controller.go:83] Starting OpenAPI AggregationController

I0828 04:53:31.413783       7 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key

I0828 04:53:31.413926       7 apiservice_controller.go:97] Starting APIServiceRegistrationController

I0828 04:53:31.413950       7 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller

I0828 04:53:31.414322       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt

I0828 04:53:31.414459       7 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt

I0828 04:53:31.415650       7 controller.go:86] Starting OpenAPI controller

I0828 04:53:31.415694       7 naming_controller.go:291] Starting NamingConditionController

I0828 04:53:31.415819       7 establishing_controller.go:76] Starting EstablishingController

I0828 04:53:31.415861       7 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController

I0828 04:53:31.416003       7 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController

I0828 04:53:31.416104       7 crd_finalizer.go:266] Starting CRDFinalizer

I0828 04:53:31.416296       7 crdregistration_controller.go:111] Starting crd-autoregister controller

I0828 04:53:31.416319       7 shared_informer.go:240] Waiting for caches to sync for crd-autoregister

time="2021-08-28T04:53:31.473079300Z" level=info msg="Running cloud-controller-manager with args --profiling=false"

I0828 04:53:31.475325       7 controllermanager.go:142] Version: v1.21.0+k3s1

E0828 04:53:31.496522       7 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service

I0828 04:53:31.510586       7 apf_controller.go:299] Running API Priority and Fairness config worker

I0828 04:53:31.511348       7 cache.go:39] Caches are synced for autoregister controller

I0828 04:53:31.512332       7 cache.go:39] Caches are synced for AvailableConditionController controller

I0828 04:53:31.512479       7 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 

I0828 04:53:31.514048       7 cache.go:39] Caches are synced for APIServiceRegistrationController controller

I0828 04:53:31.521566       7 shared_informer.go:247] Caches are synced for crd-autoregister 

I0828 04:53:31.564906       7 shared_informer.go:247] Caches are synced for node_authorizer 

I0828 04:53:31.792139       7 node.go:172] Successfully retrieved node IP: 172.29.0.2

I0828 04:53:31.792313       7 server_others.go:141] Detected node IP 172.29.0.2

I0828 04:53:31.793938       7 server_others.go:207] kube-proxy running in dual-stack mode, IPv4-primary

I0828 04:53:31.794057       7 server_others.go:213] Using iptables Proxier.

I0828 04:53:31.794187       7 server_others.go:220] creating dualStackProxier for iptables.

W0828 04:53:31.794315       7 server_others.go:513] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6

I0828 04:53:31.794805       7 server.go:643] Version: v1.21.0+k3s1

W0828 04:53:31.795204       7 sysinfo.go:203] Nodes topology is not available, providing CPU topology

I0828 04:53:31.795971       7 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072

F0828 04:53:31.796094       7 server.go:489] open /proc/sys/net/netfilter/nf_conntrack_max: permission denied

time="2021-08-28T04:53:38.785959900Z" level=info msg="Starting k3s v1.21.0+k3s1 (2705431d)"

time="2021-08-28T04:53:38.786379000Z" level=info msg="Cluster bootstrap already complete"

time="2021-08-28T04:53:38.796980300Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"

time="2021-08-28T04:53:38.797019500Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."

time="2021-08-28T04:53:38.797265500Z" level=info msg="Database tables and indexes are up to date"

time="2021-08-28T04:53:38.798367600Z" level=info msg="Kine listening on unix://kine.sock"

time="2021-08-28T04:53:38.798617200Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.

I0828 04:53:38.799553       7 server.go:656] external host was not specified, using 172.29.0.2

I0828 04:53:38.799856       7 server.go:195] Version: v1.21.0+k3s1

I0828 04:53:38.803359       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:38.803387       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

I0828 04:53:38.804478       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:38.804506       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

I0828 04:53:38.806795       7 shared_informer.go:240] Waiting for caches to sync for node_authorizer

I0828 04:53:38.835366       7 instance.go:283] Using reconciler: lease

I0828 04:53:38.881082       7 rest.go:130] the default service ipfamily for this cluster is: IPv4

W0828 04:53:39.580008       7 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:39.603256       7 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:39.613146       7 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:39.628555       7 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:39.633475       7 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:39.653620       7 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.

W0828 04:53:39.653691       7 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.

I0828 04:53:39.677621       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:39.677783       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

time="2021-08-28T04:53:39.701100400Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"

time="2021-08-28T04:53:39.701794100Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"

time="2021-08-28T04:53:39.703655500Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"

time="2021-08-28T04:53:39.703708500Z" level=info msg="To join node to cluster: k3s agent -s https://172.29.0.2:6443 -t ${NODE_TOKEN}"

time="2021-08-28T04:53:39.704922700Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"

time="2021-08-28T04:53:39.705008800Z" level=info msg="Run: k3s kubectl"

time="2021-08-28T04:53:39.705921300Z" level=info msg="Waiting for API server to become available"

time="2021-08-28T04:53:39.727019300Z" level=info msg="Cluster-Http-Server 2021/08/28 04:53:39 http: TLS handshake error from 127.0.0.1:55088: remote error: tls: bad certificate"

time="2021-08-28T04:53:39.730930700Z" level=info msg="Cluster-Http-Server 2021/08/28 04:53:39 http: TLS handshake error from 127.0.0.1:55098: remote error: tls: bad certificate"

time="2021-08-28T04:53:39.742857700Z" level=info msg="certificate CN=k3d-podiyumm-server-0 signed by CN=k3s-server-ca@1630126376: notBefore=2021-08-28 04:52:56 +0000 UTC notAfter=2022-08-28 04:53:39 +0000 UTC"

time="2021-08-28T04:53:39.746346000Z" level=info msg="certificate CN=system:node:k3d-podiyumm-server-0,O=system:nodes signed by CN=k3s-client-ca@1630126376: notBefore=2021-08-28 04:52:56 +0000 UTC notAfter=2022-08-28 04:53:39 +0000 UTC"

time="2021-08-28T04:53:39.763704400Z" level=info msg="Module overlay was already loaded"

time="2021-08-28T04:53:39.763754000Z" level=info msg="Module nf_conntrack was already loaded"

time="2021-08-28T04:53:39.764337800Z" level=warning msg="Failed to start br_netfilter module"

time="2021-08-28T04:53:39.764672000Z" level=warning msg="Failed to start iptable_nat module"

time="2021-08-28T04:53:39.772363200Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"

time="2021-08-28T04:53:39.772961100Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"

time="2021-08-28T04:53:40.812989800Z" level=info msg="Containerd is now running"

time="2021-08-28T04:53:40.820123200Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"

time="2021-08-28T04:53:40.822058300Z" level=info msg="Handling backend connection request [k3d-podiyumm-server-0]"

time="2021-08-28T04:53:40.822794800Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"

time="2021-08-28T04:53:40.823020700Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-podiyumm-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"

time="2021-08-28T04:53:40.823588000Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-podiyumm-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"

Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.

Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.

Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.

Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.

I0828 04:53:40.834364       7 server.go:436] "Kubelet version" kubeletVersion="v1.21.0+k3s1"

W0828 04:53:40.850010       7 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.

W0828 04:53:40.850256       7 proxier.go:653] Failed to read file /lib/modules/5.10.47-linuxkit/modules.builtin with error open /lib/modules/5.10.47-linuxkit/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:40.850851       7 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:40.851266       7 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:40.851725       7 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:40.852256       7 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:40.852666       7 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

I0828 04:53:40.863319       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt

E0828 04:53:40.869782       7 node.go:161] Failed to retrieve node info: nodes "k3d-podiyumm-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope

E0828 04:53:42.012985       7 node.go:161] Failed to retrieve node info: nodes "k3d-podiyumm-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope

I0828 04:53:42.170904       7 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt

I0828 04:53:42.170976       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt

I0828 04:53:42.171420       7 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key

I0828 04:53:42.171750       7 secure_serving.go:197] Serving securely on 127.0.0.1:6444

I0828 04:53:42.171872       7 tlsconfig.go:240] Starting DynamicServingCertificateController

I0828 04:53:42.172053       7 available_controller.go:475] Starting AvailableConditionController

I0828 04:53:42.172078       7 cache.go:32] Waiting for caches to sync for AvailableConditionController controller

I0828 04:53:42.172519       7 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller

I0828 04:53:42.172693       7 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller

I0828 04:53:42.172876       7 autoregister_controller.go:141] Starting autoregister controller

I0828 04:53:42.173003       7 cache.go:32] Waiting for caches to sync for autoregister controller

I0828 04:53:42.173196       7 apf_controller.go:294] Starting API Priority and Fairness config controller

I0828 04:53:42.173376       7 customresource_discovery_controller.go:209] Starting DiscoveryController

I0828 04:53:42.174458       7 apiservice_controller.go:97] Starting APIServiceRegistrationController

I0828 04:53:42.174572       7 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller

I0828 04:53:42.178094       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt

I0828 04:53:42.178239       7 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt

I0828 04:53:42.172576       7 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key

I0828 04:53:42.178924       7 crdregistration_controller.go:111] Starting crd-autoregister controller

I0828 04:53:42.179035       7 shared_informer.go:240] Waiting for caches to sync for crd-autoregister

I0828 04:53:42.180077       7 controller.go:86] Starting OpenAPI controller

I0828 04:53:42.180134       7 naming_controller.go:291] Starting NamingConditionController

I0828 04:53:42.180312       7 establishing_controller.go:76] Starting EstablishingController

I0828 04:53:42.180354       7 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController

I0828 04:53:42.180465       7 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController

I0828 04:53:42.180575       7 crd_finalizer.go:266] Starting CRDFinalizer

I0828 04:53:42.172605       7 controller.go:83] Starting OpenAPI AggregationController

time="2021-08-28T04:53:42.218872700Z" level=info msg="Running cloud-controller-manager with args --profiling=false"

I0828 04:53:42.223124       7 controllermanager.go:142] Version: v1.21.0+k3s1

E0828 04:53:42.261897       7 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service

I0828 04:53:42.272288       7 cache.go:39] Caches are synced for AvailableConditionController controller

I0828 04:53:42.272880       7 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 

I0828 04:53:42.273351       7 apf_controller.go:299] Running API Priority and Fairness config worker

I0828 04:53:42.274414       7 cache.go:39] Caches are synced for autoregister controller

I0828 04:53:42.275371       7 cache.go:39] Caches are synced for APIServiceRegistrationController controller

I0828 04:53:42.308475       7 shared_informer.go:247] Caches are synced for crd-autoregister 

I0828 04:53:42.308807       7 shared_informer.go:247] Caches are synced for node_authorizer 

I0828 04:53:43.171544       7 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).

I0828 04:53:43.172709       7 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).

I0828 04:53:43.201698       7 storage_scheduling.go:148] all system priority classes are created successfully or already exist.

time="2021-08-28T04:53:43.880235800Z" level=info msg="Waiting for node k3d-podiyumm-server-0 CIDR not assigned yet"

W0828 04:53:43.971359       7 handler_proxy.go:102] no RequestInfo found in the context

E0828 04:53:43.971864       7 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable

, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]

I0828 04:53:43.972369       7 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.

time="2021-08-28T04:53:44.217087400Z" level=info msg="Kube API server is now running"

time="2021-08-28T04:53:44.217152500Z" level=info msg="k3s is up and running"

Flag --address has been deprecated, see --bind-address instead.

I0828 04:53:44.220732       7 controllermanager.go:175] Version: v1.21.0+k3s1

I0828 04:53:44.221351       7 deprecated_insecure_serving.go:53] Serving insecurely on 127.0.0.1:10252

W0828 04:53:44.242688       7 authorization.go:47] Authorization is disabled

W0828 04:53:44.242822       7 authentication.go:47] Authentication is disabled

I0828 04:53:44.242890       7 deprecated_insecure_serving.go:51] Serving healthz insecurely on 127.0.0.1:10251

time="2021-08-28T04:53:44.252607000Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-9.18.2.tgz"

time="2021-08-28T04:53:44.253107000Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-crd-9.18.2.tgz"

time="2021-08-28T04:53:44.253874500Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"

time="2021-08-28T04:53:44.254460300Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"

time="2021-08-28T04:53:44.254945300Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"

time="2021-08-28T04:53:44.255373600Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"

time="2021-08-28T04:53:44.255912000Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"

time="2021-08-28T04:53:44.256387300Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"

time="2021-08-28T04:53:44.256977600Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"

time="2021-08-28T04:53:44.257348500Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"

time="2021-08-28T04:53:44.257845900Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"

time="2021-08-28T04:53:44.259103600Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"

time="2021-08-28T04:53:44.260567000Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"

I0828 04:53:44.301770       7 node.go:172] Successfully retrieved node IP: 172.29.0.2

I0828 04:53:44.302709       7 server_others.go:141] Detected node IP 172.29.0.2

I0828 04:53:44.309189       7 server_others.go:207] kube-proxy running in dual-stack mode, IPv4-primary

I0828 04:53:44.309458       7 server_others.go:213] Using iptables Proxier.

I0828 04:53:44.309640       7 server_others.go:220] creating dualStackProxier for iptables.

W0828 04:53:44.310024       7 server_others.go:513] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6

I0828 04:53:44.312380       7 server.go:643] Version: v1.21.0+k3s1

W0828 04:53:44.315523       7 sysinfo.go:203] Nodes topology is not available, providing CPU topology

I0828 04:53:44.316639       7 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072

F0828 04:53:44.316967       7 server.go:489] open /proc/sys/net/netfilter/nf_conntrack_max: permission denied

time="2021-08-28T04:53:57.738894100Z" level=info msg="Starting k3s v1.21.0+k3s1 (2705431d)"

time="2021-08-28T04:53:57.739626200Z" level=info msg="Cluster bootstrap already complete"

time="2021-08-28T04:53:57.751819900Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"

time="2021-08-28T04:53:57.751865000Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."

time="2021-08-28T04:53:57.752016500Z" level=info msg="Database tables and indexes are up to date"

time="2021-08-28T04:53:57.756477000Z" level=info msg="Kine listening on unix://kine.sock"

time="2021-08-28T04:53:57.757798100Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"

Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.

I0828 04:53:57.759999       8 server.go:656] external host was not specified, using 172.29.0.2

I0828 04:53:57.760703       8 server.go:195] Version: v1.21.0+k3s1

I0828 04:53:57.764345       8 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:57.764434       8 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

I0828 04:53:57.765311       8 shared_informer.go:240] Waiting for caches to sync for node_authorizer

I0828 04:53:57.767050       8 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:57.767161       8 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

I0828 04:53:57.795551       8 instance.go:283] Using reconciler: lease

I0828 04:53:57.824293       8 rest.go:130] the default service ipfamily for this cluster is: IPv4

W0828 04:53:58.128591       8 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:58.137674       8 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:58.140688       8 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:58.159240       8 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:58.165477       8 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.

W0828 04:53:58.182468       8 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.

W0828 04:53:58.182500       8 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.

I0828 04:53:58.190169       8 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I0828 04:53:58.190200       8 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

time="2021-08-28T04:53:58.209216900Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"

time="2021-08-28T04:53:58.209891800Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"

time="2021-08-28T04:53:58.210985200Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"

time="2021-08-28T04:53:58.211151900Z" level=info msg="To join node to cluster: k3s agent -s https://172.29.0.2:6443 -t ${NODE_TOKEN}"

time="2021-08-28T04:53:58.211862400Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"

time="2021-08-28T04:53:58.212176000Z" level=info msg="Run: k3s kubectl"

time="2021-08-28T04:53:58.213215000Z" level=info msg="Waiting for API server to become available"

time="2021-08-28T04:53:58.238282000Z" level=info msg="Cluster-Http-Server 2021/08/28 04:53:58 http: TLS handshake error from 127.0.0.1:55186: remote error: tls: bad certificate"

time="2021-08-28T04:53:58.243506700Z" level=info msg="Cluster-Http-Server 2021/08/28 04:53:58 http: TLS handshake error from 127.0.0.1:55196: remote error: tls: bad certificate"

time="2021-08-28T04:53:58.255276400Z" level=info msg="certificate CN=k3d-podiyumm-server-0 signed by CN=k3s-server-ca@1630126376: notBefore=2021-08-28 04:52:56 +0000 UTC notAfter=2022-08-28 04:53:58 +0000 UTC"

time="2021-08-28T04:53:58.258786600Z" level=info msg="certificate CN=system:node:k3d-podiyumm-server-0,O=system:nodes signed by CN=k3s-client-ca@1630126376: notBefore=2021-08-28 04:52:56 +0000 UTC notAfter=2022-08-28 04:53:58 +0000 UTC"

time="2021-08-28T04:53:58.270682000Z" level=info msg="Module overlay was already loaded"

time="2021-08-28T04:53:58.270858400Z" level=info msg="Module nf_conntrack was already loaded"

time="2021-08-28T04:53:58.272704800Z" level=warning msg="Failed to start br_netfilter module"

time="2021-08-28T04:53:58.274613800Z" level=warning msg="Failed to start iptable_nat module"

time="2021-08-28T04:53:58.278762500Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"

time="2021-08-28T04:53:58.279429400Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"

time="2021-08-28T04:53:59.302022500Z" level=info msg="Containerd is now running"

time="2021-08-28T04:53:59.318662800Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"

time="2021-08-28T04:53:59.321125300Z" level=info msg="Handling backend connection request [k3d-podiyumm-server-0]"

time="2021-08-28T04:53:59.321920400Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"

time="2021-08-28T04:53:59.322138100Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-podiyumm-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"

time="2021-08-28T04:53:59.322983400Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-podiyumm-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"

Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.

Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.

Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.

Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.

I0828 04:53:59.327428       8 server.go:436] "Kubelet version" kubeletVersion="v1.21.0+k3s1"

W0828 04:53:59.344583       8 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.

W0828 04:53:59.345235       8 proxier.go:653] Failed to read file /lib/modules/5.10.47-linuxkit/modules.builtin with error open /lib/modules/5.10.47-linuxkit/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

I0828 04:53:59.347796       8 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt

W0828 04:53:59.354458       8 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:59.354974       8 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:59.355365       8 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:59.355652       8 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

W0828 04:53:59.355995       8 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

E0828 04:53:59.373698       8 node.go:161] Failed to retrieve node info: nodes "k3d-podiyumm-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope

I0828 04:54:00.107174       8 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt

I0828 04:54:00.107267       8 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt

I0828 04:54:00.107502       8 secure_serving.go:197] Serving securely on 127.0.0.1:6444

I0828 04:54:00.107686       8 autoregister_controller.go:141] Starting autoregister controller

I0828 04:54:00.107707       8 cache.go:32] Waiting for caches to sync for autoregister controller

I0828 04:54:00.107791       8 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key

I0828 04:54:00.107859       8 tlsconfig.go:240] Starting DynamicServingCertificateController

I0828 04:54:00.108561       8 customresource_discovery_controller.go:209] Starting DiscoveryController

I0828 04:54:00.108640       8 apf_controller.go:294] Starting API Priority and Fairness config controller

I0828 04:54:00.109025       8 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key

I0828 04:54:00.109076       8 controller.go:83] Starting OpenAPI AggregationController

I0828 04:54:00.115225       8 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller

I0828 04:54:00.115525       8 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller

I0828 04:54:00.116596       8 apiservice_controller.go:97] Starting APIServiceRegistrationController

I0828 04:54:00.116838       8 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller

I0828 04:54:00.117351       8 available_controller.go:475] Starting AvailableConditionController

I0828 04:54:00.117550       8 cache.go:32] Waiting for caches to sync for AvailableConditionController controller

I0828 04:54:00.118324       8 crdregistration_controller.go:111] Starting crd-autoregister controller

I0828 04:54:00.118417       8 shared_informer.go:240] Waiting for caches to sync for crd-autoregister

I0828 04:54:00.119688       8 controller.go:86] Starting OpenAPI controller

I0828 04:54:00.120006       8 naming_controller.go:291] Starting NamingConditionController

I0828 04:54:00.120527       8 establishing_controller.go:76] Starting EstablishingController

I0828 04:54:00.120760       8 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController

I0828 04:54:00.121262       8 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController

I0828 04:54:00.121621       8 crd_finalizer.go:266] Starting CRDFinalizer

I0828 04:54:00.126083       8 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt

I0828 04:54:00.126740       8 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt

time="2021-08-28T04:54:00.185325200Z" level=info msg="Running cloud-controller-manager with args --profiling=false"

I0828 04:54:00.188041       8 controllermanager.go:142] Version: v1.21.0+k3s1

I0828 04:54:00.208111       8 cache.go:39] Caches are synced for autoregister controller

I0828 04:54:00.209545       8 apf_controller.go:299] Running API Priority and Fairness config worker

I0828 04:54:00.216217       8 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 

I0828 04:54:00.217381       8 cache.go:39] Caches are synced for APIServiceRegistrationController controller

E0828 04:54:00.225482       8 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service

I0828 04:54:00.237977       8 cache.go:39] Caches are synced for AvailableConditionController controller

I0828 04:54:00.240481       8 shared_informer.go:247] Caches are synced for crd-autoregister 

I0828 04:54:00.266840       8 shared_informer.go:247] Caches are synced for node_authorizer 

I0828 04:54:00.572233       8 node.go:172] Successfully retrieved node IP: 172.29.0.2

I0828 04:54:00.572858       8 server_others.go:141] Detected node IP 172.29.0.2

I0828 04:54:00.575624       8 server_others.go:207] kube-proxy running in dual-stack mode, IPv4-primary

I0828 04:54:00.575940       8 server_others.go:213] Using iptables Proxier.

I0828 04:54:00.576456       8 server_others.go:220] creating dualStackProxier for iptables.

W0828 04:54:00.576969       8 server_others.go:513] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6

I0828 04:54:00.578446       8 server.go:643] Version: v1.21.0+k3s1

W0828 04:54:00.579704       8 sysinfo.go:203] Nodes topology is not available, providing CPU topology

I0828 04:54:00.581574       8 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072

F0828 04:54:00.581885       8 server.go:489] open /proc/sys/net/netfilter/nf_conntrack_max: permission denied

[log repeats from here: the container restarts and replays the same startup sequence from `Starting k3s v1.21.0+k3s1` onward, ending each time in the same fatal `open /proc/sys/net/netfilter/nf_conntrack_max: permission denied`]

I0828 04:54:29.124208       7 establishing_controller.go:76] Starting EstablishingController

I0828 04:54:29.124483       7 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController

I0828 04:54:29.124726       7 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController

I0828 04:54:29.125142       7 crd_finalizer.go:266] Starting CRDFinalizer

I0828 04:54:29.125333       7 autoregister_controller.go:141] Starting autoregister controller

I0828 04:54:29.125343       7 cache.go:32] Waiting for caches to sync for autoregister controller

time="2021-08-28T04:54:29.184924000Z" level=info msg="Running cloud-controller-manager with args --profiling=false"

I0828 04:54:29.187396       7 shared_informer.go:247] Caches are synced for node_authorizer 

I0828 04:54:29.189083       7 controllermanager.go:142] Version: v1.21.0+k3s1

E0828 04:54:29.215479       7 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service

I0828 04:54:29.218117       7 apf_controller.go:299] Running API Priority and Fairness config worker

I0828 04:54:29.218417       7 cache.go:39] Caches are synced for AvailableConditionController controller

I0828 04:54:29.219434       7 shared_informer.go:247] Caches are synced for crd-autoregister 

I0828 04:54:29.219571       7 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 

I0828 04:54:29.219931       7 cache.go:39] Caches are synced for APIServiceRegistrationController controller

I0828 04:54:29.236918       7 cache.go:39] Caches are synced for autoregister controller

I0828 04:54:29.481273       7 node.go:172] Successfully retrieved node IP: 172.29.0.2

I0828 04:54:29.481643       7 server_others.go:141] Detected node IP 172.29.0.2

I0828 04:54:29.489927       7 server_others.go:207] kube-proxy running in dual-stack mode, IPv4-primary

I0828 04:54:29.490027       7 server_others.go:213] Using iptables Proxier.

I0828 04:54:29.490082       7 server_others.go:220] creating dualStackProxier for iptables.

W0828 04:54:29.490433       7 server_others.go:513] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6

I0828 04:54:29.491803       7 server.go:643] Version: v1.21.0+k3s1

W0828 04:54:29.492764       7 sysinfo.go:203] Nodes topology is not available, providing CPU topology

I0828 04:54:29.494130       7 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072

F0828 04:54:29.494318       7 server.go:489] open /proc/sys/net/netfilter/nf_conntrack_max: permission denied

Which OS & Architecture

macOS

sw_vers
ProductName:    macOS
ProductVersion: 11.5.2
BuildVersion:   20G95

Which version of k3d

k3d version
k3d version v4.4.8
k3s version latest (default)

Which version of docker

docker version
Client:
 Cloud integration: 1.0.17
 Version:           20.10.8
 API version:       1.41
 Go version:        go1.16.6
 Git commit:        3967b7d
 Built:             Fri Jul 30 19:55:20 2021
 OS/Arch:           darwin/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.8
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       75249d8
  Built:            Fri Jul 30 19:52:10 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b63
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Build with BuildKit (Docker Inc., v0.6.1-docker)
  compose: Docker Compose (Docker Inc., v2.0.0-rc.1)
  scan: Docker Scan (Docker Inc., v0.8.0)

Server:
 Containers: 3
  Running: 2
  Paused: 0
  Stopped: 1
 Images: 66
 Server Version: 20.10.8
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: e25210fe30a0a703442421b0f60afac609f950a3
 runc version: v1.0.1-0-g4144b63
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.10.47-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 5.805GiB
 Name: docker-desktop
 ID: ZBK7:WFRM:6FHA:3KK4:JSWH:UG5Y:7733:SLWW:ASOZ:ABWJ:PTS2:ZHB4
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 65
  Goroutines: 66
  System Time: 2021-08-28T05:03:04.6763703Z
  EventsListeners: 5
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  10.123.189.18:9443
  127.0.0.0/8
 Live Restore Enabled: false
iwilltry42 commented 3 years ago

Hi @typekpb, thanks for opening this issue! Scanning through the logs you posted, these two lines point to the problem:

I0828 04:53:09.477687       7 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072

F0828 04:53:09.477866       7 server.go:489] open /proc/sys/net/netfilter/nf_conntrack_max: permission denied

So this is basically a duplicate of https://github.com/rancher/k3d/issues/612 with a fix described here: https://github.com/rancher/rancher/issues/33300#issuecomment-869273925
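For reference, the workaround described in the linked issues boils down to telling kube-proxy not to write the conntrack sysctls at all, since `/proc/sys/net/netfilter/*` is read-only inside the container on Docker Desktop. A sketch using the k3d v4 flag syntax (matching the reporter's v4.4.8) and the cluster name from this report:

```shell
# Recreate the cluster with kube-proxy's conntrack tuning disabled.
# conntrack-max-per-core=0 makes kube-proxy skip writing
# /proc/sys/net/netfilter/nf_conntrack_max, avoiding the fatal
# "permission denied" on startup.
k3d cluster delete podiyumm
k3d cluster create podiyumm \
  --k3s-server-arg "--kube-proxy-arg=conntrack-max-per-core=0" \
  --k3s-agent-arg "--kube-proxy-arg=conntrack-max-per-core=0"
```

Note that with this setting kube-proxy leaves the kernel's existing `nf_conntrack_max` value untouched, which is normally fine for local development clusters.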

I hope this helps. If not, please feel free to reopen this issue :)