k3s-io / k3s

Lightweight Kubernetes
https://k3s.io
Apache License 2.0

Can't start k3s 0.9.0 on Raspbian - containerd errors due to 127.0.0.1 #828

Closed — mzac closed this issue 5 years ago

mzac commented 5 years ago

Describe the bug: I am trying to set up a k3s cluster with 3 Raspberry Pis, and no matter what I try I keep getting an error from containerd during the k3s setup. I have followed all the instructions:

...but nothing seems to work!

INFO[2019-09-20T23:06:04.051446104-04:00] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
INFO[2019-09-20T23:06:04.068744173-04:00] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused"
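The "connection refused" in the log above means the k3s supervisor is polling the containerd unix socket (`/run/k3s/containerd/containerd.sock`) before containerd has bound it. As a minimal sketch of that readiness probe (a hypothetical helper, not part of k3s), you can check whether a unix-domain socket is accepting connections:

```python
import socket

def unix_socket_ready(path: str, timeout: float = 1.0) -> bool:
    """Return True if a unix-domain socket at `path` accepts connections.

    A refused or failed connect (the situation in the k3s log above)
    returns False instead of raising.
    """
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(path)
        return True
    except OSError:
        return False
    finally:
        s.close()
```

If this keeps returning False, containerd itself is crashing; its own log (by default under `/var/lib/rancher/k3s/agent/containerd/`) usually shows why.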

To Reproduce: Install k3s from scratch; it keeps failing:

curl -sfL https://get.k3s.io | sh -

Screenshots

root@pi3:~# cat /boot/cmdline.txt
dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
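The `cgroup_enable=memory cgroup_memory=1` flags above are the Raspbian boot parameters that turn on the memory cgroup controller, which k3s requires. Whether they took effect after a reboot can be verified from `/proc/cgroups`; a small sketch of that check (a hypothetical helper, assuming the standard four-column `/proc/cgroups` format):

```python
def memory_cgroup_enabled(proc_cgroups: str) -> bool:
    """Given the text of /proc/cgroups, return True if the memory
    controller row is present and enabled.

    /proc/cgroups columns: subsys_name  hierarchy  num_cgroups  enabled
    """
    for line in proc_cgroups.splitlines():
        if line.startswith("#"):  # header row
            continue
        fields = line.split()
        if len(fields) == 4 and fields[0] == "memory":
            return fields[3] == "1"
    return False

# On the Pi itself:
# with open("/proc/cgroups") as f:
#     print(memory_cgroup_enabled(f.read()))
```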

root@pi3:~# cat /etc/hosts
127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters

127.0.0.1       pi3
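Note the last line: the hostname `pi3` resolves to the loopback address, which is what the issue title points at. A commonly suggested workaround (an assumption here, not confirmed as the fix for this report) is to map the hostname to the node's LAN IP instead — the log further down shows k3s detecting `192.168.0.5` as the external host:

```
127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters

192.168.0.5     pi3
```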

root@pi3:~# k3s --version
k3s version v0.9.0 (65d87648)
root@pi3:~# k3s --debug server
DEBU[0000] Asset dir /var/lib/rancher/k3s/data/cfe4c1b5d345ab8303068b5b17b73614db9938dae887922f5cd8e7b2b61c0fd6
DEBU[0000] Running /var/lib/rancher/k3s/data/cfe4c1b5d345ab8303068b5b17b73614db9938dae887922f5cd8e7b2b61c0fd6/bin/k3s-server [k3s --debug server]
INFO[2019-09-20T23:05:39.302529421-04:00] Starting k3s v0.9.0 (65d87648)
DEBU[2019-09-20T23:05:39.313939090-04:00] CREATE /registry/health, size=17, lease=0 => rev=477, err=key exists
INFO[2019-09-20T23:05:39.314355910-04:00] Kine listening on unix://kine.sock
INFO[2019-09-20T23:05:39.317450318-04:00] Fetching bootstrap data from etcd
DEBU[2019-09-20T23:05:39.325403810-04:00] GET /k3s/runtime, rev=0 => rev=477, kv=false, err=<nil>
INFO[2019-09-20T23:05:39.336155461-04:00] Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
I0920 23:05:39.338502    6964 server.go:586] external host was not specified, using 192.168.0.5
I0920 23:05:39.340468    6964 server.go:160] Version: v1.15.4-k3s.1
DEBU[2019-09-20T23:05:39.341112409-04:00] TRIGGERED /registry/health, revision=1, delete=false
DEBU[2019-09-20T23:05:39.341229023-04:00] TRIGGERED /registry/namespaces/default, revision=2, delete=false
DEBU[2019-09-20T23:05:39.341277721-04:00] TRIGGERED /registry/priorityclasses/system-node-critical, revision=3, delete=false
DEBU[2019-09-20T23:05:39.341324283-04:00] TRIGGERED /registry/namespaces/kube-system, revision=4, delete=false
DEBU[2019-09-20T23:05:39.341369387-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1beta1.apiextensions.k8s.io, revision=7, delete=false
DEBU[2019-09-20T23:05:39.341414335-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1.apps, revision=8, delete=false
DEBU[2019-09-20T23:05:39.341456209-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1beta1.apps, revision=9, delete=false
DEBU[2019-09-20T23:05:39.341499074-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1., revision=10, delete=false
DEBU[2019-09-20T23:05:39.341546313-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1beta1.admissionregistration.k8s.io, revision=11, delete=false
DEBU[2019-09-20T23:05:39.341609073-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1beta2.apps, revision=12, delete=false
DEBU[2019-09-20T23:05:39.341671885-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1beta1.authentication.k8s.io, revision=13, delete=false
DEBU[2019-09-20T23:05:39.341730322-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1.authentication.k8s.io, revision=14, delete=false
DEBU[2019-09-20T23:05:39.341795270-04:00] TRIGGERED /registry/priorityclasses/system-cluster-critical, revision=16, delete=false
DEBU[2019-09-20T23:05:39.341841988-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1.autoscaling, revision=18, delete=false
DEBU[2019-09-20T23:05:39.341886936-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1.authorization.k8s.io, revision=19, delete=false
DEBU[2019-09-20T23:05:39.341930477-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1beta1.authorization.k8s.io, revision=20, delete=false
DEBU[2019-09-20T23:05:39.341975164-04:00] TRIGGERED /registry/configmaps/kube-system/extension-apiserver-authentication, revision=21, delete=false
DEBU[2019-09-20T23:05:39.342019852-04:00] TRIGGERED /registry/apiextensions.k8s.io/customresourcedefinitions/listenerconfigs.k3s.cattle.io, revision=22, delete=false
DEBU[2019-09-20T23:05:39.342065841-04:00] TRIGGERED /registry/ranges/servicenodeports, revision=23, delete=false
DEBU[2019-09-20T23:05:39.342118393-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v2beta2.autoscaling, revision=24, delete=false
DEBU[2019-09-20T23:05:39.342164642-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v2beta1.autoscaling, revision=25, delete=false
DEBU[2019-09-20T23:05:39.342207975-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1.batch, revision=27, delete=false
DEBU[2019-09-20T23:05:39.342258183-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1beta1.certificates.k8s.io, revision=28, delete=false
DEBU[2019-09-20T23:05:39.342303339-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1beta1.batch, revision=29, delete=false
DEBU[2019-09-20T23:05:39.342347558-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1.coordination.k8s.io, revision=31, delete=false
DEBU[2019-09-20T23:05:39.342393547-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1beta1.coordination.k8s.io, revision=32, delete=false
DEBU[2019-09-20T23:05:39.342441047-04:00] TRIGGERED /registry/apiextensions.k8s.io/customresourcedefinitions/addons.k3s.cattle.io, revision=33, delete=false
DEBU[2019-09-20T23:05:39.342485734-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1beta1.extensions, revision=34, delete=false
DEBU[2019-09-20T23:05:39.342533546-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1beta1.events.k8s.io, revision=35, delete=false
DEBU[2019-09-20T23:05:39.342577244-04:00] TRIGGERED /registry/namespaces/kube-public, revision=36, delete=false
DEBU[2019-09-20T23:05:39.342621931-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1.networking.k8s.io, revision=38, delete=false
DEBU[2019-09-20T23:05:39.342667608-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1beta1.networking.k8s.io, revision=39, delete=false
DEBU[2019-09-20T23:05:39.342711462-04:00] TRIGGERED /registry/apiextensions.k8s.io/customresourcedefinitions/helmcharts.helm.cattle.io, revision=40, delete=false
DEBU[2019-09-20T23:05:39.342756670-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1beta1.policy, revision=41, delete=false
DEBU[2019-09-20T23:05:39.342824638-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1beta1.node.k8s.io, revision=42, delete=false
DEBU[2019-09-20T23:05:39.342874169-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1beta1.rbac.authorization.k8s.io, revision=43, delete=false
DEBU[2019-09-20T23:05:39.342921565-04:00] TRIGGERED /registry/namespaces/kube-node-lease, revision=44, delete=false
DEBU[2019-09-20T23:05:39.342965158-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1.rbac.authorization.k8s.io, revision=45, delete=false
DEBU[2019-09-20T23:05:39.343009064-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1.scheduling.k8s.io, revision=46, delete=false
DEBU[2019-09-20T23:05:39.343062449-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1.storage.k8s.io, revision=47, delete=false
DEBU[2019-09-20T23:05:39.343107241-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1beta1.scheduling.k8s.io, revision=48, delete=false
DEBU[2019-09-20T23:05:39.343151772-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1beta1.storage.k8s.io, revision=50, delete=false
DEBU[2019-09-20T23:05:39.343196771-04:00] TRIGGERED /registry/clusterroles/cluster-admin, revision=52, delete=false
DEBU[2019-09-20T23:05:39.343240417-04:00] TRIGGERED /registry/clusterroles/system:discovery, revision=53, delete=false
DEBU[2019-09-20T23:05:39.343283542-04:00] TRIGGERED /registry/clusterroles/system:basic-user, revision=54, delete=false
DEBU[2019-09-20T23:05:39.343326875-04:00] TRIGGERED /registry/clusterroles/system:public-info-viewer, revision=55, delete=false
DEBU[2019-09-20T23:05:39.343403853-04:00] TRIGGERED /registry/clusterroles/system:aggregate-to-admin, revision=59, delete=false
DEBU[2019-09-20T23:05:39.343449426-04:00] TRIGGERED /registry/clusterroles/system:aggregate-to-edit, revision=60, delete=false
DEBU[2019-09-20T23:05:39.343493488-04:00] TRIGGERED /registry/clusterroles/system:aggregate-to-view, revision=62, delete=false
DEBU[2019-09-20T23:05:39.343536717-04:00] TRIGGERED /registry/clusterroles/system:heapster, revision=63, delete=false
DEBU[2019-09-20T23:05:39.343579425-04:00] TRIGGERED /registry/clusterroles/system:node, revision=64, delete=false
DEBU[2019-09-20T23:05:39.343622654-04:00] TRIGGERED /registry/clusterroles/system:node-problem-detector, revision=65, delete=false
DEBU[2019-09-20T23:05:39.343665831-04:00] TRIGGERED /registry/clusterroles/system:node-proxier, revision=66, delete=false
DEBU[2019-09-20T23:05:39.343708591-04:00] TRIGGERED /registry/clusterroles/system:kubelet-api-admin, revision=67, delete=false
DEBU[2019-09-20T23:05:39.343751039-04:00] TRIGGERED /registry/clusterroles/system:node-bootstrapper, revision=68, delete=false
DEBU[2019-09-20T23:05:39.343793538-04:00] TRIGGERED /registry/clusterroles/system:auth-delegator, revision=69, delete=false
DEBU[2019-09-20T23:05:39.343836767-04:00] TRIGGERED /registry/clusterroles/system:kube-aggregator, revision=70, delete=false
DEBU[2019-09-20T23:05:39.343883642-04:00] TRIGGERED /registry/clusterroles/system:kube-controller-manager, revision=71, delete=false
DEBU[2019-09-20T23:05:39.343927392-04:00] TRIGGERED /registry/clusterroles/system:kube-scheduler, revision=72, delete=false
DEBU[2019-09-20T23:05:39.343970152-04:00] TRIGGERED /registry/clusterroles/system:kube-dns, revision=73, delete=false
DEBU[2019-09-20T23:05:39.344013225-04:00] TRIGGERED /registry/clusterroles/system:persistent-volume-provisioner, revision=74, delete=false
DEBU[2019-09-20T23:05:39.344057703-04:00] TRIGGERED /registry/clusterroles/system:csi-external-attacher, revision=75, delete=false
DEBU[2019-09-20T23:05:39.344101141-04:00] TRIGGERED /registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient, revision=76, delete=false
DEBU[2019-09-20T23:05:39.344145151-04:00] TRIGGERED /registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient, revision=77, delete=false
DEBU[2019-09-20T23:05:39.344189994-04:00] TRIGGERED /registry/clusterroles/system:volume-scheduler, revision=78, delete=false
DEBU[2019-09-20T23:05:39.344233171-04:00] TRIGGERED /registry/clusterroles/system:csi-external-provisioner, revision=79, delete=false
DEBU[2019-09-20T23:05:39.344277702-04:00] TRIGGERED /registry/clusterroles/system:controller:attachdetach-controller, revision=80, delete=false
DEBU[2019-09-20T23:05:39.344320879-04:00] TRIGGERED /registry/clusterroles/system:controller:clusterrole-aggregation-controller, revision=81, delete=false
DEBU[2019-09-20T23:05:39.344364004-04:00] TRIGGERED /registry/clusterroles/system:controller:cronjob-controller, revision=82, delete=false
DEBU[2019-09-20T23:05:39.344407441-04:00] TRIGGERED /registry/k3s.cattle.io/listenerconfigs/kube-system/tls-config, revision=83, delete=false
DEBU[2019-09-20T23:05:39.344450357-04:00] TRIGGERED /registry/clusterroles/system:controller:daemon-set-controller, revision=84, delete=false
DEBU[2019-09-20T23:05:39.344494003-04:00] TRIGGERED /registry/clusterroles/system:controller:deployment-controller, revision=85, delete=false
DEBU[2019-09-20T23:05:39.344544888-04:00] TRIGGERED /registry/clusterroles/system:controller:disruption-controller, revision=86, delete=false
DEBU[2019-09-20T23:05:39.344588065-04:00] TRIGGERED /registry/clusterroles/system:controller:endpoint-controller, revision=87, delete=false
DEBU[2019-09-20T23:05:39.344632023-04:00] TRIGGERED /registry/clusterroles/system:controller:expand-controller, revision=88, delete=false
DEBU[2019-09-20T23:05:39.344674783-04:00] TRIGGERED /registry/clusterroles/system:controller:generic-garbage-collector, revision=90, delete=false
DEBU[2019-09-20T23:05:39.344717751-04:00] TRIGGERED /registry/clusterroles/system:controller:horizontal-pod-autoscaler, revision=91, delete=false
DEBU[2019-09-20T23:05:39.344761189-04:00] TRIGGERED /registry/clusterroles/system:controller:job-controller, revision=92, delete=false
DEBU[2019-09-20T23:05:39.344804574-04:00] TRIGGERED /registry/clusterroles/system:controller:namespace-controller, revision=93, delete=false
DEBU[2019-09-20T23:05:39.344848011-04:00] TRIGGERED /registry/clusterroles/system:controller:node-controller, revision=94, delete=false
DEBU[2019-09-20T23:05:39.344891657-04:00] TRIGGERED /registry/clusterroles/system:controller:persistent-volume-binder, revision=95, delete=false
DEBU[2019-09-20T23:05:39.344935250-04:00] TRIGGERED /registry/clusterroles/system:coredns, revision=96, delete=false
DEBU[2019-09-20T23:05:39.344978010-04:00] TRIGGERED /registry/clusterroles/system:controller:pod-garbage-collector, revision=97, delete=false
DEBU[2019-09-20T23:05:39.345033114-04:00] TRIGGERED /registry/clusterroles/system:controller:replicaset-controller, revision=98, delete=false
DEBU[2019-09-20T23:05:39.345079155-04:00] TRIGGERED /registry/clusterroles/system:controller:replication-controller, revision=99, delete=false
DEBU[2019-09-20T23:05:39.345123374-04:00] TRIGGERED /registry/clusterroles/system:controller:resourcequota-controller, revision=100, delete=false
DEBU[2019-09-20T23:05:39.345166759-04:00] TRIGGERED /registry/clusterroles/system:controller:route-controller, revision=101, delete=false
DEBU[2019-09-20T23:05:39.345211602-04:00] TRIGGERED /registry/clusterroles/system:controller:service-account-controller, revision=102, delete=false
DEBU[2019-09-20T23:05:39.345255977-04:00] TRIGGERED /registry/clusterroles/system:controller:service-controller, revision=103, delete=false
DEBU[2019-09-20T23:05:39.345327852-04:00] TRIGGERED /registry/clusterrolebindings/system:coredns, revision=104, delete=false
DEBU[2019-09-20T23:05:39.345377174-04:00] TRIGGERED /registry/clusterroles/system:controller:statefulset-controller, revision=105, delete=false
DEBU[2019-09-20T23:05:39.345422695-04:00] TRIGGERED /registry/clusterroles/system:controller:ttl-controller, revision=106, delete=false
DEBU[2019-09-20T23:05:39.345466445-04:00] TRIGGERED /registry/clusterroles/system:controller:certificate-controller, revision=107, delete=false
DEBU[2019-09-20T23:05:39.345508684-04:00] TRIGGERED /registry/clusterroles/system:controller:pvc-protection-controller, revision=108, delete=false
DEBU[2019-09-20T23:05:39.345552694-04:00] TRIGGERED /registry/clusterroles/system:controller:pv-protection-controller, revision=109, delete=false
DEBU[2019-09-20T23:05:39.345599621-04:00] TRIGGERED /registry/clusterrolebindings/cluster-admin, revision=111, delete=false
DEBU[2019-09-20T23:05:39.345642329-04:00] TRIGGERED /registry/clusterrolebindings/system:discovery, revision=112, delete=false
DEBU[2019-09-20T23:05:39.345685506-04:00] TRIGGERED /registry/clusterrolebindings/system:basic-user, revision=113, delete=false
DEBU[2019-09-20T23:05:39.345728474-04:00] TRIGGERED /registry/clusterrolebindings/system:public-info-viewer, revision=114, delete=false
DEBU[2019-09-20T23:05:39.345771964-04:00] TRIGGERED /registry/clusterrolebindings/system:node-proxier, revision=115, delete=false
DEBU[2019-09-20T23:05:39.345815036-04:00] TRIGGERED /registry/clusterrolebindings/system:kube-controller-manager, revision=117, delete=false
DEBU[2019-09-20T23:05:39.345862380-04:00] TRIGGERED /registry/clusterrolebindings/system:kube-dns, revision=118, delete=false
DEBU[2019-09-20T23:05:39.345906077-04:00] TRIGGERED /registry/clusterrolebindings/system:kube-scheduler, revision=119, delete=false
DEBU[2019-09-20T23:05:39.345950192-04:00] TRIGGERED /registry/clusterrolebindings/system:volume-scheduler, revision=120, delete=false
DEBU[2019-09-20T23:05:39.345991285-04:00] TRIGGERED /registry/clusterrolebindings/system:node, revision=121, delete=false
DEBU[2019-09-20T23:05:39.346034462-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:attachdetach-controller, revision=122, delete=false
DEBU[2019-09-20T23:05:39.346078055-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:clusterrole-aggregation-controller, revision=123, delete=false
DEBU[2019-09-20T23:05:39.346122378-04:00] TRIGGERED /registry/services/specs/kube-system/kube-dns, revision=125, delete=false
DEBU[2019-09-20T23:05:39.346165242-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:cronjob-controller, revision=126, delete=false
DEBU[2019-09-20T23:05:39.346209044-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:daemon-set-controller, revision=127, delete=false
DEBU[2019-09-20T23:05:39.346252169-04:00] TRIGGERED /registry/k3s.cattle.io/addons/kube-system/coredns, revision=128, delete=false
DEBU[2019-09-20T23:05:39.346295658-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:deployment-controller, revision=129, delete=false
DEBU[2019-09-20T23:05:39.346339252-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:disruption-controller, revision=130, delete=false
DEBU[2019-09-20T23:05:39.346383731-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:endpoint-controller, revision=131, delete=false
DEBU[2019-09-20T23:05:39.346426491-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:expand-controller, revision=132, delete=false
DEBU[2019-09-20T23:05:39.346469824-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:generic-garbage-collector, revision=133, delete=false
DEBU[2019-09-20T23:05:39.346512792-04:00] TRIGGERED /registry/clusterrolebindings/kube-apiserver-kubelet-admin, revision=134, delete=false
DEBU[2019-09-20T23:05:39.346581906-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:horizontal-pod-autoscaler, revision=135, delete=false
DEBU[2019-09-20T23:05:39.346632375-04:00] TRIGGERED /registry/k3s.cattle.io/addons/kube-system/rolebindings, revision=136, delete=false
DEBU[2019-09-20T23:05:39.346678156-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:job-controller, revision=137, delete=false
DEBU[2019-09-20T23:05:39.346721801-04:00] TRIGGERED compact_rev_key, revision=138, delete=false
DEBU[2019-09-20T23:05:39.346765447-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:namespace-controller, revision=139, delete=false
DEBU[2019-09-20T23:05:39.346810499-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:node-controller, revision=141, delete=false
DEBU[2019-09-20T23:05:39.346855238-04:00] TRIGGERED /registry/k3s.cattle.io/addons/kube-system/traefik, revision=142, delete=false
DEBU[2019-09-20T23:05:39.346942998-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:persistent-volume-binder, revision=144, delete=false
DEBU[2019-09-20T23:05:39.346992893-04:00] TRIGGERED /registry/clusterrolebindings/helm-kube-system-traefik, revision=145, delete=false
DEBU[2019-09-20T23:05:39.347036851-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:pod-garbage-collector, revision=146, delete=false
DEBU[2019-09-20T23:05:39.347079924-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:replicaset-controller, revision=147, delete=false
DEBU[2019-09-20T23:05:39.347123674-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:replication-controller, revision=149, delete=false
DEBU[2019-09-20T23:05:39.347167372-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:resourcequota-controller, revision=150, delete=false
DEBU[2019-09-20T23:05:39.347211017-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:route-controller, revision=152, delete=false
DEBU[2019-09-20T23:05:39.347254142-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:service-account-controller, revision=153, delete=false
DEBU[2019-09-20T23:05:39.347297058-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:service-controller, revision=154, delete=false
DEBU[2019-09-20T23:05:39.347340964-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:statefulset-controller, revision=155, delete=false
DEBU[2019-09-20T23:05:39.347384662-04:00] TRIGGERED /registry/helm.cattle.io/helmcharts/kube-system/traefik, revision=156, delete=false
DEBU[2019-09-20T23:05:39.347429245-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:ttl-controller, revision=157, delete=false
DEBU[2019-09-20T23:05:39.347473411-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:certificate-controller, revision=158, delete=false
DEBU[2019-09-20T23:05:39.347518307-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:pvc-protection-controller, revision=159, delete=false
DEBU[2019-09-20T23:05:39.347562213-04:00] TRIGGERED /registry/clusterrolebindings/system:controller:pv-protection-controller, revision=160, delete=false
DEBU[2019-09-20T23:05:39.347606848-04:00] TRIGGERED /registry/roles/kube-system/extension-apiserver-authentication-reader, revision=161, delete=false
DEBU[2019-09-20T23:05:39.347665598-04:00] TRIGGERED /registry/roles/kube-system/system:controller:bootstrap-signer, revision=162, delete=false
DEBU[2019-09-20T23:05:39.347712941-04:00] TRIGGERED /registry/roles/kube-system/system:controller:cloud-provider, revision=163, delete=false
DEBU[2019-09-20T23:05:39.347756691-04:00] TRIGGERED /registry/roles/kube-system/system:controller:token-cleaner, revision=164, delete=false
DEBU[2019-09-20T23:05:39.347800232-04:00] TRIGGERED /registry/roles/kube-system/system::leader-locking-kube-controller-manager, revision=165, delete=false
DEBU[2019-09-20T23:05:39.347844763-04:00] TRIGGERED /registry/roles/kube-system/system::leader-locking-kube-scheduler, revision=166, delete=false
DEBU[2019-09-20T23:05:39.347888148-04:00] TRIGGERED /registry/roles/kube-public/system:controller:bootstrap-signer, revision=167, delete=false
DEBU[2019-09-20T23:05:39.347937783-04:00] TRIGGERED /registry/rolebindings/kube-system/system::extension-apiserver-authentication-reader, revision=168, delete=false
DEBU[2019-09-20T23:05:39.347982731-04:00] TRIGGERED /registry/rolebindings/kube-system/system::leader-locking-kube-controller-manager, revision=169, delete=false
DEBU[2019-09-20T23:05:39.348043772-04:00] TRIGGERED /registry/rolebindings/kube-system/system::leader-locking-kube-scheduler, revision=170, delete=false
DEBU[2019-09-20T23:05:39.348090178-04:00] TRIGGERED /registry/rolebindings/kube-system/system:controller:bootstrap-signer, revision=171, delete=false
DEBU[2019-09-20T23:05:39.348141220-04:00] TRIGGERED /registry/rolebindings/kube-system/system:controller:cloud-provider, revision=172, delete=false
DEBU[2019-09-20T23:05:39.348184761-04:00] TRIGGERED /registry/rolebindings/kube-system/system:controller:token-cleaner, revision=173, delete=false
DEBU[2019-09-20T23:05:39.348228875-04:00] TRIGGERED /registry/rolebindings/kube-public/system:controller:bootstrap-signer, revision=174, delete=false
DEBU[2019-09-20T23:05:39.348272781-04:00] TRIGGERED /registry/ranges/serviceips, revision=175, delete=false
DEBU[2019-09-20T23:05:39.348316010-04:00] TRIGGERED /registry/services/specs/default/kubernetes, revision=176, delete=false
DEBU[2019-09-20T23:05:39.348359135-04:00] TRIGGERED /registry/services/endpoints/default/kubernetes, revision=178, delete=false
DEBU[2019-09-20T23:05:39.348401530-04:00] TRIGGERED /registry/secrets/kube-system/helm-traefik-token-mtsmf, revision=180, delete=false
DEBU[2019-09-20T23:05:39.348444186-04:00] TRIGGERED /registry/secrets/kube-system/coredns-token-kjbhj, revision=181, delete=false
DEBU[2019-09-20T23:05:39.348509707-04:00] TRIGGERED /registry/serviceaccounts/kube-system/helm-traefik, revision=182, delete=false
DEBU[2019-09-20T23:05:39.348554967-04:00] TRIGGERED /registry/serviceaccounts/kube-system/coredns, revision=183, delete=false
DEBU[2019-09-20T23:05:39.348598352-04:00] TRIGGERED /registry/events/default/pi3.15c653cc3f2844ad, revision=185, delete=false
DEBU[2019-09-20T23:05:39.348642310-04:00] TRIGGERED /registry/secrets/kube-system/replication-controller-token-mn6dn, revision=186, delete=false
DEBU[2019-09-20T23:05:39.348684862-04:00] TRIGGERED /registry/events/default/pi3.15c653cc440a7164, revision=187, delete=false
DEBU[2019-09-20T23:05:39.348728508-04:00] TRIGGERED /registry/configmaps/kube-system/coredns, revision=188, delete=false
DEBU[2019-09-20T23:05:39.348771111-04:00] TRIGGERED /registry/serviceaccounts/kube-system/replication-controller, revision=189, delete=false
DEBU[2019-09-20T23:05:39.348813872-04:00] TRIGGERED /registry/events/default/pi3.15c653cc5158b2f7, revision=192, delete=false
DEBU[2019-09-20T23:05:39.348856892-04:00] TRIGGERED /registry/events/default/pi3.15c653cc9f621b63, revision=193, delete=false
DEBU[2019-09-20T23:05:39.348951371-04:00] TRIGGERED /registry/events/default/pi3.15c653cc515993fa, revision=194, delete=false
DEBU[2019-09-20T23:05:39.348994548-04:00] TRIGGERED /registry/events/default/pi3.15c653cc515aadc1, revision=197, delete=false
DEBU[2019-09-20T23:05:39.349037672-04:00] TRIGGERED /registry/events/default/pi3.15c653cc9b980412, revision=198, delete=false
DEBU[2019-09-20T23:05:39.349081682-04:00] TRIGGERED /registry/secrets/kube-system/deployment-controller-token-vlccb, revision=199, delete=false
DEBU[2019-09-20T23:05:39.349125536-04:00] TRIGGERED /registry/serviceaccounts/kube-system/deployment-controller, revision=200, delete=false
DEBU[2019-09-20T23:05:39.349167671-04:00] TRIGGERED /registry/secrets/kube-system/persistent-volume-binder-token-mp6sx, revision=202, delete=false
DEBU[2019-09-20T23:05:39.349211577-04:00] TRIGGERED /registry/serviceaccounts/kube-system/persistent-volume-binder, revision=203, delete=false
DEBU[2019-09-20T23:05:39.349254754-04:00] TRIGGERED /registry/secrets/kube-system/expand-controller-token-gx9fz, revision=205, delete=false
DEBU[2019-09-20T23:05:39.349298035-04:00] TRIGGERED /registry/serviceaccounts/kube-system/expand-controller, revision=206, delete=false
DEBU[2019-09-20T23:05:39.349341889-04:00] TRIGGERED /registry/secrets/kube-system/service-account-controller-token-pqrwn, revision=208, delete=false
DEBU[2019-09-20T23:05:39.349410014-04:00] TRIGGERED /registry/serviceaccounts/kube-system/service-account-controller, revision=209, delete=false
DEBU[2019-09-20T23:05:39.349457982-04:00] TRIGGERED /registry/secrets/kube-system/replicaset-controller-token-2frt2, revision=211, delete=false
DEBU[2019-09-20T23:05:39.349501836-04:00] TRIGGERED /registry/serviceaccounts/kube-system/replicaset-controller, revision=212, delete=false
DEBU[2019-09-20T23:05:39.349545377-04:00] TRIGGERED /registry/secrets/kube-system/certificate-controller-token-9pmld, revision=214, delete=false
DEBU[2019-09-20T23:05:39.349588763-04:00] TRIGGERED /registry/serviceaccounts/kube-system/certificate-controller, revision=215, delete=false
DEBU[2019-09-20T23:05:39.349631471-04:00] TRIGGERED /registry/secrets/kube-system/node-controller-token-d2txj, revision=217, delete=false
DEBU[2019-09-20T23:05:39.349675481-04:00] TRIGGERED /registry/serviceaccounts/kube-system/node-controller, revision=218, delete=false
DEBU[2019-09-20T23:05:39.349719803-04:00] TRIGGERED /registry/secrets/kube-system/pv-protection-controller-token-m4569, revision=220, delete=false
DEBU[2019-09-20T23:05:39.349763397-04:00] TRIGGERED /registry/serviceaccounts/kube-system/pv-protection-controller, revision=221, delete=false
DEBU[2019-09-20T23:05:39.349807095-04:00] TRIGGERED /registry/secrets/kube-system/endpoint-controller-token-k26fr, revision=223, delete=false
DEBU[2019-09-20T23:05:39.349850740-04:00] TRIGGERED /registry/serviceaccounts/kube-system/endpoint-controller, revision=224, delete=false
DEBU[2019-09-20T23:05:39.349902667-04:00] TRIGGERED /registry/secrets/kube-system/disruption-controller-token-nv5sj, revision=226, delete=false
DEBU[2019-09-20T23:05:39.349948760-04:00] TRIGGERED /registry/serviceaccounts/kube-system/disruption-controller, revision=227, delete=false
DEBU[2019-09-20T23:05:39.349997041-04:00] TRIGGERED /registry/secrets/kube-system/pvc-protection-controller-token-pnz6c, revision=230, delete=false
DEBU[2019-09-20T23:05:39.350040374-04:00] TRIGGERED /registry/serviceaccounts/kube-system/pvc-protection-controller, revision=231, delete=false
DEBU[2019-09-20T23:05:39.350084020-04:00] TRIGGERED /registry/events/default/pi3.15c653cef799612b, revision=233, delete=false
DEBU[2019-09-20T23:05:39.350128082-04:00] TRIGGERED /registry/secrets/kube-system/namespace-controller-token-ggwmv, revision=237, delete=false
DEBU[2019-09-20T23:05:39.350171832-04:00] TRIGGERED /registry/serviceaccounts/kube-system/namespace-controller, revision=238, delete=false
DEBU[2019-09-20T23:05:39.350215113-04:00] TRIGGERED /registry/secrets/kube-system/generic-garbage-collector-token-p5gd6, revision=240, delete=false
DEBU[2019-09-20T23:05:39.350258290-04:00] TRIGGERED /registry/serviceaccounts/kube-system/generic-garbage-collector, revision=241, delete=false
DEBU[2019-09-20T23:05:39.350302091-04:00] TRIGGERED /registry/secrets/kube-system/horizontal-pod-autoscaler-token-rb8gl, revision=243, delete=false
DEBU[2019-09-20T23:05:39.350345372-04:00] TRIGGERED /registry/serviceaccounts/kube-system/horizontal-pod-autoscaler, revision=244, delete=false
DEBU[2019-09-20T23:05:39.350389539-04:00] TRIGGERED /registry/secrets/kube-system/statefulset-controller-token-7bcm6, revision=247, delete=false
DEBU[2019-09-20T23:05:39.350443028-04:00] TRIGGERED /registry/serviceaccounts/kube-system/statefulset-controller, revision=248, delete=false
DEBU[2019-09-20T23:05:39.350488236-04:00] TRIGGERED /registry/secrets/kube-system/cronjob-controller-token-t5pcd, revision=250, delete=false
DEBU[2019-09-20T23:05:39.350536205-04:00] TRIGGERED /registry/serviceaccounts/kube-system/cronjob-controller, revision=251, delete=false
DEBU[2019-09-20T23:05:39.350579954-04:00] TRIGGERED /registry/secrets/kube-system/pod-garbage-collector-token-c4dzr, revision=253, delete=false
DEBU[2019-09-20T23:05:39.350623600-04:00] TRIGGERED /registry/serviceaccounts/kube-system/pod-garbage-collector, revision=254, delete=false
DEBU[2019-09-20T23:05:39.350666725-04:00] TRIGGERED /registry/secrets/kube-system/job-controller-token-9hxcl, revision=256, delete=false
DEBU[2019-09-20T23:05:39.350709849-04:00] TRIGGERED /registry/serviceaccounts/kube-system/job-controller, revision=257, delete=false
DEBU[2019-09-20T23:05:39.350753130-04:00] TRIGGERED /registry/secrets/kube-system/ttl-controller-token-ccjnl, revision=259, delete=false
DEBU[2019-09-20T23:05:39.350796411-04:00] TRIGGERED /registry/serviceaccounts/kube-system/ttl-controller, revision=260, delete=false
DEBU[2019-09-20T23:05:39.350849796-04:00] TRIGGERED /registry/secrets/kube-system/attachdetach-controller-token-7j447, revision=262, delete=false
DEBU[2019-09-20T23:05:39.350895265-04:00] TRIGGERED /registry/serviceaccounts/kube-system/attachdetach-controller, revision=263, delete=false
DEBU[2019-09-20T23:05:39.350937504-04:00] TRIGGERED /registry/secrets/kube-system/clusterrole-aggregation-controller-token-6jgk9, revision=265, delete=false
DEBU[2019-09-20T23:05:39.350982191-04:00] TRIGGERED /registry/serviceaccounts/kube-system/clusterrole-aggregation-controller, revision=266, delete=false
DEBU[2019-09-20T23:05:39.351026097-04:00] TRIGGERED /registry/secrets/kube-system/service-controller-token-hndcv, revision=268, delete=false
DEBU[2019-09-20T23:05:39.351101514-04:00] TRIGGERED /registry/serviceaccounts/kube-system/service-controller, revision=269, delete=false
DEBU[2019-09-20T23:05:39.351145992-04:00] TRIGGERED /registry/secrets/kube-system/resourcequota-controller-token-8dtzz, revision=271, delete=false
DEBU[2019-09-20T23:05:39.351189742-04:00] TRIGGERED /registry/serviceaccounts/kube-system/resourcequota-controller, revision=272, delete=false
DEBU[2019-09-20T23:05:39.351232711-04:00] TRIGGERED /registry/secrets/kube-system/daemon-set-controller-token-v4zcr, revision=274, delete=false
DEBU[2019-09-20T23:05:39.351275992-04:00] TRIGGERED /registry/serviceaccounts/kube-system/daemon-set-controller, revision=275, delete=false
DEBU[2019-09-20T23:05:39.351319637-04:00] TRIGGERED /registry/clusterroles/view, revision=283, delete=false
DEBU[2019-09-20T23:05:39.351362033-04:00] TRIGGERED /registry/events/default/pi3.15c653d2c395d230, revision=288, delete=false
DEBU[2019-09-20T23:05:39.351422501-04:00] TRIGGERED /registry/services/endpoints/kube-system/kube-dns, revision=290, delete=false
DEBU[2019-09-20T23:05:39.351470470-04:00] TRIGGERED /registry/events/kube-system/coredns.15c653d2cb5d6e6d, revision=294, delete=false
DEBU[2019-09-20T23:05:39.351514219-04:00] TRIGGERED /registry/clusterroles/edit, revision=295, delete=false
DEBU[2019-09-20T23:05:39.351557188-04:00] TRIGGERED /registry/events/kube-system/helm-install-traefik.15c653d2cd67bb7a, revision=296, delete=false
DEBU[2019-09-20T23:05:39.351600885-04:00] TRIGGERED /registry/jobs/kube-system/helm-install-traefik, revision=298, delete=false
DEBU[2019-09-20T23:05:39.351644479-04:00] TRIGGERED /registry/events/kube-system/coredns-66f496764.15c653d2d368d71f, revision=299, delete=false
DEBU[2019-09-20T23:05:39.351688749-04:00] TRIGGERED /registry/clusterroles/admin, revision=300, delete=false
DEBU[2019-09-20T23:05:39.351731197-04:00] TRIGGERED /registry/events/kube-system/helm-install-traefik-qkh25.15c653d2d9680a48, revision=304, delete=false
DEBU[2019-09-20T23:05:39.351774895-04:00] TRIGGERED /registry/events/kube-system/coredns-66f496764-7p5zb.15c653d2e8ef0e26, revision=305, delete=false
DEBU[2019-09-20T23:05:39.351818020-04:00] TRIGGERED /registry/secrets/kube-node-lease/default-token-v692f, revision=306, delete=false
DEBU[2019-09-20T23:05:39.351860832-04:00] TRIGGERED /registry/replicasets/kube-system/coredns-66f496764, revision=307, delete=false
DEBU[2019-09-20T23:05:39.351903384-04:00] TRIGGERED /registry/secrets/default/default-token-jxj7s, revision=308, delete=false
DEBU[2019-09-20T23:05:39.351946821-04:00] TRIGGERED /registry/pods/kube-system/helm-install-traefik-qkh25, revision=310, delete=false
DEBU[2019-09-20T23:05:39.351993904-04:00] TRIGGERED /registry/secrets/kube-system/default-token-t8scx, revision=311, delete=false
DEBU[2019-09-20T23:05:39.352038174-04:00] TRIGGERED /registry/secrets/kube-public/default-token-9xs6h, revision=312, delete=false
DEBU[2019-09-20T23:05:39.352081195-04:00] TRIGGERED /registry/serviceaccounts/kube-node-lease/default, revision=313, delete=false
DEBU[2019-09-20T23:05:39.352125413-04:00] TRIGGERED /registry/serviceaccounts/default/default, revision=314, delete=false
DEBU[2019-09-20T23:05:39.352168955-04:00] TRIGGERED /registry/deployments/kube-system/coredns, revision=315, delete=false
DEBU[2019-09-20T23:05:39.352211559-04:00] TRIGGERED /registry/serviceaccounts/kube-public/default, revision=316, delete=false
DEBU[2019-09-20T23:05:39.352254215-04:00] TRIGGERED /registry/serviceaccounts/kube-system/default, revision=317, delete=false
DEBU[2019-09-20T23:05:39.352298173-04:00] TRIGGERED /registry/pods/kube-system/coredns-66f496764-7p5zb, revision=318, delete=false
DEBU[2019-09-20T23:05:39.352341297-04:00] TRIGGERED /registry/events/kube-system/coredns-66f496764-7p5zb.15c653d50bf0c3be, revision=321, delete=false
DEBU[2019-09-20T23:05:39.352385985-04:00] TRIGGERED /registry/events/kube-system/helm-install-traefik-qkh25.15c653d51944e9b2, revision=322, delete=false
DEBU[2019-09-20T23:05:39.352430203-04:00] TRIGGERED /registry/events/default/pi3.15c653df9a6393f8, revision=332, delete=false
DEBU[2019-09-20T23:05:39.352474265-04:00] TRIGGERED /registry/events/default/pi3.15c653df9d66042c, revision=334, delete=false
DEBU[2019-09-20T23:05:39.352520463-04:00] TRIGGERED /registry/events/default/pi3.15c653dfa9e83d71, revision=335, delete=false
DEBU[2019-09-20T23:05:39.352564317-04:00] TRIGGERED /registry/events/default/pi3.15c653dfa9e6b7d6, revision=336, delete=false
DEBU[2019-09-20T23:05:39.352607650-04:00] TRIGGERED /registry/events/default/pi3.15c653dfa9e7d40e, revision=337, delete=false
DEBU[2019-09-20T23:05:39.352654004-04:00] TRIGGERED /registry/events/default/pi3.15c653dffa36c34f, revision=338, delete=false
DEBU[2019-09-20T23:05:39.352717701-04:00] TRIGGERED /registry/events/default/pi3.15c653dff74ffe98, revision=340, delete=false
DEBU[2019-09-20T23:05:39.352766087-04:00] TRIGGERED /registry/events/default/pi3.15c653e01010dc04, revision=341, delete=false
DEBU[2019-09-20T23:05:39.352831972-04:00] TRIGGERED /registry/events/default/pi3.15c653e8072a0cf7, revision=348, delete=false
DEBU[2019-09-20T23:05:39.352878482-04:00] TRIGGERED /registry/events/default/pi3.15c653e80c297d19, revision=349, delete=false
DEBU[2019-09-20T23:05:39.352922388-04:00] TRIGGERED /registry/events/default/pi3.15c653e827e66adf, revision=351, delete=false
DEBU[2019-09-20T23:05:39.352965512-04:00] TRIGGERED /registry/events/default/pi3.15c653e827e7a463, revision=353, delete=false
DEBU[2019-09-20T23:05:39.353011918-04:00] TRIGGERED /registry/events/default/pi3.15c653e8852e2ed3, revision=354, delete=false
DEBU[2019-09-20T23:05:39.353059158-04:00] TRIGGERED /registry/events/default/pi3.15c653e827e86c95, revision=355, delete=false
DEBU[2019-09-20T23:05:39.353102282-04:00] TRIGGERED /registry/events/default/pi3.15c653f0880ae1e1, revision=360, delete=false
DEBU[2019-09-20T23:05:39.353146240-04:00] TRIGGERED /registry/events/default/pi3.15c653f089515c09, revision=362, delete=false
DEBU[2019-09-20T23:05:39.353188636-04:00] TRIGGERED /registry/events/default/pi3.15c653f09e3b367c, revision=363, delete=false
DEBU[2019-09-20T23:05:39.353231500-04:00] TRIGGERED /registry/events/default/pi3.15c653f10b1098a9, revision=364, delete=false
DEBU[2019-09-20T23:05:39.353275510-04:00] TRIGGERED /registry/events/default/pi3.15c653f09e3c1e9e, revision=365, delete=false
DEBU[2019-09-20T23:05:39.353318843-04:00] TRIGGERED /registry/events/default/pi3.15c653f9e08569dd, revision=375, delete=false
DEBU[2019-09-20T23:05:39.353362385-04:00] TRIGGERED /registry/events/default/pi3.15c653f9e3bcddc3, revision=376, delete=false
DEBU[2019-09-20T23:05:39.353405874-04:00] TRIGGERED /registry/events/default/pi3.15c653fa03d2083c, revision=378, delete=false
DEBU[2019-09-20T23:05:39.353448999-04:00] TRIGGERED /registry/events/default/pi3.15c653fa03d30587, revision=379, delete=false
DEBU[2019-09-20T23:05:39.353491915-04:00] TRIGGERED /registry/events/default/pi3.15c653fa5e31225f, revision=380, delete=false
DEBU[2019-09-20T23:05:39.353535092-04:00] TRIGGERED /registry/events/default/pi3.15c653fa03d3c38d, revision=382, delete=false
DEBU[2019-09-20T23:05:39.353578113-04:00] TRIGGERED /registry/events/default/pi3.15c653fa5e64dfbb, revision=383, delete=false
DEBU[2019-09-20T23:05:39.353620821-04:00] TRIGGERED /registry/events/default/pi3.15c6540363c51c98, revision=392, delete=false
DEBU[2019-09-20T23:05:39.353663789-04:00] TRIGGERED /registry/events/default/pi3.15c65403691fde79, revision=394, delete=false
DEBU[2019-09-20T23:05:39.353706549-04:00] TRIGGERED /registry/events/default/pi3.15c65403a21a902e, revision=395, delete=false
DEBU[2019-09-20T23:05:39.353770455-04:00] TRIGGERED /registry/events/default/pi3.15c65403a21bc670, revision=396, delete=false
DEBU[2019-09-20T23:05:39.353818267-04:00] TRIGGERED /registry/events/default/pi3.15c65404007b3e21, revision=397, delete=false
DEBU[2019-09-20T23:05:39.353864621-04:00] TRIGGERED /registry/events/default/pi3.15c65403a21cb823, revision=399, delete=false
DEBU[2019-09-20T23:05:39.353908631-04:00] TRIGGERED /registry/events/default/pi3.15c6540d8c9ab9a7, revision=404, delete=false
DEBU[2019-09-20T23:05:39.353951392-04:00] TRIGGERED /registry/events/default/pi3.15c6540d8f3558e3, revision=406, delete=false
DEBU[2019-09-20T23:05:39.354001704-04:00] TRIGGERED /registry/events/default/pi3.15c6540da06a9230, revision=407, delete=false
DEBU[2019-09-20T23:05:39.354044933-04:00] TRIGGERED /registry/events/default/pi3.15c6540da06b2e3c, revision=408, delete=false
DEBU[2019-09-20T23:05:39.354089880-04:00] TRIGGERED /registry/events/default/pi3.15c6540da06b8eab, revision=409, delete=false
DEBU[2019-09-20T23:05:39.354133005-04:00] TRIGGERED /registry/events/default/pi3.15c6540dd370df0c, revision=410, delete=false
DEBU[2019-09-20T23:05:39.354176963-04:00] TRIGGERED /registry/events/default/pi3.15c6540de7b5d067, revision=412, delete=false
DEBU[2019-09-20T23:05:39.354220192-04:00] TRIGGERED /registry/events/default/pi3.15c65416c1ddee74, revision=421, delete=false
DEBU[2019-09-20T23:05:39.354263733-04:00] TRIGGERED /registry/events/default/pi3.15c65416c9d856cd, revision=422, delete=false
DEBU[2019-09-20T23:05:39.354306962-04:00] TRIGGERED /registry/events/default/pi3.15c65416cec5cdf2, revision=424, delete=false
DEBU[2019-09-20T23:05:39.354350087-04:00] TRIGGERED /registry/events/default/pi3.15c65416cec70127, revision=425, delete=false
DEBU[2019-09-20T23:05:39.354392899-04:00] TRIGGERED /registry/events/default/pi3.15c65416cec77727, revision=427, delete=false
DEBU[2019-09-20T23:05:39.354436493-04:00] TRIGGERED /registry/events/default/pi3.15c654171d26d035, revision=428, delete=false
DEBU[2019-09-20T23:05:39.354502013-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1.helm.cattle.io, revision=431, delete=false
DEBU[2019-09-20T23:05:39.354550294-04:00] TRIGGERED /registry/apiregistration.k8s.io/apiservices/v1.k3s.cattle.io, revision=434, delete=false
DEBU[2019-09-20T23:05:39.354595346-04:00] TRIGGERED /registry/masterleases/192.168.0.5, revision=436, delete=false
DEBU[2019-09-20T23:05:39.354649200-04:00] TRIGGERED /registry/events/default/pi3.15c654281e2579c5, revision=437, delete=false
DEBU[2019-09-20T23:05:39.354694252-04:00] TRIGGERED /registry/events/default/pi3.15c6542821e2a49f, revision=438, delete=false
DEBU[2019-09-20T23:05:39.354785449-04:00] TRIGGERED /registry/leases/kube-node-lease/pi3, revision=440, delete=false
DEBU[2019-09-20T23:05:39.354832324-04:00] TRIGGERED /registry/events/default/pi3.15c654283249bf29, revision=441, delete=false
DEBU[2019-09-20T23:05:39.354878677-04:00] TRIGGERED /registry/events/default/pi3.15c65428324adb95, revision=442, delete=false
DEBU[2019-09-20T23:05:39.354922375-04:00] TRIGGERED /registry/events/default/pi3.15c65428324b8542, revision=443, delete=false
DEBU[2019-09-20T23:05:39.354972010-04:00] TRIGGERED /registry/events/default/pi3.15c654289f83cad5, revision=444, delete=false
DEBU[2019-09-20T23:05:39.355017531-04:00] TRIGGERED /registry/minions/pi3, revision=445, delete=false
DEBU[2019-09-20T23:05:39.355060760-04:00] TRIGGERED /registry/events/default/pi3.15c65428a11e8164, revision=446, delete=false
DEBU[2019-09-20T23:05:39.355104666-04:00] TRIGGERED /registry/events/default/pi3.15c65428a7f0848c, revision=447, delete=false
DEBU[2019-09-20T23:05:39.355148155-04:00] TRIGGERED /registry/masterleases/192.168.0.5, revision=448, delete=false
DEBU[2019-09-20T23:05:39.355190811-04:00] TRIGGERED /registry/events/kube-system/helm-install-traefik-qkh25.15c654295d32a0ac, revision=449, delete=false
DEBU[2019-09-20T23:05:39.355235029-04:00] TRIGGERED /registry/events/kube-system/coredns-66f496764-7p5zb.15c6542966701a30, revision=450, delete=false
DEBU[2019-09-20T23:05:39.355279144-04:00] TRIGGERED /registry/leases/kube-node-lease/pi3, revision=451, delete=false
DEBU[2019-09-20T23:05:39.355322268-04:00] TRIGGERED /registry/masterleases/192.168.0.5, revision=452, delete=true
DEBU[2019-09-20T23:05:39.355364977-04:00] TRIGGERED /registry/masterleases/192.168.0.5, revision=453, delete=false
DEBU[2019-09-20T23:05:39.355408414-04:00] TRIGGERED /registry/masterleases/192.168.0.5, revision=454, delete=true
DEBU[2019-09-20T23:05:39.355451851-04:00] TRIGGERED /registry/masterleases/192.168.0.5, revision=455, delete=false
DEBU[2019-09-20T23:05:39.355494559-04:00] TRIGGERED /registry/events/default/pi3.15c654dbfebbb4e5, revision=456, delete=false
DEBU[2019-09-20T23:05:39.355537684-04:00] TRIGGERED /registry/events/default/pi3.15c654dc028b9839, revision=457, delete=false
DEBU[2019-09-20T23:05:39.355580600-04:00] TRIGGERED /registry/leases/kube-node-lease/pi3, revision=458, delete=false
DEBU[2019-09-20T23:05:39.355623048-04:00] TRIGGERED /registry/events/default/pi3.15c654dc14753363, revision=459, delete=false
DEBU[2019-09-20T23:05:39.355670495-04:00] TRIGGERED /registry/events/default/pi3.15c654dc14763bdf, revision=460, delete=false
DEBU[2019-09-20T23:05:39.355715235-04:00] TRIGGERED /registry/events/default/pi3.15c654dc7927a1a5, revision=461, delete=false
DEBU[2019-09-20T23:05:39.355766380-04:00] TRIGGERED /registry/events/default/pi3.15c654dc1477098e, revision=462, delete=false
DEBU[2019-09-20T23:05:39.355811328-04:00] TRIGGERED /registry/minions/pi3, revision=463, delete=false
DEBU[2019-09-20T23:05:39.355853776-04:00] TRIGGERED /registry/events/default/pi3.15c654dc7e2a94ad, revision=464, delete=false
DEBU[2019-09-20T23:05:39.355897265-04:00] TRIGGERED /registry/events/default/pi3.15c654dc8ef448b0, revision=465, delete=false
DEBU[2019-09-20T23:05:39.355938983-04:00] TRIGGERED /registry/masterleases/192.168.0.5, revision=466, delete=true
DEBU[2019-09-20T23:05:39.355981796-04:00] TRIGGERED /registry/masterleases/192.168.0.5, revision=467, delete=false
DEBU[2019-09-20T23:05:39.356024139-04:00] TRIGGERED /registry/masterleases/192.168.0.5, revision=468, delete=true
DEBU[2019-09-20T23:05:39.356067680-04:00] TRIGGERED /registry/masterleases/192.168.0.5, revision=469, delete=false
DEBU[2019-09-20T23:05:39.356111847-04:00] TRIGGERED /registry/events/default/pi3.15c6551f1f110b52, revision=470, delete=false
DEBU[2019-09-20T23:05:39.356154451-04:00] TRIGGERED /registry/events/default/pi3.15c6551f21a72eed, revision=471, delete=false
DEBU[2019-09-20T23:05:39.356198044-04:00] TRIGGERED /registry/leases/kube-node-lease/pi3, revision=472, delete=false
DEBU[2019-09-20T23:05:39.356242419-04:00] TRIGGERED /registry/events/default/pi3.15c6551f374c0be8, revision=473, delete=false
DEBU[2019-09-20T23:05:39.356285335-04:00] TRIGGERED /registry/events/default/pi3.15c6551f374d7d2b, revision=474, delete=false
DEBU[2019-09-20T23:05:39.356328616-04:00] TRIGGERED /registry/minions/pi3, revision=475, delete=false
DEBU[2019-09-20T23:05:39.356371220-04:00] TRIGGERED /registry/events/default/pi3.15c6551f374e68c3, revision=476, delete=false
DEBU[2019-09-20T23:05:39.356414762-04:00] TRIGGERED /registry/events/default/pi3.15c6551fc411b514, revision=477, delete=false
I0920 23:05:39.385456    6964 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
I0920 23:05:39.385585    6964 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
E0920 23:05:39.394116    6964 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:39.396985    6964 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:39.397362    6964 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:39.397565    6964 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:39.397760    6964 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:39.397920    6964 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:39.398061    6964 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:39.398174    6964 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:39.398478    6964 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:39.398794    6964 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:39.399163    6964 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:39.399353    6964 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
I0920 23:05:39.399490    6964 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
I0920 23:05:39.399555    6964 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
DEBU[2019-09-20T23:05:39.421381231-04:00] COUNT /registry/apiextensions.k8s.io/customresourcedefinitions/ => rev=477, count=3, err=<nil>
DEBU[2019-09-20T23:05:39.421854769-04:00] LIST /registry/apiextensions.k8s.io/customresourcedefinitions/, start=/registry/apiextensions.k8s.io/customresourcedefinitions/, limit=10001, rev=0 => rev=477, kvs=3, err=<nil>
DEBU[2019-09-20T23:05:39.437953575-04:00] WATCH START id=1, key=/registry/apiextensions.k8s.io/customresourcedefinitions/, revision=478
DEBU[2019-09-20T23:05:39.438579613-04:00] WATCH /registry/apiextensions.k8s.io/customresourcedefinitions/, revision=478
DEBU[2019-09-20T23:05:39.440494236-04:00] WATCH LIST key=/registry/apiextensions.k8s.io/customresourcedefinitions/ rev=477 => rev=0 kvs=0
I0920 23:05:39.482191    6964 master.go:233] Using reconciler: lease
DEBU[2019-09-20T23:05:39.503276396-04:00] LIST /registry/podtemplates/, start=/registry/podtemplates/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:39.512035456-04:00] COUNT /registry/events/ => rev=477, count=85, err=<nil>
DEBU[2019-09-20T23:05:39.513594873-04:00] COUNT /registry/podtemplates/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:39.516228711-04:00] WATCH START id=2, key=/registry/podtemplates/, revision=478
DEBU[2019-09-20T23:05:39.517458963-04:00] WATCH /registry/podtemplates/, revision=478
DEBU[2019-09-20T23:05:39.519323379-04:00] WATCH LIST key=/registry/podtemplates/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:39.521512219-04:00] LIST /registry/limitranges/, start=/registry/limitranges/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:39.524115432-04:00] COUNT /registry/limitranges/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:39.529743418-04:00] WATCH START id=3, key=/registry/limitranges/, revision=478
DEBU[2019-09-20T23:05:39.531482261-04:00] WATCH /registry/limitranges/, revision=478
DEBU[2019-09-20T23:05:39.533043449-04:00] LIST /registry/resourcequotas/, start=/registry/resourcequotas/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:39.537094518-04:00] COUNT /registry/resourcequotas/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:39.538230709-04:00] WATCH START id=4, key=/registry/resourcequotas/, revision=478
DEBU[2019-09-20T23:05:39.539262942-04:00] WATCH /registry/resourcequotas/, revision=478
DEBU[2019-09-20T23:05:39.541186523-04:00] WATCH LIST key=/registry/resourcequotas/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:39.539897417-04:00] WATCH LIST key=/registry/limitranges/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:39.545659985-04:00] COUNT /registry/secrets/ => rev=477, count=31, err=<nil>
DEBU[2019-09-20T23:05:39.545669829-04:00] LIST /registry/secrets/, start=/registry/secrets/, limit=10001, rev=0 => rev=477, kvs=31, err=<nil>
DEBU[2019-09-20T23:05:39.551046201-04:00] WATCH START id=5, key=/registry/secrets/, revision=478
DEBU[2019-09-20T23:05:39.551189169-04:00] WATCH /registry/secrets/, revision=478
DEBU[2019-09-20T23:05:39.552622650-04:00] WATCH LIST key=/registry/secrets/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:39.604573054-04:00] LIST /registry/persistentvolumes/, start=/registry/persistentvolumes/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:39.606836478-04:00] WATCH START id=6, key=/registry/persistentvolumes/, revision=478
DEBU[2019-09-20T23:05:39.606992675-04:00] WATCH /registry/persistentvolumes/, revision=478
DEBU[2019-09-20T23:05:39.609984739-04:00] LIST /registry/persistentvolumeclaims/, start=/registry/persistentvolumeclaims/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:39.611104680-04:00] COUNT /registry/persistentvolumeclaims/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:39.611828009-04:00] WATCH LIST key=/registry/persistentvolumes/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:39.613196230-04:00] WATCH START id=7, key=/registry/persistentvolumeclaims/, revision=478
DEBU[2019-09-20T23:05:39.613364093-04:00] WATCH /registry/persistentvolumeclaims/, revision=478
DEBU[2019-09-20T23:05:39.614833511-04:00] WATCH LIST key=/registry/persistentvolumeclaims/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:39.618159219-04:00] COUNT /registry/configmaps/ => rev=477, count=2, err=<nil>
DEBU[2019-09-20T23:05:39.618298750-04:00] LIST /registry/configmaps/, start=/registry/configmaps/, limit=10001, rev=0 => rev=477, kvs=2, err=<nil>
DEBU[2019-09-20T23:05:39.619912229-04:00] COUNT /registry/persistentvolumes/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:39.622673827-04:00] COUNT /registry/namespaces/ => rev=477, count=4, err=<nil>
DEBU[2019-09-20T23:05:39.622762420-04:00] WATCH START id=8, key=/registry/configmaps/, revision=478
DEBU[2019-09-20T23:05:39.622994502-04:00] WATCH /registry/configmaps/, revision=478
DEBU[2019-09-20T23:05:39.625280269-04:00] WATCH LIST key=/registry/configmaps/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:39.627188955-04:00] LIST /registry/namespaces/, start=/registry/namespaces/, limit=10001, rev=0 => rev=477, kvs=4, err=<nil>
DEBU[2019-09-20T23:05:39.627775357-04:00] LIST /registry/services/endpoints/, start=/registry/services/endpoints/, limit=10001, rev=0 => rev=477, kvs=2, err=<nil>
DEBU[2019-09-20T23:05:39.629189827-04:00] COUNT /registry/services/endpoints/ => rev=477, count=2, err=<nil>
DEBU[2019-09-20T23:05:39.632987720-04:00] LIST /registry/minions/, start=/registry/minions/, limit=10001, rev=0 => rev=477, kvs=1, err=<nil>
DEBU[2019-09-20T23:05:39.634509013-04:00] WATCH START id=9, key=/registry/services/endpoints/, revision=478
DEBU[2019-09-20T23:05:39.635235467-04:00] COUNT /registry/minions/ => rev=477, count=1, err=<nil>
DEBU[2019-09-20T23:05:39.635315258-04:00] WATCH /registry/services/endpoints/, revision=478
DEBU[2019-09-20T23:05:39.636832801-04:00] WATCH LIST key=/registry/services/endpoints/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:39.638589248-04:00] WATCH START id=10, key=/registry/namespaces/, revision=478
DEBU[2019-09-20T23:05:39.638763570-04:00] WATCH /registry/namespaces/, revision=478
DEBU[2019-09-20T23:05:39.640419393-04:00] WATCH LIST key=/registry/namespaces/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:39.638617529-04:00] WATCH START id=11, key=/registry/minions/, revision=478
DEBU[2019-09-20T23:05:39.640770849-04:00] WATCH /registry/minions/, revision=478
DEBU[2019-09-20T23:05:39.642580733-04:00] LIST /registry/pods/, start=/registry/pods/, limit=10001, rev=0 => rev=477, kvs=2, err=<nil>
DEBU[2019-09-20T23:05:39.644816761-04:00] COUNT /registry/pods/ => rev=477, count=2, err=<nil>
DEBU[2019-09-20T23:05:39.645921025-04:00] WATCH START id=12, key=/registry/pods/, revision=478
DEBU[2019-09-20T23:05:39.646631958-04:00] WATCH /registry/pods/, revision=478
DEBU[2019-09-20T23:05:39.648158199-04:00] WATCH LIST key=/registry/minions/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:39.649800324-04:00] WATCH LIST key=/registry/pods/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:39.650032353-04:00] COUNT /registry/serviceaccounts/ => rev=477, count=31, err=<nil>
DEBU[2019-09-20T23:05:39.653592540-04:00] COUNT /registry/services/specs/ => rev=477, count=2, err=<nil>
DEBU[2019-09-20T23:05:39.655531642-04:00] LIST /registry/services/specs/, start=/registry/services/specs/, limit=10001, rev=0 => rev=477, kvs=2, err=<nil>
DEBU[2019-09-20T23:05:39.659137140-04:00] WATCH START id=13, key=/registry/services/specs/, revision=478
DEBU[2019-09-20T23:05:39.659323858-04:00] WATCH /registry/services/specs/, revision=478
DEBU[2019-09-20T23:05:39.660937650-04:00] LIST /registry/serviceaccounts/, start=/registry/serviceaccounts/, limit=10001, rev=0 => rev=477, kvs=31, err=<nil>
DEBU[2019-09-20T23:05:39.661089055-04:00] WATCH LIST key=/registry/services/specs/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:39.663167063-04:00] LIST /registry/controllers/, start=/registry/controllers/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:39.664456326-04:00] COUNT /registry/controllers/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:39.665657256-04:00] WATCH START id=14, key=/registry/controllers/, revision=478
DEBU[2019-09-20T23:05:39.665808765-04:00] WATCH /registry/controllers/, revision=478
DEBU[2019-09-20T23:05:39.667778232-04:00] WATCH LIST key=/registry/controllers/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:39.668834840-04:00] WATCH START id=15, key=/registry/serviceaccounts/, revision=478
DEBU[2019-09-20T23:05:39.669200619-04:00] WATCH /registry/serviceaccounts/, revision=478
DEBU[2019-09-20T23:05:39.670685297-04:00] WATCH LIST key=/registry/serviceaccounts/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.060626245-04:00] COUNT /registry/horizontalpodautoscalers/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.067849586-04:00] LIST /registry/horizontalpodautoscalers/, start=/registry/horizontalpodautoscalers/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.070233425-04:00] LIST /registry/horizontalpodautoscalers/, start=/registry/horizontalpodautoscalers/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.072284923-04:00] COUNT /registry/horizontalpodautoscalers/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.073737413-04:00] WATCH START id=16, key=/registry/horizontalpodautoscalers/, revision=478
DEBU[2019-09-20T23:05:40.074002724-04:00] WATCH /registry/horizontalpodautoscalers/, revision=478
DEBU[2019-09-20T23:05:40.068770674-04:00] COUNT /registry/horizontalpodautoscalers/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.076826821-04:00] COUNT /registry/jobs/ => rev=477, count=1, err=<nil>
DEBU[2019-09-20T23:05:40.077678535-04:00] WATCH LIST key=/registry/horizontalpodautoscalers/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.079683470-04:00] LIST /registry/horizontalpodautoscalers/, start=/registry/horizontalpodautoscalers/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.081143878-04:00] WATCH START id=17, key=/registry/horizontalpodautoscalers/, revision=478
DEBU[2019-09-20T23:05:40.082190017-04:00] WATCH /registry/horizontalpodautoscalers/, revision=478
DEBU[2019-09-20T23:05:40.082269652-04:00] WATCH START id=18, key=/registry/horizontalpodautoscalers/, revision=478
DEBU[2019-09-20T23:05:40.083547040-04:00] WATCH /registry/horizontalpodautoscalers/, revision=478
DEBU[2019-09-20T23:05:40.084781719-04:00] WATCH LIST key=/registry/horizontalpodautoscalers/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.084781719-04:00] WATCH LIST key=/registry/horizontalpodautoscalers/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.082319808-04:00] LIST /registry/jobs/, start=/registry/jobs/, limit=10001, rev=0 => rev=477, kvs=1, err=<nil>
DEBU[2019-09-20T23:05:40.088328885-04:00] WATCH START id=19, key=/registry/jobs/, revision=478
DEBU[2019-09-20T23:05:40.089648408-04:00] WATCH /registry/jobs/, revision=478
DEBU[2019-09-20T23:05:40.090754182-04:00] COUNT /registry/cronjobs/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.093422863-04:00] LIST /registry/cronjobs/, start=/registry/cronjobs/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.095291393-04:00] WATCH LIST key=/registry/jobs/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.097999918-04:00] COUNT /registry/certificatesigningrequests/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.096085815-04:00] WATCH START id=20, key=/registry/cronjobs/, revision=478
DEBU[2019-09-20T23:05:40.101561250-04:00] COUNT /registry/leases/ => rev=477, count=1, err=<nil>
DEBU[2019-09-20T23:05:40.102198277-04:00] WATCH /registry/cronjobs/, revision=478
DEBU[2019-09-20T23:05:40.102909731-04:00] LIST /registry/certificatesigningrequests/, start=/registry/certificatesigningrequests/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.106718301-04:00] COUNT /registry/leases/ => rev=477, count=1, err=<nil>
DEBU[2019-09-20T23:05:40.108268343-04:00] LIST /registry/leases/, start=/registry/leases/, limit=10001, rev=0 => rev=477, kvs=1, err=<nil>
DEBU[2019-09-20T23:05:40.108560373-04:00] WATCH START id=21, key=/registry/certificatesigningrequests/, revision=478
DEBU[2019-09-20T23:05:40.108753497-04:00] WATCH /registry/certificatesigningrequests/, revision=478
DEBU[2019-09-20T23:05:40.111815509-04:00] WATCH LIST key=/registry/cronjobs/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.112375974-04:00] LIST /registry/leases/, start=/registry/leases/, limit=10001, rev=0 => rev=477, kvs=1, err=<nil>
DEBU[2019-09-20T23:05:40.115785901-04:00] WATCH START id=22, key=/registry/leases/, revision=478
DEBU[2019-09-20T23:05:40.115986056-04:00] WATCH /registry/leases/, revision=478
DEBU[2019-09-20T23:05:40.116678135-04:00] LIST /registry/controllers/, start=/registry/controllers/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.122482369-04:00] WATCH START id=23, key=/registry/leases/, revision=478
DEBU[2019-09-20T23:05:40.123778924-04:00] WATCH /registry/leases/, revision=478
DEBU[2019-09-20T23:05:40.125665735-04:00] WATCH LIST key=/registry/leases/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.131877363-04:00] COUNT /registry/controllers/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.132489182-04:00] WATCH LIST key=/registry/leases/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.134194379-04:00] WATCH LIST key=/registry/certificatesigningrequests/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.153682383-04:00] COUNT /registry/daemonsets/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.157698399-04:00] LIST /registry/daemonsets/, start=/registry/daemonsets/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.159855677-04:00] WATCH START id=24, key=/registry/controllers/, revision=478
DEBU[2019-09-20T23:05:40.160819057-04:00] WATCH /registry/controllers/, revision=478
DEBU[2019-09-20T23:05:40.162084361-04:00] COUNT /registry/deployments/ => rev=477, count=1, err=<nil>
DEBU[2019-09-20T23:05:40.162661233-04:00] WATCH LIST key=/registry/controllers/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.165534340-04:00] LIST /registry/deployments/, start=/registry/deployments/, limit=10001, rev=0 => rev=477, kvs=1, err=<nil>
DEBU[2019-09-20T23:05:40.166909331-04:00] WATCH START id=25, key=/registry/daemonsets/, revision=478
DEBU[2019-09-20T23:05:40.167247506-04:00] WATCH /registry/daemonsets/, revision=478
DEBU[2019-09-20T23:05:40.169022183-04:00] WATCH LIST key=/registry/daemonsets/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.172385599-04:00] WATCH START id=26, key=/registry/deployments/, revision=478
DEBU[2019-09-20T23:05:40.172568150-04:00] WATCH /registry/deployments/, revision=478
DEBU[2019-09-20T23:05:40.174156786-04:00] WATCH LIST key=/registry/deployments/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.175194331-04:00] COUNT /registry/ingress/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.177114476-04:00] LIST /registry/ingress/, start=/registry/ingress/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.179197796-04:00] WATCH START id=27, key=/registry/ingress/, revision=478
DEBU[2019-09-20T23:05:40.179400972-04:00] WATCH /registry/ingress/, revision=478
DEBU[2019-09-20T23:05:40.181020597-04:00] WATCH LIST key=/registry/ingress/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.184809480-04:00] LIST /registry/replicasets/, start=/registry/replicasets/, limit=10001, rev=0 => rev=477, kvs=1, err=<nil>
DEBU[2019-09-20T23:05:40.186441553-04:00] LIST /registry/podsecuritypolicy/, start=/registry/podsecuritypolicy/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.189962729-04:00] COUNT /registry/replicasets/ => rev=477, count=1, err=<nil>
DEBU[2019-09-20T23:05:40.191593760-04:00] WATCH START id=28, key=/registry/replicasets/, revision=478
DEBU[2019-09-20T23:05:40.192129694-04:00] WATCH /registry/replicasets/, revision=478
DEBU[2019-09-20T23:05:40.193536196-04:00] COUNT /registry/networkpolicies/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.193656404-04:00] WATCH LIST key=/registry/replicasets/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.195698370-04:00] COUNT /registry/podsecuritypolicy/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.198206219-04:00] COUNT /registry/networkpolicies/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.199926989-04:00] COUNT /registry/ingress/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.202061143-04:00] LIST /registry/networkpolicies/, start=/registry/networkpolicies/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.202721868-04:00] WATCH START id=29, key=/registry/podsecuritypolicy/, revision=478
DEBU[2019-09-20T23:05:40.202892804-04:00] WATCH /registry/podsecuritypolicy/, revision=478
DEBU[2019-09-20T23:05:40.205906379-04:00] COUNT /registry/runtimeclasses/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.206751061-04:00] LIST /registry/networkpolicies/, start=/registry/networkpolicies/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.206819811-04:00] WATCH LIST key=/registry/podsecuritypolicy/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.208029074-04:00] LIST /registry/ingress/, start=/registry/ingress/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.210918379-04:00] WATCH START id=30, key=/registry/networkpolicies/, revision=478
DEBU[2019-09-20T23:05:40.211108065-04:00] WATCH /registry/networkpolicies/, revision=478
DEBU[2019-09-20T23:05:40.212013268-04:00] LIST /registry/runtimeclasses/, start=/registry/runtimeclasses/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.212562015-04:00] WATCH LIST key=/registry/networkpolicies/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.210918327-04:00] WATCH START id=31, key=/registry/ingress/, revision=478
DEBU[2019-09-20T23:05:40.212960762-04:00] WATCH /registry/ingress/, revision=478
DEBU[2019-09-20T23:05:40.213666435-04:00] COUNT /registry/poddisruptionbudgets/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.214331483-04:00] WATCH START id=32, key=/registry/networkpolicies/, revision=478
DEBU[2019-09-20T23:05:40.214511534-04:00] WATCH /registry/networkpolicies/, revision=478
DEBU[2019-09-20T23:05:40.214786219-04:00] WATCH LIST key=/registry/ingress/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.214888354-04:00] WATCH START id=33, key=/registry/runtimeclasses/, revision=478
DEBU[2019-09-20T23:05:40.215026270-04:00] WATCH /registry/runtimeclasses/, revision=478
DEBU[2019-09-20T23:05:40.216208867-04:00] WATCH LIST key=/registry/networkpolicies/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.216946206-04:00] WATCH LIST key=/registry/runtimeclasses/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.220997066-04:00] COUNT /registry/podsecuritypolicy/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.221976904-04:00] LIST /registry/podsecuritypolicy/, start=/registry/podsecuritypolicy/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.223441738-04:00] LIST /registry/poddisruptionbudgets/, start=/registry/poddisruptionbudgets/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.224044443-04:00] WATCH START id=34, key=/registry/podsecuritypolicy/, revision=478
DEBU[2019-09-20T23:05:40.224332618-04:00] WATCH /registry/podsecuritypolicy/, revision=478
DEBU[2019-09-20T23:05:40.225229644-04:00] COUNT /registry/roles/ => rev=477, count=7, err=<nil>
DEBU[2019-09-20T23:05:40.225472559-04:00] WATCH START id=35, key=/registry/poddisruptionbudgets/, revision=478
DEBU[2019-09-20T23:05:40.225600891-04:00] WATCH /registry/poddisruptionbudgets/, revision=478
DEBU[2019-09-20T23:05:40.227558223-04:00] WATCH LIST key=/registry/poddisruptionbudgets/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.228627904-04:00] WATCH LIST key=/registry/podsecuritypolicy/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.230267425-04:00] LIST /registry/roles/, start=/registry/roles/, limit=10001, rev=0 => rev=477, kvs=7, err=<nil>
DEBU[2019-09-20T23:05:40.233412353-04:00] COUNT /registry/rolebindings/ => rev=477, count=7, err=<nil>
DEBU[2019-09-20T23:05:40.233747559-04:00] LIST /registry/rolebindings/, start=/registry/rolebindings/, limit=10001, rev=0 => rev=477, kvs=7, err=<nil>
DEBU[2019-09-20T23:05:40.234101567-04:00] WATCH START id=36, key=/registry/roles/, revision=478
DEBU[2019-09-20T23:05:40.234376982-04:00] WATCH /registry/roles/, revision=478
DEBU[2019-09-20T23:05:40.236280408-04:00] WATCH LIST key=/registry/roles/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.237731232-04:00] WATCH START id=37, key=/registry/rolebindings/, revision=478
DEBU[2019-09-20T23:05:40.238072428-04:00] WATCH /registry/rolebindings/, revision=478
DEBU[2019-09-20T23:05:40.239898823-04:00] WATCH LIST key=/registry/rolebindings/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.244238275-04:00] COUNT /registry/clusterrolebindings/ => rev=477, count=39, err=<nil>
DEBU[2019-09-20T23:05:40.245895817-04:00] LIST /registry/clusterroles/, start=/registry/clusterroles/, limit=10001, rev=0 => rev=477, kvs=54, err=<nil>
DEBU[2019-09-20T23:05:40.249120849-04:00] COUNT /registry/roles/ => rev=477, count=7, err=<nil>
DEBU[2019-09-20T23:05:40.250191154-04:00] COUNT /registry/clusterroles/ => rev=477, count=54, err=<nil>
DEBU[2019-09-20T23:05:40.252644941-04:00] LIST /registry/clusterrolebindings/, start=/registry/clusterrolebindings/, limit=10001, rev=0 => rev=477, kvs=39, err=<nil>
DEBU[2019-09-20T23:05:40.255805338-04:00] LIST /registry/roles/, start=/registry/roles/, limit=10001, rev=0 => rev=477, kvs=7, err=<nil>
DEBU[2019-09-20T23:05:40.260851869-04:00] COUNT /registry/rolebindings/ => rev=477, count=7, err=<nil>
DEBU[2019-09-20T23:05:40.261419938-04:00] WATCH START id=38, key=/registry/clusterrolebindings/, revision=478
DEBU[2019-09-20T23:05:40.261809519-04:00] WATCH /registry/clusterrolebindings/, revision=478
DEBU[2019-09-20T23:05:40.263758413-04:00] WATCH LIST key=/registry/clusterrolebindings/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.261658270-04:00] COUNT /registry/clusterroles/ => rev=477, count=54, err=<nil>
DEBU[2019-09-20T23:05:40.264905021-04:00] WATCH START id=39, key=/registry/roles/, revision=478
DEBU[2019-09-20T23:05:40.265199550-04:00] WATCH /registry/roles/, revision=478
DEBU[2019-09-20T23:05:40.266799488-04:00] WATCH LIST key=/registry/roles/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.267299016-04:00] WATCH START id=40, key=/registry/clusterroles/, revision=478
DEBU[2019-09-20T23:05:40.267471776-04:00] WATCH /registry/clusterroles/, revision=478
DEBU[2019-09-20T23:05:40.264923666-04:00] LIST /registry/rolebindings/, start=/registry/rolebindings/, limit=10001, rev=0 => rev=477, kvs=7, err=<nil>
DEBU[2019-09-20T23:05:40.268859892-04:00] WATCH LIST key=/registry/clusterroles/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.270565819-04:00] LIST /registry/clusterroles/, start=/registry/clusterroles/, limit=10001, rev=0 => rev=477, kvs=54, err=<nil>
DEBU[2019-09-20T23:05:40.270898421-04:00] WATCH START id=41, key=/registry/rolebindings/, revision=478
DEBU[2019-09-20T23:05:40.271050347-04:00] WATCH /registry/rolebindings/, revision=478
DEBU[2019-09-20T23:05:40.273417988-04:00] WATCH LIST key=/registry/rolebindings/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.276876300-04:00] LIST /registry/clusterrolebindings/, start=/registry/clusterrolebindings/, limit=10001, rev=0 => rev=477, kvs=39, err=<nil>
DEBU[2019-09-20T23:05:40.278767069-04:00] WATCH START id=42, key=/registry/clusterroles/, revision=478
DEBU[2019-09-20T23:05:40.279110974-04:00] WATCH /registry/clusterroles/, revision=478
DEBU[2019-09-20T23:05:40.281029764-04:00] WATCH LIST key=/registry/clusterroles/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.282972199-04:00] COUNT /registry/clusterrolebindings/ => rev=477, count=39, err=<nil>
DEBU[2019-09-20T23:05:40.283026157-04:00] WATCH START id=43, key=/registry/clusterrolebindings/, revision=478
DEBU[2019-09-20T23:05:40.284126984-04:00] WATCH /registry/clusterrolebindings/, revision=478
DEBU[2019-09-20T23:05:40.285024478-04:00] COUNT /registry/priorityclasses/ => rev=477, count=2, err=<nil>
DEBU[2019-09-20T23:05:40.286782071-04:00] WATCH LIST key=/registry/clusterrolebindings/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.287627587-04:00] LIST /registry/priorityclasses/, start=/registry/priorityclasses/, limit=10001, rev=0 => rev=477, kvs=2, err=<nil>
DEBU[2019-09-20T23:05:40.289551221-04:00] WATCH START id=44, key=/registry/priorityclasses/, revision=478
DEBU[2019-09-20T23:05:40.289701012-04:00] WATCH /registry/priorityclasses/, revision=478
DEBU[2019-09-20T23:05:40.291004649-04:00] WATCH LIST key=/registry/priorityclasses/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.292010112-04:00] LIST /registry/priorityclasses/, start=/registry/priorityclasses/, limit=10001, rev=0 => rev=477, kvs=2, err=<nil>
DEBU[2019-09-20T23:05:40.293960568-04:00] WATCH START id=45, key=/registry/priorityclasses/, revision=478
DEBU[2019-09-20T23:05:40.294572908-04:00] WATCH /registry/priorityclasses/, revision=478
DEBU[2019-09-20T23:05:40.295328528-04:00] COUNT /registry/priorityclasses/ => rev=477, count=2, err=<nil>
DEBU[2019-09-20T23:05:40.297156798-04:00] WATCH LIST key=/registry/priorityclasses/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.301509688-04:00] COUNT /registry/storageclasses/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.301846613-04:00] LIST /registry/storageclasses/, start=/registry/storageclasses/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.303722539-04:00] WATCH START id=46, key=/registry/storageclasses/, revision=478
DEBU[2019-09-20T23:05:40.304304202-04:00] WATCH /registry/storageclasses/, revision=478
DEBU[2019-09-20T23:05:40.304587637-04:00] LIST /registry/volumeattachments/, start=/registry/volumeattachments/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.306184086-04:00] WATCH LIST key=/registry/storageclasses/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.306183929-04:00] COUNT /registry/volumeattachments/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.306986633-04:00] WATCH START id=47, key=/registry/volumeattachments/, revision=478
DEBU[2019-09-20T23:05:40.307125746-04:00] WATCH /registry/volumeattachments/, revision=478
DEBU[2019-09-20T23:05:40.308463446-04:00] WATCH LIST key=/registry/volumeattachments/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.315033718-04:00] COUNT /registry/csinodes/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.318450155-04:00] COUNT /registry/csidrivers/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.320874098-04:00] LIST /registry/csidrivers/, start=/registry/csidrivers/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.323117417-04:00] WATCH START id=48, key=/registry/csidrivers/, revision=478
DEBU[2019-09-20T23:05:40.323301166-04:00] WATCH /registry/csidrivers/, revision=478
DEBU[2019-09-20T23:05:40.324451888-04:00] LIST /registry/csinodes/, start=/registry/csinodes/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.324514492-04:00] COUNT /registry/storageclasses/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.327862961-04:00] COUNT /registry/volumeattachments/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.327863013-04:00] WATCH LIST key=/registry/csidrivers/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.328556498-04:00] WATCH START id=49, key=/registry/csinodes/, revision=478
DEBU[2019-09-20T23:05:40.328707591-04:00] WATCH /registry/csinodes/, revision=478
DEBU[2019-09-20T23:05:40.330658828-04:00] WATCH LIST key=/registry/csinodes/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.332090642-04:00] LIST /registry/volumeattachments/, start=/registry/volumeattachments/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.333646362-04:00] LIST /registry/storageclasses/, start=/registry/storageclasses/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.335347393-04:00] WATCH START id=50, key=/registry/volumeattachments/, revision=478
DEBU[2019-09-20T23:05:40.335803223-04:00] WATCH START id=51, key=/registry/storageclasses/, revision=478
DEBU[2019-09-20T23:05:40.335948170-04:00] WATCH /registry/storageclasses/, revision=478
DEBU[2019-09-20T23:05:40.336114471-04:00] COUNT /registry/deployments/ => rev=477, count=1, err=<nil>
DEBU[2019-09-20T23:05:40.336618270-04:00] WATCH /registry/volumeattachments/, revision=478
DEBU[2019-09-20T23:05:40.337502275-04:00] WATCH LIST key=/registry/storageclasses/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.340599443-04:00] LIST /registry/deployments/, start=/registry/deployments/, limit=10001, rev=0 => rev=477, kvs=1, err=<nil>
DEBU[2019-09-20T23:05:40.342140684-04:00] WATCH LIST key=/registry/volumeattachments/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.343675518-04:00] WATCH START id=52, key=/registry/deployments/, revision=478
DEBU[2019-09-20T23:05:40.343821663-04:00] WATCH /registry/deployments/, revision=478
DEBU[2019-09-20T23:05:40.345753422-04:00] WATCH LIST key=/registry/deployments/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.346010816-04:00] COUNT /registry/statefulsets/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.347300964-04:00] LIST /registry/statefulsets/, start=/registry/statefulsets/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.349860479-04:00] LIST /registry/controllerrevisions/, start=/registry/controllerrevisions/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.350160946-04:00] WATCH START id=53, key=/registry/statefulsets/, revision=478
DEBU[2019-09-20T23:05:40.350285529-04:00] WATCH /registry/statefulsets/, revision=478
DEBU[2019-09-20T23:05:40.351720155-04:00] WATCH LIST key=/registry/statefulsets/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.351879060-04:00] WATCH START id=54, key=/registry/controllerrevisions/, revision=478
DEBU[2019-09-20T23:05:40.351994372-04:00] WATCH /registry/controllerrevisions/, revision=478
DEBU[2019-09-20T23:05:40.352602910-04:00] COUNT /registry/controllerrevisions/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.353728580-04:00] WATCH LIST key=/registry/controllerrevisions/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.355797005-04:00] LIST /registry/deployments/, start=/registry/deployments/, limit=10001, rev=0 => rev=477, kvs=1, err=<nil>
DEBU[2019-09-20T23:05:40.357963554-04:00] COUNT /registry/deployments/ => rev=477, count=1, err=<nil>
DEBU[2019-09-20T23:05:40.359074276-04:00] WATCH START id=55, key=/registry/deployments/, revision=478
DEBU[2019-09-20T23:05:40.359291514-04:00] WATCH /registry/deployments/, revision=478
DEBU[2019-09-20T23:05:40.361357595-04:00] WATCH LIST key=/registry/deployments/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.362187538-04:00] COUNT /registry/statefulsets/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.366222148-04:00] COUNT /registry/daemonsets/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.366803967-04:00] LIST /registry/daemonsets/, start=/registry/daemonsets/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.369160723-04:00] WATCH START id=56, key=/registry/daemonsets/, revision=478
DEBU[2019-09-20T23:05:40.369862438-04:00] WATCH /registry/daemonsets/, revision=478
DEBU[2019-09-20T23:05:40.371506698-04:00] COUNT /registry/replicasets/ => rev=477, count=1, err=<nil>
DEBU[2019-09-20T23:05:40.371760499-04:00] LIST /registry/replicasets/, start=/registry/replicasets/, limit=10001, rev=0 => rev=477, kvs=1, err=<nil>
DEBU[2019-09-20T23:05:40.374035537-04:00] WATCH START id=57, key=/registry/replicasets/, revision=478
DEBU[2019-09-20T23:05:40.374200900-04:00] WATCH /registry/replicasets/, revision=478
DEBU[2019-09-20T23:05:40.375381986-04:00] WATCH LIST key=/registry/replicasets/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.375621152-04:00] COUNT /registry/controllerrevisions/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.378820194-04:00] LIST /registry/controllerrevisions/, start=/registry/controllerrevisions/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.380023103-04:00] WATCH LIST key=/registry/daemonsets/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.369928271-04:00] LIST /registry/statefulsets/, start=/registry/statefulsets/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.379502429-04:00] COUNT /registry/deployments/ => rev=477, count=1, err=<nil>
DEBU[2019-09-20T23:05:40.382934387-04:00] LIST /registry/deployments/, start=/registry/deployments/, limit=10001, rev=0 => rev=477, kvs=1, err=<nil>
DEBU[2019-09-20T23:05:40.383661987-04:00] WATCH START id=59, key=/registry/statefulsets/, revision=478
DEBU[2019-09-20T23:05:40.383858392-04:00] WATCH /registry/statefulsets/, revision=478
DEBU[2019-09-20T23:05:40.385008385-04:00] COUNT /registry/statefulsets/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.386420095-04:00] WATCH START id=60, key=/registry/deployments/, revision=478
DEBU[2019-09-20T23:05:40.386578948-04:00] WATCH /registry/deployments/, revision=478
DEBU[2019-09-20T23:05:40.387776857-04:00] WATCH LIST key=/registry/deployments/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.389566429-04:00] LIST /registry/statefulsets/, start=/registry/statefulsets/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.383662351-04:00] WATCH START id=58, key=/registry/controllerrevisions/, revision=478
DEBU[2019-09-20T23:05:40.390060593-04:00] WATCH /registry/controllerrevisions/, revision=478
DEBU[2019-09-20T23:05:40.391436417-04:00] WATCH LIST key=/registry/controllerrevisions/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.389572575-04:00] WATCH LIST key=/registry/statefulsets/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.391924071-04:00] WATCH START id=61, key=/registry/statefulsets/, revision=478
DEBU[2019-09-20T23:05:40.392102507-04:00] WATCH /registry/statefulsets/, revision=478
DEBU[2019-09-20T23:05:40.393507029-04:00] COUNT /registry/daemonsets/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.393665310-04:00] WATCH LIST key=/registry/statefulsets/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.399292670-04:00] COUNT /registry/replicasets/ => rev=477, count=1, err=<nil>
DEBU[2019-09-20T23:05:40.399580950-04:00] LIST /registry/replicasets/, start=/registry/replicasets/, limit=10001, rev=0 => rev=477, kvs=1, err=<nil>
DEBU[2019-09-20T23:05:40.401109899-04:00] LIST /registry/daemonsets/, start=/registry/daemonsets/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.403488321-04:00] COUNT /registry/controllerrevisions/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.404067432-04:00] WATCH START id=62, key=/registry/daemonsets/, revision=478
DEBU[2019-09-20T23:05:40.404227223-04:00] WATCH /registry/daemonsets/, revision=478
DEBU[2019-09-20T23:05:40.405422945-04:00] WATCH START id=63, key=/registry/replicasets/, revision=478
DEBU[2019-09-20T23:05:40.405599402-04:00] WATCH /registry/replicasets/, revision=478
DEBU[2019-09-20T23:05:40.406126221-04:00] WATCH LIST key=/registry/daemonsets/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.407034497-04:00] LIST /registry/controllerrevisions/, start=/registry/controllerrevisions/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.409520367-04:00] WATCH START id=64, key=/registry/controllerrevisions/, revision=478
DEBU[2019-09-20T23:05:40.409776615-04:00] WATCH /registry/controllerrevisions/, revision=478
DEBU[2019-09-20T23:05:40.410778171-04:00] LIST /registry/validatingwebhookconfigurations/, start=/registry/validatingwebhookconfigurations/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.411447907-04:00] WATCH LIST key=/registry/controllerrevisions/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.409578075-04:00] WATCH LIST key=/registry/replicasets/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.413237010-04:00] COUNT /registry/validatingwebhookconfigurations/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.416743447-04:00] LIST /registry/mutatingwebhookconfigurations/, start=/registry/mutatingwebhookconfigurations/, limit=10001, rev=0 => rev=477, kvs=0, err=<nil>
DEBU[2019-09-20T23:05:40.417044799-04:00] COUNT /registry/mutatingwebhookconfigurations/ => rev=477, count=0, err=<nil>
DEBU[2019-09-20T23:05:40.419400774-04:00] WATCH START id=65, key=/registry/mutatingwebhookconfigurations/, revision=478
DEBU[2019-09-20T23:05:40.419660876-04:00] WATCH /registry/mutatingwebhookconfigurations/, revision=478
DEBU[2019-09-20T23:05:40.419988218-04:00] WATCH START id=66, key=/registry/validatingwebhookconfigurations/, revision=478
DEBU[2019-09-20T23:05:40.420122071-04:00] WATCH /registry/validatingwebhookconfigurations/, revision=478
DEBU[2019-09-20T23:05:40.421391126-04:00] WATCH LIST key=/registry/mutatingwebhookconfigurations/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:40.422814815-04:00] COUNT /registry/events/ => rev=477, count=85, err=<nil>
DEBU[2019-09-20T23:05:40.423985745-04:00] WATCH LIST key=/registry/validatingwebhookconfigurations/ rev=477 => rev=0 kvs=0
W0920 23:05:41.080886    6964 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
W0920 23:05:41.126481    6964 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0920 23:05:41.152284    6964 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0920 23:05:41.157768    6964 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0920 23:05:41.172827    6964 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
E0920 23:05:46.683220    6964 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:46.683391    6964 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:46.683521    6964 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:46.683635    6964 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:46.683788    6964 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:46.683907    6964 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:46.684013    6964 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:46.684131    6964 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:46.684585    6964 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:46.685157    6964 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:46.685374    6964 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0920 23:05:46.685505    6964 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
I0920 23:05:46.685625    6964 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
I0920 23:05:46.685684    6964 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
DEBU[2019-09-20T23:05:46.703812329-04:00] COUNT /registry/apiregistration.k8s.io/apiservices/ => rev=477, count=32, err=<nil>
DEBU[2019-09-20T23:05:46.706849289-04:00] LIST /registry/apiregistration.k8s.io/apiservices/, start=/registry/apiregistration.k8s.io/apiservices/, limit=10001, rev=0 => rev=477, kvs=32, err=<nil>
DEBU[2019-09-20T23:05:46.712886595-04:00] LIST /registry/apiregistration.k8s.io/apiservices/, start=/registry/apiregistration.k8s.io/apiservices/, limit=10001, rev=0 => rev=477, kvs=32, err=<nil>
DEBU[2019-09-20T23:05:46.722201537-04:00] WATCH START id=67, key=/registry/apiregistration.k8s.io/apiservices/, revision=478
DEBU[2019-09-20T23:05:46.722707471-04:00] WATCH /registry/apiregistration.k8s.io/apiservices/, revision=478
DEBU[2019-09-20T23:05:46.724464856-04:00] WATCH LIST key=/registry/apiregistration.k8s.io/apiservices/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:46.725023759-04:00] WATCH START id=68, key=/registry/apiregistration.k8s.io/apiservices/, revision=478
DEBU[2019-09-20T23:05:46.725232768-04:00] WATCH /registry/apiregistration.k8s.io/apiservices/, revision=478
DEBU[2019-09-20T23:05:46.726738696-04:00] WATCH LIST key=/registry/apiregistration.k8s.io/apiservices/ rev=477 => rev=0 kvs=0
DEBU[2019-09-20T23:05:46.732951053-04:00] COUNT /registry/apiregistration.k8s.io/apiservices/ => rev=477, count=32, err=<nil>
DEBU[2019-09-20T23:05:54.361514575-04:00] DELETE /registry/masterleases/192.168.0.5, rev=448 => rev=477, kv=true, deleted=false, err=<nil>
DEBU[2019-09-20T23:05:54.363687374-04:00] DELETE /registry/masterleases/192.168.0.5, rev=436 => rev=477, kv=true, deleted=false, err=<nil>
DEBU[2019-09-20T23:05:54.364358984-04:00] DELETE /registry/masterleases/192.168.0.5, rev=468 => rev=477, kv=true, deleted=false, err=<nil>
DEBU[2019-09-20T23:05:54.367894014-04:00] DELETE /registry/masterleases/192.168.0.5, rev=455 => rev=477, kv=true, deleted=false, err=<nil>
DEBU[2019-09-20T23:05:54.369684992-04:00] DELETE /registry/masterleases/192.168.0.5, rev=469 => rev=478, kv=true, deleted=true, err=<nil>
DEBU[2019-09-20T23:05:54.371250608-04:00] DELETE /registry/masterleases/192.168.0.5, rev=453 => rev=478, kv=true, deleted=true, err=<nil>
DEBU[2019-09-20T23:05:54.374021996-04:00] DELETE /registry/masterleases/192.168.0.5, rev=454 => rev=478, kv=true, deleted=true, err=<nil>
DEBU[2019-09-20T23:05:54.374736159-04:00] DELETE /registry/masterleases/192.168.0.5, rev=452 => rev=478, kv=true, deleted=true, err=<nil>
DEBU[2019-09-20T23:05:54.377532079-04:00] DELETE /registry/masterleases/192.168.0.5, rev=466 => rev=478, kv=true, deleted=true, err=<nil>
DEBU[2019-09-20T23:05:54.379736388-04:00] DELETE /registry/masterleases/192.168.0.5, rev=467 => rev=478, kv=true, deleted=true, err=<nil>
DEBU[2019-09-20T23:05:54.375271364-04:00] TRIGGERED /registry/masterleases/192.168.0.5, revision=478, delete=true
INFO[2019-09-20T23:05:58.570822071-04:00] Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0
INFO[2019-09-20T23:05:58.573329972-04:00] Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true
I0920 23:05:58.581565    6964 secure_serving.go:116] Serving securely on 127.0.0.1:6444
I0920 23:05:58.583905    6964 controller.go:81] Starting OpenAPI AggregationController
I0920 23:05:58.584372    6964 autoregister_controller.go:140] Starting autoregister controller
I0920 23:05:58.584814    6964 cache.go:32] Waiting for caches to sync for autoregister controller
I0920 23:05:58.585378    6964 crd_finalizer.go:255] Starting CRDFinalizer
I0920 23:05:58.585912    6964 available_controller.go:376] Starting AvailableConditionController
I0920 23:05:58.586266    6964 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0920 23:05:58.591531    6964 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0920 23:05:58.592095    6964 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0920 23:05:58.592558    6964 crdregistration_controller.go:112] Starting crd-autoregister controller
I0920 23:05:58.592935    6964 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
I0920 23:05:58.593307    6964 controller.go:83] Starting OpenAPI controller
I0920 23:05:58.593747    6964 customresource_discovery_controller.go:208] Starting DiscoveryController
I0920 23:05:58.594141    6964 naming_controller.go:288] Starting NamingConditionController
I0920 23:05:58.594547    6964 establishing_controller.go:73] Starting EstablishingController
I0920 23:05:58.594991    6964 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0920 23:05:58.629862    6964 controllermanager.go:160] Version: v1.15.4-k3s.1
I0920 23:05:58.632586    6964 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0920 23:05:58.652436    6964 server.go:142] Version: v1.15.4-k3s.1
I0920 23:05:58.652621    6964 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0920 23:05:58.683138    6964 authorization.go:47] Authorization is disabled
W0920 23:05:58.683229    6964 authentication.go:55] Authentication is disabled
I0920 23:05:58.683326    6964 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
DEBU[2019-09-20T23:05:59.572928309-04:00] GET /registry/namespaces/default, rev=0 => rev=478, kv=true, err=<nil>
I0920 23:05:59.586904    6964 controller.go:107] OpenAPI AggregationController: Processing item
I0920 23:05:59.645932    6964 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0920 23:05:59.646080    6964 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0920 23:05:59.621945    6964 controller_utils.go:1036] Caches are synced for crd-autoregister controller
DEBU[2019-09-20T23:05:59.687378323-04:00] LIST /registry/apiextensions.k8s.io/customresourcedefinitions/, start=/registry/apiextensions.k8s.io/customresourcedefinitions/, limit=0, rev=0 => rev=478, kvs=3, err=<nil>
E0920 23:05:59.694506    6964 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0920 23:05:59.737430    6964 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0920 23:05:59.738426    6964 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0920 23:05:59.739369    6964 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0920 23:05:59.740231    6964 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
DEBU[2019-09-20T23:05:59.749523560-04:00] GET /registry/namespaces/kube-system, rev=0 => rev=478, kv=true, err=<nil>
DEBU[2019-09-20T23:05:59.760521303-04:00] LIST /registry/clusterroles/, start=/registry/clusterroles/, limit=0, rev=0 => rev=478, kvs=54, err=<nil>
E0920 23:05:59.778624    6964 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
DEBU[2019-09-20T23:05:59.781496172-04:00] GET /registry/priorityclasses/system-node-critical, rev=0 => rev=478, kv=true, err=<nil>
E0920 23:05:59.782224    6964 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
DEBU[2019-09-20T23:05:59.792668030-04:00] GET /registry/services/specs/default/kubernetes, rev=0 => rev=478, kv=true, err=<nil>
E0920 23:05:59.793758    6964 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
I0920 23:05:59.803397    6964 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0920 23:05:59.817832    6964 cache.go:39] Caches are synced for autoregister controller
I0920 23:05:59.826175    6964 cache.go:39] Caches are synced for AvailableConditionController controller
DEBU[2019-09-20T23:05:59.883889960-04:00] GET /registry/configmaps/kube-system/extension-apiserver-authentication, rev=0 => rev=478, kv=true, err=<nil>
DEBU[2019-09-20T23:05:59.884939172-04:00] LIST /registry/clusterrolebindings/, start=/registry/clusterrolebindings/, limit=0, rev=0 => rev=478, kvs=39, err=<nil>
DEBU[2019-09-20T23:05:59.888162745-04:00] GET /registry/priorityclasses/system-cluster-critical, rev=0 => rev=478, kv=true, err=<nil>
DEBU[2019-09-20T23:05:59.890464085-04:00] GET /registry/masterleases/192.168.0.5, rev=0 => rev=478, kv=false, err=<nil>
DEBU[2019-09-20T23:05:59.895129629-04:00] GET /registry/health, rev=0 => rev=478, kv=true, err=<nil>
DEBU[2019-09-20T23:05:59.900770896-04:00] LIST /registry/apiextensions.k8s.io/customresourcedefinitions/, start=/registry/apiextensions.k8s.io/customresourcedefinitions/, limit=0, rev=0 => rev=478, kvs=3, err=<nil>
I0920 23:05:59.904106    6964 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
DEBU[2019-09-20T23:05:59.908727409-04:00] GET /registry/clusterroles/system:aggregate-to-admin, rev=0 => rev=478, kv=true, err=<nil>
DEBU[2019-09-20T23:05:59.912992746-04:00] CREATE /registry/masterleases/192.168.0.5, size=66, lease=15 => rev=479, err=<nil>
DEBU[2019-09-20T23:05:59.915249399-04:00] TRIGGERED /registry/masterleases/192.168.0.5, revision=479, delete=false
DEBU[2019-09-20T23:05:59.929325509-04:00] GET /registry/clusterroles/system:aggregate-to-edit, rev=0 => rev=479, kv=true, err=<nil>
DEBU[2019-09-20T23:05:59.933174860-04:00] GET /registry/services/endpoints/default/kubernetes, rev=0 => rev=479, kv=true, err=<nil>
DEBU[2019-09-20T23:05:59.938195610-04:00] GET /registry/clusterroles/system:aggregate-to-view, rev=0 => rev=479, kv=true, err=<nil>
DEBU[2019-09-20T23:05:59.940403044-04:00] LIST /registry/masterleases/, start=/registry/masterleases/, limit=0, rev=0 => rev=479, kvs=1, err=<nil>
DEBU[2019-09-20T23:05:59.944914943-04:00] GET /registry/masterleases/192.168.0.5, rev=0 => rev=479, kv=true, err=<nil>
DEBU[2019-09-20T23:05:59.946661494-04:00] GET /registry/clusterrolebindings/system:discovery, rev=0 => rev=479, kv=true, err=<nil>
DEBU[2019-09-20T23:05:59.950135171-04:00] DELETE /registry/masterleases/192.168.0.5, rev=479 => rev=480, kv=true, deleted=true, err=<nil>
DEBU[2019-09-20T23:05:59.952211303-04:00] TRIGGERED /registry/masterleases/192.168.0.5, revision=480, delete=true
INFO[2019-09-20T23:05:59.955722063-04:00] Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.77.1.tgz
INFO[2019-09-20T23:05:59.957641582-04:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml
DEBU[2019-09-20T23:05:59.957984601-04:00] GET /registry/clusterrolebindings/system:public-info-viewer, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:05:59.958485014-04:00] GET /registry/services/endpoints/default/kubernetes, rev=0 => rev=480, kv=true, err=<nil>
INFO[2019-09-20T23:05:59.960124483-04:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml
INFO[2019-09-20T23:05:59.961547183-04:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml
E0920 23:05:59.962749    6964 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
E0920 23:05:59.962963    6964 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
E0920 23:05:59.963276    6964 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
E0920 23:05:59.963618    6964 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
E0920 23:05:59.963876    6964 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
E0920 23:05:59.964105    6964 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
E0920 23:05:59.964415    6964 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
DEBU[2019-09-20T23:05:59.965900124-04:00] LIST /registry/masterleases/, start=/registry/masterleases/, limit=0, rev=0 => rev=480, kvs=0, err=<nil>
E0920 23:05:59.968338    6964 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
DEBU[2019-09-20T23:05:59.970894624-04:00] GET /registry/clusterroles/cluster-admin, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:05:59.972932632-04:00] GET /registry/ranges/serviceips, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:05:59.978109579-04:00] GET /registry/clusterroles/system:discovery, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:05:59.985111931-04:00] GET /registry/clusterroles/system:basic-user, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:05:59.992115742-04:00] LIST /registry/services/specs/, start=/registry/services/specs/, limit=0, rev=0 => rev=480, kvs=2, err=<nil>
DEBU[2019-09-20T23:05:59.992115742-04:00] GET /registry/clusterroles/system:public-info-viewer, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:05:59.997426542-04:00] GET /registry/ranges/serviceips, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.001494277-04:00] GET /registry/ranges/servicenodeports, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.009473810-04:00] GET /registry/clusterroles/admin, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.011559683-04:00] LIST /registry/services/specs/, start=/registry/services/specs/, limit=0, rev=0 => rev=480, kvs=2, err=<nil>
DEBU[2019-09-20T23:06:00.018174381-04:00] COUNT /registry/k3s.cattle.io/listenerconfigs/ => rev=480, count=1, err=<nil>
DEBU[2019-09-20T23:06:00.024483091-04:00] LIST /registry/k3s.cattle.io/listenerconfigs/, start=/registry/k3s.cattle.io/listenerconfigs/, limit=10001, rev=0 => rev=480, kvs=1, err=<nil>
DEBU[2019-09-20T23:06:00.026162091-04:00] GET /registry/k3s.cattle.io/listenerconfigs/kube-system/tls-config, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.029972745-04:00] GET /registry/ranges/servicenodeports, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.032781112-04:00] GET /registry/clusterroles/edit, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.041759025-04:00] GET /registry/ranges/servicenodeports, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.048295234-04:00] GET /registry/namespaces/kube-system, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.048355859-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.052618332-04:00] WATCH START id=69, key=/registry/k3s.cattle.io/listenerconfigs/, revision=481
DEBU[2019-09-20T23:06:00.053592232-04:00] WATCH /registry/k3s.cattle.io/listenerconfigs/, revision=481
DEBU[2019-09-20T23:06:00.054688059-04:00] GET /registry/ranges/serviceips, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.057689342-04:00] WATCH LIST key=/registry/k3s.cattle.io/listenerconfigs/ rev=480 => rev=0 kvs=0
DEBU[2019-09-20T23:06:00.061667651-04:00] GET /registry/clusterroles/view, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.073983876-04:00] GET /registry/namespaces/kube-public, rev=0 => rev=480, kv=true, err=<nil>
INFO[2019-09-20T23:06:00.075999072-04:00] Listening on :6443
E0920 23:06:00.077306    6964 prometheus.go:152] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
E0920 23:06:00.077581    6964 prometheus.go:164] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
E0920 23:06:00.078068    6964 prometheus.go:176] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
E0920 23:06:00.078923    6964 prometheus.go:188] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
E0920 23:06:00.079284    6964 prometheus.go:203] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
E0920 23:06:00.079669    6964 prometheus.go:216] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
E0920 23:06:00.080222    6964 prometheus.go:228] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
DEBU[2019-09-20T23:06:00.083740325-04:00] LIST /registry/services/specs/, start=/registry/services/specs/, limit=0, rev=0 => rev=480, kvs=2, err=<nil>
DEBU[2019-09-20T23:06:00.086330413-04:00] GET /registry/clusterroles/system:aggregate-to-admin, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.091515902-04:00] LIST /registry/services/specs/, start=/registry/services/specs/, limit=0, rev=0 => rev=480, kvs=2, err=<nil>
DEBU[2019-09-20T23:06:00.103710669-04:00] GET /registry/namespaces/kube-node-lease, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.106463673-04:00] GET /registry/ranges/serviceips, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.119085625-04:00] GET /registry/ranges/servicenodeports, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.126538547-04:00] GET /registry/clusterroles/system:aggregate-to-edit, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.162586968-04:00] GET /registry/clusterroles/system:aggregate-to-view, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.170832281-04:00] LIST /registry/k3s.cattle.io/addons/, start=/registry/k3s.cattle.io/addons/, limit=501, rev=0 => rev=480, kvs=3, err=<nil>
DEBU[2019-09-20T23:06:00.173172944-04:00] COUNT /registry/k3s.cattle.io/addons/ => rev=480, count=3, err=<nil>
DEBU[2019-09-20T23:06:00.178683847-04:00] LIST /registry/k3s.cattle.io/addons/, start=/registry/k3s.cattle.io/addons/, limit=10001, rev=0 => rev=480, kvs=3, err=<nil>
DEBU[2019-09-20T23:06:00.188412588-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.192990059-04:00] WATCH START id=70, key=/registry/k3s.cattle.io/addons/, revision=481
DEBU[2019-09-20T23:06:00.193228339-04:00] WATCH /registry/k3s.cattle.io/addons/, revision=481
DEBU[2019-09-20T23:06:00.195878895-04:00] WATCH LIST key=/registry/k3s.cattle.io/addons/ rev=480 => rev=0 kvs=0
DEBU[2019-09-20T23:06:00.197953414-04:00] GET /registry/clusterroles/system:heapster, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.211502131-04:00] GET /registry/clusterroles/system:node, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.232224866-04:00] GET /registry/clusterroles/system:node-problem-detector, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.244887391-04:00] GET /registry/clusterroles/system:node-proxier, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.256497006-04:00] GET /registry/clusterroles/system:kubelet-api-admin, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.268526723-04:00] GET /registry/clusterroles/system:node-bootstrapper, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.282849185-04:00] GET /registry/clusterroles/system:auth-delegator, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.286439111-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.303120986-04:00] GET /registry/clusterroles/system:kube-aggregator, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.315343565-04:00] GET /registry/clusterroles/system:kube-controller-manager, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.328177079-04:00] GET /registry/clusterroles/system:kube-scheduler, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.344621247-04:00] GET /registry/clusterroles/system:kube-dns, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.355972218-04:00] GET /registry/clusterroles/system:persistent-volume-provisioner, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.365706897-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.369915516-04:00] GET /registry/clusterroles/system:csi-external-attacher, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.382097315-04:00] GET /registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.393087559-04:00] GET /registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.404615716-04:00] GET /registry/clusterroles/system:volume-scheduler, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.416276737-04:00] GET /registry/clusterroles/system:csi-external-provisioner, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.429748892-04:00] GET /registry/clusterroles/system:controller:attachdetach-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.442798759-04:00] GET /registry/clusterroles/system:controller:clusterrole-aggregation-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.454035199-04:00] GET /registry/clusterroles/system:controller:cronjob-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.466926472-04:00] GET /registry/clusterroles/system:controller:daemon-set-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.467009389-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.489983203-04:00] GET /registry/clusterroles/system:controller:deployment-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.506831327-04:00] GET /registry/clusterroles/system:controller:disruption-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.519826038-04:00] GET /registry/clusterroles/system:controller:endpoint-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.531754921-04:00] GET /registry/clusterroles/system:controller:expand-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.544347082-04:00] GET /registry/clusterroles/system:controller:generic-garbage-collector, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.555914510-04:00] GET /registry/clusterroles/system:controller:horizontal-pod-autoscaler, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.566662516-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.569633070-04:00] GET /registry/clusterroles/system:controller:job-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.582810384-04:00] GET /registry/clusterroles/system:controller:namespace-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.594871975-04:00] GET /registry/clusterroles/system:controller:node-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.606896119-04:00] GET /registry/clusterroles/system:controller:persistent-volume-binder, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.621304466-04:00] GET /registry/clusterroles/system:controller:pod-garbage-collector, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.632859967-04:00] GET /registry/clusterroles/system:controller:replicaset-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.645136036-04:00] GET /registry/clusterroles/system:controller:replication-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.675517044-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:00.679040355-04:00] GET /registry/clusterroles/system:controller:resourcequota-controller, rev=0 => rev=480, kv=true, err=<nil>
INFO[2019-09-20T23:06:00.701241727-04:00] Starting k3s.cattle.io/v1, Kind=Addon controller
DEBU[2019-09-20T23:06:00.762720145-04:00] GET /registry/clusterroles/system:controller:route-controller, rev=0 => rev=480, kv=true, err=<nil>
INFO[2019-09-20T23:06:00.812918633-04:00] Node token is available at /var/lib/rancher/k3s/server/node-token
INFO[2019-09-20T23:06:00.813309412-04:00] To join node to cluster: k3s agent -s https://192.168.0.5:6443 -t ${NODE_TOKEN}
INFO[2019-09-20T23:06:00.817612666-04:00] Starting k3s.cattle.io/v1, Kind=ListenerConfig controller
E0920 23:06:00.827524    6964 prometheus.go:152] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
E0920 23:06:00.828814    6964 prometheus.go:164] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
E0920 23:06:00.868936    6964 prometheus.go:176] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
E0920 23:06:00.869674    6964 prometheus.go:188] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
E0920 23:06:00.870202    6964 prometheus.go:203] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
E0920 23:06:00.870707    6964 prometheus.go:216] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
E0920 23:06:00.871468    6964 prometheus.go:228] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
DEBU[2019-09-20T23:06:00.872293262-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
E0920 23:06:00.874046    6964 prometheus.go:152] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
E0920 23:06:00.874461    6964 prometheus.go:164] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
E0920 23:06:00.884444    6964 prometheus.go:176] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
E0920 23:06:00.923480    6964 prometheus.go:188] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
E0920 23:06:00.924043    6964 prometheus.go:203] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
E0920 23:06:00.924439    6964 prometheus.go:216] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
E0920 23:06:00.925052    6964 prometheus.go:228] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
[... the same seven workqueue metric registration errors as above (failed to register depth / adds / latency / work_duration / unfinished_work_seconds / longest_running_processor_microseconds / retries: "... is not a valid metric name") repeat for helm.cattle.io/v1, Kind=HelmChart; /v1, Kind=Service; and /v1, Kind=Pod ...]
DEBU[2019-09-20T23:06:01.101972348-04:00] GET /registry/clusterroles/system:controller:service-account-controller, rev=0 => rev=480, kv=true, err=<nil>
[... and again for /v1, Kind=Endpoints ...]
DEBU[2019-09-20T23:06:01.333246423-04:00] GET /registry/clusterroles/system:controller:service-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:01.540415025-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:01.676872453-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:01.748627161-04:00] LIST /registry/serviceaccounts/, start=/registry/serviceaccounts/, limit=0, rev=0 => rev=480, kvs=31, err=<nil>
DEBU[2019-09-20T23:06:01.829443427-04:00] LIST /registry/helm.cattle.io/helmcharts/, start=/registry/helm.cattle.io/helmcharts/, limit=10001, rev=0 => rev=480, kvs=1, err=<nil>
DEBU[2019-09-20T23:06:01.854662019-04:00] GET /registry/clusterroles/system:controller:statefulset-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:01.855109777-04:00] DesiredSet - No change(2) /v1, Kind=ServiceAccount kube-system/coredns for  kube-system/coredns
DEBU[2019-09-20T23:06:01.874936736-04:00] LIST /registry/helm.cattle.io/helmcharts/, start=/registry/helm.cattle.io/helmcharts/, limit=501, rev=0 => rev=480, kvs=1, err=<nil>
DEBU[2019-09-20T23:06:01.881010969-04:00] COUNT /registry/helm.cattle.io/helmcharts/ => rev=480, count=1, err=<nil>
DEBU[2019-09-20T23:06:01.906162791-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:01.969450573-04:00] WATCH START id=71, key=/registry/helm.cattle.io/helmcharts/, revision=481
DEBU[2019-09-20T23:06:01.969790154-04:00] WATCH /registry/helm.cattle.io/helmcharts/, revision=481
DEBU[2019-09-20T23:06:01.972462116-04:00] WATCH LIST key=/registry/helm.cattle.io/helmcharts/ rev=480 => rev=0 kvs=0
DEBU[2019-09-20T23:06:02.003433017-04:00] GET /registry/clusterroles/system:controller:ttl-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.127492137-04:00] GET /registry/clusterroles/system:controller:certificate-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.215202058-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.220388848-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.239860185-04:00] GET /registry/clusterroles/system:controller:pvc-protection-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.260101621-04:00] LIST /registry/clusterroles/, start=/registry/clusterroles/, limit=0, rev=0 => rev=480, kvs=54, err=<nil>
DEBU[2019-09-20T23:06:02.276707455-04:00] GET /registry/clusterroles/system:controller:pv-protection-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.285893387-04:00] DesiredSet - No change(2) rbac.authorization.k8s.io/v1beta1, Kind=ClusterRole /system:coredns for  kube-system/coredns
DEBU[2019-09-20T23:06:02.296359572-04:00] GET /registry/clusterrolebindings/cluster-admin, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.336171146-04:00] GET /registry/clusterrolebindings/system:discovery, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.348497058-04:00] GET /registry/clusterrolebindings/system:basic-user, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.360617816-04:00] GET /registry/clusterrolebindings/system:public-info-viewer, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.369956299-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.381922631-04:00] GET /registry/clusterrolebindings/system:node-proxier, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.393964430-04:00] GET /registry/clusterrolebindings/system:kube-controller-manager, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.406168573-04:00] GET /registry/clusterrolebindings/system:kube-dns, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.418566725-04:00] GET /registry/clusterrolebindings/system:kube-scheduler, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.431802163-04:00] GET /registry/clusterrolebindings/system:volume-scheduler, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.444356824-04:00] GET /registry/clusterrolebindings/system:node, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.447513419-04:00] LIST /registry/clusterrolebindings/, start=/registry/clusterrolebindings/, limit=0, rev=0 => rev=480, kvs=39, err=<nil>
DEBU[2019-09-20T23:06:02.456506175-04:00] GET /registry/clusterrolebindings/system:controller:attachdetach-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.465159194-04:00] DesiredSet - No change(2) rbac.authorization.k8s.io/v1beta1, Kind=ClusterRoleBinding /system:coredns for  kube-system/coredns
DEBU[2019-09-20T23:06:02.469497708-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.476063032-04:00] GET /registry/clusterrolebindings/system:controller:clusterrole-aggregation-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.494016670-04:00] GET /registry/clusterrolebindings/system:controller:cronjob-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.505818367-04:00] GET /registry/clusterrolebindings/system:controller:daemon-set-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.517645116-04:00] GET /registry/clusterrolebindings/system:controller:deployment-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.529235408-04:00] GET /registry/clusterrolebindings/system:controller:disruption-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.540757315-04:00] GET /registry/clusterrolebindings/system:controller:endpoint-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.553072499-04:00] GET /registry/clusterrolebindings/system:controller:expand-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.573326435-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.584515219-04:00] GET /registry/clusterrolebindings/system:controller:generic-garbage-collector, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.597052849-04:00] GET /registry/clusterrolebindings/system:controller:horizontal-pod-autoscaler, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.609390688-04:00] GET /registry/clusterrolebindings/system:controller:job-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.621769153-04:00] GET /registry/clusterrolebindings/system:controller:namespace-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.633495069-04:00] GET /registry/clusterrolebindings/system:controller:node-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.646243271-04:00] GET /registry/clusterrolebindings/system:controller:persistent-volume-binder, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.646717643-04:00] LIST /registry/configmaps/, start=/registry/configmaps/, limit=0, rev=0 => rev=480, kvs=2, err=<nil>
DEBU[2019-09-20T23:06:02.657140338-04:00] DesiredSet - No change(2) /v1, Kind=ConfigMap kube-system/coredns for  kube-system/coredns
DEBU[2019-09-20T23:06:02.659754540-04:00] GET /registry/clusterrolebindings/system:controller:pod-garbage-collector, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.666022366-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.682859031-04:00] GET /registry/clusterrolebindings/system:controller:replicaset-controller, rev=0 => rev=480, kv=true, err=<nil>
INFO[2019-09-20T23:06:02.683657724-04:00] Starting helm.cattle.io/v1, Kind=HelmChart controller
DEBU[2019-09-20T23:06:02.696664570-04:00] GET /registry/clusterrolebindings/system:controller:replication-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.708310591-04:00] GET /registry/clusterrolebindings/system:controller:resourcequota-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.726365739-04:00] GET /registry/clusterrolebindings/system:controller:route-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.736874215-04:00] DesiredSet - No change(2) /v1, Kind=ServiceAccount kube-system/helm-traefik for helm-controller kube-system/traefik
DEBU[2019-09-20T23:06:02.739514823-04:00] GET /registry/clusterrolebindings/system:controller:service-account-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.750970116-04:00] GET /registry/clusterrolebindings/system:controller:service-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.762692282-04:00] GET /registry/clusterrolebindings/system:controller:statefulset-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.784558031-04:00] GET /registry/clusterrolebindings/system:controller:ttl-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.804911237-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.830061549-04:00] LIST /registry/deployments/, start=/registry/deployments/, limit=0, rev=0 => rev=480, kvs=1, err=<nil>
DEBU[2019-09-20T23:06:02.835667139-04:00] GET /registry/clusterrolebindings/system:controller:certificate-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.853803015-04:00] DesiredSet - No change(2) rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding /helm-kube-system-traefik for helm-controller kube-system/traefik
DEBU[2019-09-20T23:06:02.883566683-04:00] DesiredSet - No change(2) extensions/v1beta1, Kind=Deployment kube-system/coredns for  kube-system/coredns
DEBU[2019-09-20T23:06:02.889333626-04:00] GET /registry/clusterrolebindings/system:controller:pvc-protection-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.895104632-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.910958960-04:00] GET /registry/clusterrolebindings/system:controller:pv-protection-controller, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.916434238-04:00] 2019/09/20 23:06:02 http: TLS handshake error from 127.0.0.1:55258: remote error: tls: bad certificate
DEBU[2019-09-20T23:06:02.924236638-04:00] GET /registry/roles/kube-system/extension-apiserver-authentication-reader, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.946899881-04:00] GET /registry/roles/kube-system/system:controller:bootstrap-signer, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.957889240-04:00] DesiredSet - No change(2) batch/v1, Kind=Job kube-system/helm-install-traefik for helm-controller kube-system/traefik
DEBU[2019-09-20T23:06:02.977422503-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:02.995923169-04:00] GET /registry/roles/kube-system/system:controller:cloud-provider, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.014657218-04:00] GET /registry/roles/kube-system/system:controller:token-cleaner, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.030357901-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.043112301-04:00] GET /registry/roles/kube-system/system::leader-locking-kube-controller-manager, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.054524938-04:00] GET /registry/roles/kube-system/system::leader-locking-kube-scheduler, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.072023422-04:00] GET /registry/roles/kube-public/system:controller:bootstrap-signer, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.080522901-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.092887146-04:00] GET /registry/rolebindings/kube-system/system::extension-apiserver-authentication-reader, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.105601650-04:00] GET /registry/rolebindings/kube-system/system::leader-locking-kube-controller-manager, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.118128238-04:00] GET /registry/rolebindings/kube-system/system::leader-locking-kube-scheduler, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.160311412-04:00] GET /registry/rolebindings/kube-system/system:controller:bootstrap-signer, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.186797184-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.201185949-04:00] GET /registry/rolebindings/kube-system/system:controller:cloud-provider, rev=0 => rev=480, kv=true, err=<nil>
INFO[2019-09-20T23:06:03.214095972-04:00] Starting batch/v1, Kind=Job controller
I0920 23:06:03.217195    6964 controller.go:606] quota admission added evaluator for: helmcharts.helm.cattle.io
DEBU[2019-09-20T23:06:03.226089230-04:00] GET /registry/helm.cattle.io/helmcharts/kube-system/traefik, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.231326854-04:00] LIST /registry/services/specs/, start=/registry/services/specs/, limit=0, rev=0 => rev=480, kvs=2, err=<nil>
DEBU[2019-09-20T23:06:03.234986154-04:00] GET /registry/rolebindings/kube-system/system:controller:token-cleaner, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.248647579-04:00] GET /registry/rolebindings/kube-public/system:controller:bootstrap-signer, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.250403662-04:00] DesiredSet - No change(2) /v1, Kind=ServiceAccount kube-system/helm-traefik for helm-controller kube-system/traefik
DEBU[2019-09-20T23:06:03.254579625-04:00] DesiredSet - No change(2) rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding /helm-kube-system-traefik for helm-controller kube-system/traefik
DEBU[2019-09-20T23:06:03.262797386-04:00] DesiredSet - No change(2) /v1, Kind=Service kube-system/kube-dns for  kube-system/coredns
INFO[2019-09-20T23:06:03.262915980-04:00] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml
INFO[2019-09-20T23:06:03.263681443-04:00] Run: k3s kubectl
INFO[2019-09-20T23:06:03.263815766-04:00] k3s is up and running
DEBU[2019-09-20T23:06:03.270119164-04:00] DesiredSet - No change(2) batch/v1, Kind=Job kube-system/helm-install-traefik for helm-controller kube-system/traefik
DEBU[2019-09-20T23:06:03.289012014-04:00] GET /registry/health, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.303131145-04:00] 2019/09/20 23:06:03 http: TLS handshake error from 127.0.0.1:55286: remote error: tls: bad certificate
DEBU[2019-09-20T23:06:03.311536249-04:00] GET /registry/namespaces/default, rev=0 => rev=480, kv=true, err=<nil>
I0920 23:06:03.334561    6964 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
DEBU[2019-09-20T23:06:03.338719100-04:00] GET /registry/k3s.cattle.io/addons/kube-system/coredns, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.344066149-04:00] GET /registry/services/specs/default/kubernetes, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.347440764-04:00] GET /registry/helm.cattle.io/helmcharts/kube-system/traefik, rev=0 => rev=480, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.355886753-04:00] GET /registry/masterleases/192.168.0.5, rev=0 => rev=480, kv=false, err=<nil>
DEBU[2019-09-20T23:06:03.362396972-04:00] CREATE /registry/masterleases/192.168.0.5, size=66, lease=15 => rev=481, err=<nil>
DEBU[2019-09-20T23:06:03.364446595-04:00] TRIGGERED /registry/masterleases/192.168.0.5, revision=481, delete=false
DEBU[2019-09-20T23:06:03.372883157-04:00] GET /registry/services/endpoints/default/kubernetes, rev=0 => rev=481, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.378254946-04:00] LIST /registry/masterleases/, start=/registry/masterleases/, limit=0, rev=0 => rev=481, kvs=1, err=<nil>
DEBU[2019-09-20T23:06:03.400835066-04:00] LIST /registry/clusterrolebindings/, start=/registry/clusterrolebindings/, limit=0, rev=0 => rev=481, kvs=39, err=<nil>
DEBU[2019-09-20T23:06:03.409179493-04:00] DesiredSet - No change(2) rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding /kube-apiserver-kubelet-admin for  kube-system/rolebindings
DEBU[2019-09-20T23:06:03.444861301-04:00] 2019/09/20 23:06:03 http: TLS handshake error from 127.0.0.1:55292: remote error: tls: bad certificate
DEBU[2019-09-20T23:06:03.447513211-04:00] GET /registry/k3s.cattle.io/addons/kube-system/rolebindings, rev=0 => rev=481, kv=true, err=<nil>
DEBU[2019-09-20T23:06:03.497156339-04:00] LIST /registry/helm.cattle.io/helmcharts/, start=/registry/helm.cattle.io/helmcharts/, limit=0, rev=0 => rev=481, kvs=1, err=<nil>
DEBU[2019-09-20T23:06:03.504582699-04:00] DesiredSet - No change(2) helm.cattle.io/v1, Kind=HelmChart kube-system/traefik for  kube-system/traefik
DEBU[2019-09-20T23:06:03.542032308-04:00] GET /registry/k3s.cattle.io/addons/kube-system/traefik, rev=0 => rev=481, kv=true, err=<nil>
INFO[2019-09-20T23:06:03.917310901-04:00] Starting /v1, Kind=Pod controller
DEBU[2019-09-20T23:06:03.960785837-04:00] GET /registry/health, rev=0 => rev=481, kv=true, err=<nil>
INFO[2019-09-20T23:06:04.025464235-04:00] Starting /v1, Kind=Endpoints controller
INFO[2019-09-20T23:06:04.046710821-04:00] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log
INFO[2019-09-20T23:06:04.051446104-04:00] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
INFO[2019-09-20T23:06:04.068744173-04:00] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused"
INFO[2019-09-20T23:06:04.131103523-04:00] Starting /v1, Kind=Node controller
INFO[2019-09-20T23:06:04.273795548-04:00] Starting /v1, Kind=Service controller
I0920 23:06:04.712228    6964 plugins.go:103] No cloud provider specified.
I0920 23:06:04.741719    6964 controller_utils.go:1029] Waiting for caches to sync for tokens controller
DEBU[2019-09-20T23:06:04.823567217-04:00] GET /registry/serviceaccounts/kube-system/namespace-controller, rev=0 => rev=481, kv=true, err=<nil>
I0920 23:06:04.946895    6964 controller_utils.go:1036] Caches are synced for tokens controller
INFO[2019-09-20T23:06:05.070977442-04:00] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused"
I0920 23:06:05.264574    6964 controllermanager.go:528] Started "namespace"
I0920 23:06:05.264842    6964 namespace_controller.go:186] Starting namespace controller
I0920 23:06:05.266322    6964 controller_utils.go:1029] Waiting for caches to sync for namespace controller
DEBU[2019-09-20T23:06:05.336051306-04:00] GET /registry/serviceaccounts/kube-system/certificate-controller, rev=0 => rev=481, kv=true, err=<nil>
I0920 23:06:05.554248    6964 controllermanager.go:528] Started "csrsigning"
I0920 23:06:05.554331    6964 certificate_controller.go:113] Starting certificate controller
I0920 23:06:05.555650    6964 controller_utils.go:1029] Waiting for caches to sync for certificate controller
DEBU[2019-09-20T23:06:05.628662342-04:00] GET /registry/serviceaccounts/kube-system/certificate-controller, rev=0 => rev=481, kv=true, err=<nil>
I0920 23:06:05.915290    6964 cleaner.go:81] Starting CSR cleaner controller
I0920 23:06:05.915514    6964 controllermanager.go:528] Started "csrcleaner"
DEBU[2019-09-20T23:06:05.986834271-04:00] GET /registry/serviceaccounts/kube-system/ttl-controller, rev=0 => rev=481, kv=true, err=<nil>
INFO[2019-09-20T23:06:06.073353834-04:00] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused"
I0920 23:06:06.176196    6964 controllermanager.go:528] Started "ttl"
I0920 23:06:06.176281    6964 ttl_controller.go:116] Starting TTL controller
I0920 23:06:06.181160    6964 controller_utils.go:1029] Waiting for caches to sync for TTL controller
DEBU[2019-09-20T23:06:06.253762707-04:00] GET /registry/serviceaccounts/kube-system/attachdetach-controller, rev=0 => rev=481, kv=true, err=<nil>
I0920 23:06:06.487140    6964 controllermanager.go:528] Started "attachdetach"
I0920 23:06:06.488748    6964 attach_detach_controller.go:335] Starting attach detach controller
I0920 23:06:06.500757    6964 controller_utils.go:1029] Waiting for caches to sync for attach detach controller
DEBU[2019-09-20T23:06:06.575298042-04:00] GET /registry/serviceaccounts/kube-system/clusterrole-aggregation-controller, rev=0 => rev=481, kv=true, err=<nil>
I0920 23:06:06.821550    6964 controllermanager.go:528] Started "clusterrole-aggregation"
I0920 23:06:06.821615    6964 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
I0920 23:06:06.830394    6964 controller_utils.go:1029] Waiting for caches to sync for ClusterRoleAggregator controller
DEBU[2019-09-20T23:06:06.942112624-04:00] GET /registry/serviceaccounts/kube-system/resourcequota-controller, rev=0 => rev=481, kv=true, err=<nil>
INFO[2019-09-20T23:06:07.084576213-04:00] module br_netfilter was already loaded
INFO[2019-09-20T23:06:07.085129283-04:00] module overlay was already loaded
INFO[2019-09-20T23:06:07.085449750-04:00] module nf_conntrack was already loaded
DEBU[2019-09-20T23:06:07.201291370-04:00] GET /registry/services/endpoints/default/kubernetes, rev=0 => rev=481, kv=true, err=<nil>
INFO[2019-09-20T23:06:07.216003361-04:00] Connecting to proxy                           url="wss://192.168.0.5:6443/v1-k3s/connect"
INFO[2019-09-20T23:06:07.323116962-04:00] Handling backend connection request [pi3]
WARN[2019-09-20T23:06:07.354637234-04:00] Disabling CPU quotas due to missing cpu.cfs_period_us
INFO[2019-09-20T23:06:07.362069167-04:00] Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cert-dir=/var/lib/rancher/k3s/agent/kubelet/pki --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/cfe4c1b5d345ab8303068b5b17b73614db9938dae887922f5cd8e7b2b61c0fd6/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=pi3 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/systemd/user.slice/user-0.slice --node-labels=node-role.kubernetes.io/master=true --read-only-port=0 --resolv-conf=/etc/resolv.conf --root-dir=/var/lib/rancher/k3s/agent/kubelet --runtime-cgroups=/systemd/user.slice/user-0.slice --seccomp-profile-root=/var/lib/rancher/k3s/agent/kubelet/seccomp --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key
W0920 23:06:07.366750    6964 options.go:251] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/master]
W0920 23:06:07.389284    6964 options.go:252] in 1.16, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)
W0920 23:06:07.430465    6964 server.go:216] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I0920 23:06:07.506784    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
I0920 23:06:07.514931    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.extensions
I0920 23:06:07.515708    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0920 23:06:07.516452    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I0920 23:06:07.517193    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for addons.k3s.cattle.io
I0920 23:06:07.517872    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
I0920 23:06:07.518357    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I0920 23:06:07.518755    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
I0920 23:06:07.519368    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
I0920 23:06:07.519855    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for listenerconfigs.k3s.cattle.io
DEBU[2019-09-20T23:06:07.519901045-04:00] GET /registry/minions/pi3, rev=0 => rev=481, kv=true, err=<nil>
W0920 23:06:07.520032    6964 shared_informer.go:365] resyncPeriod 57733041467105 is smaller than resyncCheckPeriod 68255585192264 and the informer has already started. Changing it to 68255585192264
I0920 23:06:07.521083    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
W0920 23:06:07.521572    6964 shared_informer.go:365] resyncPeriod 45951885265167 is smaller than resyncCheckPeriod 68255585192264 and the informer has already started. Changing it to 68255585192264
I0920 23:06:07.523102    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I0920 23:06:07.523943    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
I0920 23:06:07.524544    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
I0920 23:06:07.525007    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I0920 23:06:07.525438    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I0920 23:06:07.525934    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.extensions
I0920 23:06:07.526997    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
I0920 23:06:07.576802    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
I0920 23:06:07.578402    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
I0920 23:06:07.580232    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
I0920 23:06:07.583146    6964 server.go:421] Version: v1.15.4-k3s.1
I0920 23:06:07.585366    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
I0920 23:06:07.586631    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I0920 23:06:07.588000    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io
I0920 23:06:07.591520    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I0920 23:06:07.592568    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
I0920 23:06:07.593190    6964 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0920 23:06:07.593526    6964 controllermanager.go:528] Started "resourcequota"
W0920 23:06:07.594157    6964 options.go:251] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/master]
W0920 23:06:07.594581    6964 options.go:252] in 1.16, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)
I0920 23:06:07.599794    6964 plugins.go:103] No cloud provider specified.
I0920 23:06:07.600673    6964 resource_quota_controller.go:271] Starting resource quota controller
I0920 23:06:07.600859    6964 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
I0920 23:06:07.601678    6964 resource_quota_monitor.go:303] QuotaMonitor running
I0920 23:06:07.697307    6964 flannel.go:89] Determining IP address of default interface
I0920 23:06:07.704806    6964 flannel.go:102] Using interface with name eth0 and address 192.168.0.5
I0920 23:06:07.826310    6964 kube.go:127] Waiting 10m0s for node controller to sync
DEBU[2019-09-20T23:06:07.835682614-04:00] FIXME: Got an status-code for which error does not match any expected type!!!: -1  module=api status_code=-1
DEBU[2019-09-20T23:06:07.837203125-04:00] FIXME: Got an status-code for which error does not match any expected type!!!: -1  module=api status_code=-1
I0920 23:06:07.881399    6964 kube.go:306] Starting kube subnet manager
E0920 23:06:07.882908    6964 machine.go:288] failed to get cache information for node 0: open /sys/devices/system/cpu/cpu0/cache: no such file or directory
DEBU[2019-09-20T23:06:07.902111209-04:00] FIXME: Got an status-code for which error does not match any expected type!!!: -1  module=api status_code=-1
DEBU[2019-09-20T23:06:07.903148286-04:00] FIXME: Got an status-code for which error does not match any expected type!!!: -1  module=api status_code=-1
I0920 23:06:07.904541    6964 server.go:658] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I0920 23:06:07.907159    6964 container_manager_linux.go:277] container manager verified user specified cgroup-root exists: []
I0920 23:06:07.907330    6964 container_manager_linux.go:282] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/user.slice/user-0.slice SystemCgroupsName: KubeletCgroupsName:/systemd/user.slice/user-0.slice ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/rancher/k3s/agent/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms}
I0920 23:06:07.908231    6964 container_manager_linux.go:302] Creating device plugin manager: true
I0920 23:06:07.908615    6964 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0920 23:06:07.917122    6964 state_mem.go:84] [cpumanager] updated default cpuset: ""
I0920 23:06:07.917258    6964 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
DEBU[2019-09-20T23:06:07.917123667-04:00] GET /registry/serviceaccounts/kube-system/replication-controller, rev=0 => rev=481, kv=true, err=<nil>
I0920 23:06:07.917844    6964 kubelet.go:307] Watching apiserver
I0920 23:06:08.230452    6964 kuberuntime_manager.go:205] Container runtime containerd initialized, version: v1.2.8-k3s.1, apiVersion: v1alpha2
I0920 23:06:08.262726    6964 server.go:1080] Started kubelet
I0920 23:06:08.292963    6964 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
I0920 23:06:08.293147    6964 status_manager.go:152] Starting to sync pod status with apiserver
I0920 23:06:08.293250    6964 kubelet.go:1809] Starting kubelet main sync loop.
I0920 23:06:08.293334    6964 kubelet.go:1826] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0920 23:06:08.293833    6964 server.go:144] Starting to listen on 0.0.0.0:10250
I0920 23:06:08.308278    6964 volume_manager.go:243] Starting Kubelet Volume Manager
I0920 23:06:08.340889    6964 desired_state_of_world_populator.go:130] Desired state populator starts to run
I0920 23:06:08.342164    6964 server.go:350] Adding debug handlers to kubelet server.
E0920 23:06:08.346535    6964 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
E0920 23:06:08.346731    6964 kubelet.go:1294] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
I0920 23:06:08.412733    6964 kuberuntime_manager.go:928] updating runtime config through cri with podcidr 10.42.0.0/24
I0920 23:06:08.413824    6964 kubelet.go:1826] skipping pod synchronization - container runtime status check may not have completed yet
I0920 23:06:08.414020    6964 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
DEBU[2019-09-20T23:06:08.417098929-04:00] FIXME: Got an status-code for which error does not match any expected type!!!: -1  module=api status_code=-1
DEBU[2019-09-20T23:06:08.417579551-04:00] FIXME: Got an status-code for which error does not match any expected type!!!: -1  module=api status_code=-1
I0920 23:06:08.417957    6964 controllermanager.go:528] Started "replicationcontroller"
I0920 23:06:08.418534    6964 replica_set.go:182] Starting replicationcontroller controller
I0920 23:06:08.418595    6964 controller_utils.go:1029] Waiting for caches to sync for ReplicationController controller
I0920 23:06:08.421635    6964 kubelet_network.go:77] Setting Pod CIDR:  -> 10.42.0.0/24
I0920 23:06:08.431607    6964 kubelet_node_status.go:72] Attempting to register node pi3
DEBU[2019-09-20T23:06:08.465561386-04:00] FIXME: Got an status-code for which error does not match any expected type!!!: -1  module=api status_code=-1
W0920 23:06:08.468611    6964 nvidia.go:66] Error reading "/sys/bus/pci/devices/": open /sys/bus/pci/devices/: no such file or directory
I0920 23:06:08.652937    6964 kubelet.go:1826] skipping pod synchronization - container runtime status check may not have completed yet
DEBU[2019-09-20T23:06:08.777042305-04:00] GET /registry/serviceaccounts/kube-system/service-account-controller, rev=0 => rev=481, kv=true, err=<nil>
DEBU[2019-09-20T23:06:08.835144494-04:00] CREATE /registry/events/default/pi3.15c6552c245f1ad6, size=224, lease=3660 => rev=482, err=<nil>
DEBU[2019-09-20T23:06:08.876412257-04:00] TRIGGERED /registry/events/default/pi3.15c6552c245f1ad6, revision=482, delete=false
I0920 23:06:08.979958    6964 kube.go:134] Node controller sync successful
I0920 23:06:08.981109    6964 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
DEBU[2019-09-20T23:06:08.999978047-04:00] GET /registry/leases/kube-node-lease/pi3, rev=0 => rev=482, kv=true, err=<nil>
I0920 23:06:09.055278    6964 kubelet.go:1826] skipping pod synchronization - container runtime status check may not have completed yet
W0920 23:06:09.346737    6964 admission.go:361] node "pi3" added disallowed labels on node creation: node-role.kubernetes.io/master
DEBU[2019-09-20T23:06:09.408314818-04:00] CREATE /registry/minions/pi3, size=1383, lease=0 => rev=482, err=key exists
DEBU[2019-09-20T23:06:09.421008281-04:00] DELETE /registry/masterleases/192.168.0.5, rev=478 => rev=482, kv=true, deleted=false, err=<nil>
DEBU[2019-09-20T23:06:09.466419247-04:00] CREATE /registry/events/default/pi3.15c6552c2965ad9b, size=257, lease=3660 => rev=483, err=<nil>
DEBU[2019-09-20T23:06:09.511275425-04:00] TRIGGERED /registry/events/default/pi3.15c6552c2965ad9b, revision=483, delete=false
I0920 23:06:09.526905    6964 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
DEBU[2019-09-20T23:06:09.533873669-04:00] GET /registry/minions/pi3, rev=0 => rev=483, kv=true, err=<nil>
DEBU[2019-09-20T23:06:09.541064978-04:00] GET /registry/minions/pi3, rev=0 => rev=483, kv=true, err=<nil>
DEBU[2019-09-20T23:06:09.546273748-04:00] UPDATE /registry/leases/kube-node-lease/pi3, value=209, rev=472, lease=0 => rev=484, kv=true, updated=true, err=<nil>
DEBU[2019-09-20T23:06:09.554707029-04:00] UPDATE /registry/minions/pi3, value=1649, rev=475, lease=0 => rev=485, kv=true, updated=true, err=<nil>
DEBU[2019-09-20T23:06:09.558388985-04:00] TRIGGERED /registry/leases/kube-node-lease/pi3, revision=484, delete=false
DEBU[2019-09-20T23:06:09.558562734-04:00] TRIGGERED /registry/minions/pi3, revision=485, delete=false
DEBU[2019-09-20T23:06:09.559466113-04:00] WATCH READ id=22, key=/registry/leases/kube-node-lease/pi3, revision=484
DEBU[2019-09-20T23:06:09.560177932-04:00] WATCH READ id=11, key=/registry/minions/pi3, revision=485
DEBU[2019-09-20T23:06:09.560549544-04:00] WATCH READ id=23, key=/registry/leases/kube-node-lease/pi3, revision=484
I0920 23:06:09.571464    6964 server_others.go:144] Using iptables Proxier.
I0920 23:06:09.604071    6964 controllermanager.go:528] Started "serviceaccount"
I0920 23:06:09.605031    6964 serviceaccounts_controller.go:117] Starting service account controller
I0920 23:06:09.605136    6964 controller_utils.go:1029] Waiting for caches to sync for service account controller
I0920 23:06:09.622480    6964 server.go:534] Version: v1.15.4-k3s.1
DEBU[2019-09-20T23:06:09.705536503-04:00] CREATE /registry/events/default/pi3.15c6552c2e734f01, size=269, lease=3660 => rev=486, err=<nil>
DEBU[2019-09-20T23:06:09.709085179-04:00] TRIGGERED /registry/events/default/pi3.15c6552c2e734f01, revision=486, delete=false
I0920 23:06:09.720979    6964 conntrack.go:52] Setting nf_conntrack_max to 131072
I0920 23:06:09.724770    6964 config.go:187] Starting service config controller
I0920 23:06:09.724892    6964 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0920 23:06:09.725073    6964 config.go:96] Starting endpoints config controller
I0920 23:06:09.725138    6964 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
DEBU[2019-09-20T23:06:09.750278619-04:00] GET /registry/minions/pi3, rev=0 => rev=486, kv=true, err=<nil>
I0920 23:06:09.752112    6964 flannel.go:75] Wrote subnet file to /run/flannel/subnet.env
I0920 23:06:09.752225    6964 flannel.go:79] Running backend.
I0920 23:06:09.752300    6964 vxlan_network.go:60] watching for new subnet leases
I0920 23:06:09.818640    6964 kubelet_node_status.go:114] Node pi3 was previously registered
I0920 23:06:09.861790    6964 kubelet.go:1826] skipping pod synchronization - container runtime status check may not have completed yet
I0920 23:06:09.899942    6964 kubelet_node_status.go:75] Successfully registered node pi3
DEBU[2019-09-20T23:06:09.969394854-04:00] GET /registry/serviceaccounts/kube-system/generic-garbage-collector, rev=0 => rev=486, kv=true, err=<nil>
DEBU[2019-09-20T23:06:09.995811824-04:00] CREATE /registry/events/default/pi3.15c6552c7b642dea, size=230, lease=3660 => rev=487, err=<nil>
DEBU[2019-09-20T23:06:10.032018265-04:00] TRIGGERED /registry/events/default/pi3.15c6552c7b642dea, revision=487, delete=false
DEBU[2019-09-20T23:06:10.042421637-04:00] CREATE /registry/events/default/pi3.15c6552c2e73cab3, size=265, lease=3660 => rev=488, err=<nil>
DEBU[2019-09-20T23:06:10.083042529-04:00] TRIGGERED /registry/events/default/pi3.15c6552c2e73cab3, revision=488, delete=false
I0920 23:06:10.127002    6964 cpu_manager.go:155] [cpumanager] starting with none policy
I0920 23:06:10.127164    6964 cpu_manager.go:156] [cpumanager] reconciling every 10s
I0920 23:06:10.127261    6964 policy_none.go:42] [cpumanager] none policy: Start
I0920 23:06:10.221278    6964 plugin_manager.go:116] Starting Kubelet Plugin Manager
DEBU[2019-09-20T23:06:10.257741542-04:00] FIXME: Got an status-code for which error does not match any expected type!!!: -1  module=api status_code=-1
DEBU[2019-09-20T23:06:10.258856275-04:00] FIXME: Got an status-code for which error does not match any expected type!!!: -1  module=api status_code=-1
DEBU[2019-09-20T23:06:10.379288074-04:00] CREATE /registry/events/default/pi3.15c6552c2e747c50, size=263, lease=3660 => rev=489, err=<nil>
DEBU[2019-09-20T23:06:10.382579668-04:00] TRIGGERED /registry/events/default/pi3.15c6552c2e747c50, revision=489, delete=false
I0920 23:06:10.484129    6964 controller_utils.go:1036] Caches are synced for service config controller
I0920 23:06:10.485070    6964 controller_utils.go:1036] Caches are synced for endpoints config controller
DEBU[2019-09-20T23:06:10.628026572-04:00] UPDATE /registry/minions/pi3, value=1658, rev=485, lease=0 => rev=490, kv=true, updated=true, err=<nil>
DEBU[2019-09-20T23:06:10.639547802-04:00] TRIGGERED /registry/minions/pi3, revision=490, delete=false
DEBU[2019-09-20T23:06:10.653885213-04:00] WATCH READ id=11, key=/registry/minions/pi3, revision=490
DEBU[2019-09-20T23:06:10.657693678-04:00] CREATE /registry/events/default/pi3.15c6552c942d5f0a, size=264, lease=3660 => rev=491, err=<nil>
DEBU[2019-09-20T23:06:10.704499323-04:00] TRIGGERED /registry/events/default/pi3.15c6552c942d5f0a, revision=491, delete=false
DEBU[2019-09-20T23:06:10.827288400-04:00] CREATE /registry/events/default/pi3.15c6552c9f7ef109, size=241, lease=3660 => rev=492, err=<nil>
DEBU[2019-09-20T23:06:10.837398233-04:00] TRIGGERED /registry/events/default/pi3.15c6552c9f7ef109, revision=492, delete=false
DEBU[2019-09-20T23:06:10.844449386-04:00] FIXME: Got an status-code for which error does not match any expected type!!!: -1  module=api status_code=-1
DEBU[2019-09-20T23:06:10.845879325-04:00] FIXME: Got an status-code for which error does not match any expected type!!!: -1  module=api status_code=-1
DEBU[2019-09-20T23:06:10.853991098-04:00] GET /registry/serviceaccounts/kube-system/generic-garbage-collector, rev=0 => rev=492, kv=true, err=<nil>
W0920 23:06:11.574610    6964 pod_container_deletor.go:75] Container "e4eba9a9785fbef0d330bd4b1e253214e05f896a4fc5adfe22e1118a2052e21c" not found in pod's containers
W0920 23:06:11.575082    6964 pod_container_deletor.go:75] Container "b716d3b9dc617c828c94324926904854253228beebcca54e65c2e362d02e1ac2" not found in pod's containers
I0920 23:06:11.607289    6964 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/67a19a26-3d51-4449-906e-68bd60deb48e-config-volume") pod "coredns-66f496764-7p5zb" (UID: "67a19a26-3d51-4449-906e-68bd60deb48e")
I0920 23:06:11.608713    6964 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-kjbhj" (UniqueName: "kubernetes.io/secret/67a19a26-3d51-4449-906e-68bd60deb48e-coredns-token-kjbhj") pod "coredns-66f496764-7p5zb" (UID: "67a19a26-3d51-4449-906e-68bd60deb48e")
I0920 23:06:11.611509    6964 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "helm-traefik-token-mtsmf" (UniqueName: "kubernetes.io/secret/2258bc2d-dfc1-41e8-b47a-6e7544ffbdb6-helm-traefik-token-mtsmf") pod "helm-install-traefik-qkh25" (UID: "2258bc2d-dfc1-41e8-b47a-6e7544ffbdb6")
DEBU[2019-09-20T23:06:11.694682355-04:00] GET /registry/pods/kube-system/helm-install-traefik-qkh25, rev=0 => rev=492, kv=true, err=<nil>
I0920 23:06:11.754623    6964 reconciler.go:150] Reconciler: start to sync state
DEBU[2019-09-20T23:06:12.045109176-04:00] GET /registry/pods/kube-system/helm-install-traefik-qkh25, rev=0 => rev=492, kv=true, err=<nil>
DEBU[2019-09-20T23:06:12.115249050-04:00] GET /registry/pods/kube-system/coredns-66f496764-7p5zb, rev=0 => rev=492, kv=true, err=<nil>
DEBU[2019-09-20T23:06:12.264361712-04:00] GET /registry/pods/kube-system/coredns-66f496764-7p5zb, rev=0 => rev=492, kv=true, err=<nil>
DEBU[2019-09-20T23:06:12.332389152-04:00] Wrote ping
DEBU[2019-09-20T23:06:13.312615006-04:00] GET /registry/namespaces/default, rev=0 => rev=492, kv=true, err=<nil>
DEBU[2019-09-20T23:06:13.330618800-04:00] GET /registry/services/specs/default/kubernetes, rev=0 => rev=492, kv=true, err=<nil>
DEBU[2019-09-20T23:06:13.366298525-04:00] GET /registry/masterleases/192.168.0.5, rev=0 => rev=492, kv=true, err=<nil>
DEBU[2019-09-20T23:06:13.386015225-04:00] UPDATE /registry/masterleases/192.168.0.5, value=66, rev=481, lease=15 => rev=493, kv=true, updated=true, err=<nil>
DEBU[2019-09-20T23:06:13.390458113-04:00] TRIGGERED /registry/masterleases/192.168.0.5, revision=493, delete=false
DEBU[2019-09-20T23:06:13.406816605-04:00] GET /registry/services/endpoints/default/kubernetes, rev=0 => rev=493, kv=true, err=<nil>
DEBU[2019-09-20T23:06:13.427374185-04:00] LIST /registry/masterleases/, start=/registry/masterleases/, limit=0, rev=0 => rev=493, kvs=1, err=<nil>
DEBU[2019-09-20T23:06:13.444374599-04:00] CREATE /registry/events/kube-system/coredns-66f496764-7p5zb.15c6552d52b38531, size=359, lease=3660 => rev=494, err=<nil>
DEBU[2019-09-20T23:06:13.447835828-04:00] TRIGGERED /registry/events/kube-system/coredns-66f496764-7p5zb.15c6552d52b38531, revision=494, delete=false
DEBU[2019-09-20T23:06:13.542911432-04:00] CREATE /registry/events/kube-system/helm-install-traefik-qkh25.15c6552d5a4f9d15, size=368, lease=3660 => rev=495, err=<nil>
DEBU[2019-09-20T23:06:13.546566669-04:00] TRIGGERED /registry/events/kube-system/helm-install-traefik-qkh25.15c6552d5a4f9d15, revision=495, delete=false
I0920 23:06:14.392725    6964 controllermanager.go:528] Started "garbagecollector"
I0920 23:06:14.394958    6964 garbagecollector.go:128] Starting garbage collector controller
I0920 23:06:14.395920    6964 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
I0920 23:06:14.396632    6964 graph_builder.go:280] GraphBuilder running
DEBU[2019-09-20T23:06:14.518793720-04:00] GET /registry/serviceaccounts/kube-system/daemon-set-controller, rev=0 => rev=495, kv=true, err=<nil>
I0920 23:06:14.706373    6964 controllermanager.go:528] Started "daemonset"
I0920 23:06:14.706457    6964 daemon_controller.go:267] Starting daemon sets controller
I0920 23:06:14.707338    6964 controller_utils.go:1029] Waiting for caches to sync for daemon sets controller
DEBU[2019-09-20T23:06:14.793792158-04:00] GET /registry/serviceaccounts/kube-system/statefulset-controller, rev=0 => rev=495, kv=true, err=<nil>
DEBU[2019-09-20T23:06:14.923767700-04:00] DELETE /registry/masterleases/192.168.0.5, rev=479 => rev=495, kv=true, deleted=false, err=<nil>
DEBU[2019-09-20T23:06:14.963169745-04:00] DELETE /registry/masterleases/192.168.0.5, rev=480 => rev=495, kv=true, deleted=false, err=<nil>
I0920 23:06:15.000702    6964 controllermanager.go:528] Started "statefulset"
I0920 23:06:15.000794    6964 stateful_set.go:145] Starting stateful set controller
I0920 23:06:15.001131    6964 controller_utils.go:1029] Waiting for caches to sync for stateful set controller
DEBU[2019-09-20T23:06:15.076321538-04:00] GET /registry/serviceaccounts/kube-system/expand-controller, rev=0 => rev=495, kv=true, err=<nil>
I0920 23:06:15.252611    6964 controllermanager.go:528] Started "persistentvolume-expander"
I0920 23:06:15.252697    6964 expand_controller.go:300] Starting expand controller
I0920 23:06:15.253021    6964 controller_utils.go:1029] Waiting for caches to sync for expand controller
DEBU[2019-09-20T23:06:15.328609701-04:00] GET /registry/serviceaccounts/kube-system/endpoint-controller, rev=0 => rev=495, kv=true, err=<nil>
I0920 23:06:15.512284    6964 controllermanager.go:528] Started "endpoint"
I0920 23:06:15.513293    6964 endpoints_controller.go:166] Starting endpoint controller
I0920 23:06:15.513394    6964 controller_utils.go:1029] Waiting for caches to sync for endpoint controller
DEBU[2019-09-20T23:06:15.587000587-04:00] GET /registry/serviceaccounts/kube-system/disruption-controller, rev=0 => rev=495, kv=true, err=<nil>
DEBU[2019-09-20T23:06:15.838199642-04:00] GET /registry/serviceaccounts/kube-system/disruption-controller, rev=0 => rev=495, kv=true, err=<nil>
I0920 23:06:15.993029    6964 controllermanager.go:528] Started "disruption"
I0920 23:06:15.993078    6964 disruption.go:333] Starting disruption controller
I0920 23:06:15.993176    6964 controller_utils.go:1029] Waiting for caches to sync for disruption controller
DEBU[2019-09-20T23:06:16.029340011-04:00] GET /registry/serviceaccounts/kube-system/node-controller, rev=0 => rev=495, kv=true, err=<nil>
I0920 23:06:16.123582    6964 node_lifecycle_controller.go:290] Sending events to api server.
I0920 23:06:16.125540    6964 node_lifecycle_controller.go:323] Controller is using taint based evictions.
I0920 23:06:16.126135    6964 taint_manager.go:175] Sending events to api server.
I0920 23:06:16.129689    6964 node_lifecycle_controller.go:388] Controller will reconcile labels.
I0920 23:06:16.130235    6964 node_lifecycle_controller.go:401] Controller will taint node by condition.
I0920 23:06:16.130811    6964 node_lifecycle_controller.go:425] Starting node controller
I0920 23:06:16.131012    6964 controller_utils.go:1029] Waiting for caches to sync for taint controller
I0920 23:06:16.131268    6964 controllermanager.go:528] Started "nodelifecycle"
DEBU[2019-09-20T23:06:16.170562618-04:00] GET /registry/serviceaccounts/kube-system/deployment-controller, rev=0 => rev=495, kv=true, err=<nil>
I0920 23:06:16.282422    6964 controllermanager.go:528] Started "deployment"
W0920 23:06:16.282539    6964 controllermanager.go:520] Skipping "root-ca-cert-publisher"
W0920 23:06:16.282584    6964 controllermanager.go:507] "tokencleaner" is disabled
I0920 23:06:16.283127    6964 deployment_controller.go:152] Starting deployment controller
I0920 23:06:16.283171    6964 controller_utils.go:1029] Waiting for caches to sync for deployment controller
DEBU[2019-09-20T23:06:16.320876106-04:00] GET /registry/serviceaccounts/kube-system/horizontal-pod-autoscaler, rev=0 => rev=495, kv=true, err=<nil>
DEBU[2019-09-20T23:06:16.450384411-04:00] GET /registry/serviceaccounts/kube-system/horizontal-pod-autoscaler, rev=0 => rev=495, kv=true, err=<nil>
DEBU[2019-09-20T23:06:16.582347753-04:00] GET /registry/serviceaccounts/kube-system/horizontal-pod-autoscaler, rev=0 => rev=495, kv=true, err=<nil>
DEBU[2019-09-20T23:06:16.707747126-04:00] GET /registry/serviceaccounts/kube-system/horizontal-pod-autoscaler, rev=0 => rev=495, kv=true, err=<nil>
I0920 23:06:16.821825    6964 controllermanager.go:528] Started "horizontalpodautoscaling"
I0920 23:06:16.822572    6964 horizontal.go:156] Starting HPA controller
I0920 23:06:16.822640    6964 controller_utils.go:1029] Waiting for caches to sync for HPA controller
DEBU[2019-09-20T23:06:16.865121299-04:00] GET /registry/serviceaccounts/kube-system/node-controller, rev=0 => rev=495, kv=true, err=<nil>
I0920 23:06:17.007951    6964 node_lifecycle_controller.go:77] Sending events to api server
E0920 23:06:17.008228    6964 core.go:160] failed to start cloud node lifecycle controller: no cloud provider provided
W0920 23:06:17.008337    6964 controllermanager.go:520] Skipping "cloud-node-lifecycle"
DEBU[2019-09-20T23:06:17.045088300-04:00] GET /registry/serviceaccounts/kube-system/persistent-volume-binder, rev=0 => rev=495, kv=true, err=<nil>
I0920 23:06:17.140339    6964 controllermanager.go:528] Started "persistentvolume-binder"
I0920 23:06:17.140414    6964 pv_controller_base.go:282] Starting persistent volume controller
I0920 23:06:17.140586    6964 controller_utils.go:1029] Waiting for caches to sync for persistent volume controller
DEBU[2019-09-20T23:06:17.176280709-04:00] GET /registry/serviceaccounts/kube-system/pv-protection-controller, rev=0 => rev=495, kv=true, err=<nil>
I0920 23:06:17.271355    6964 controllermanager.go:528] Started "pv-protection"
I0920 23:06:17.271421    6964 pv_protection_controller.go:82] Starting PV protection controller
I0920 23:06:17.271465    6964 controller_utils.go:1029] Waiting for caches to sync for PV protection controller
DEBU[2019-09-20T23:06:17.314042817-04:00] GET /registry/serviceaccounts/kube-system/replicaset-controller, rev=0 => rev=495, kv=true, err=<nil>
DEBU[2019-09-20T23:06:17.326726384-04:00] Wrote ping
I0920 23:06:17.527792    6964 controllermanager.go:528] Started "replicaset"
I0920 23:06:17.528598    6964 replica_set.go:182] Starting replicaset controller
I0920 23:06:17.528747    6964 controller_utils.go:1029] Waiting for caches to sync for ReplicaSet controller
DEBU[2019-09-20T23:06:17.637187934-04:00] GET /registry/serviceaccounts/kube-system/certificate-controller, rev=0 => rev=495, kv=true, err=<nil>
E0920 23:06:17.926254    6964 prometheus.go:55] failed to register depth metric certificate: duplicate metrics collector registration attempted
E0920 23:06:17.926554    6964 prometheus.go:68] failed to register adds metric certificate: duplicate metrics collector registration attempted
E0920 23:06:17.926813    6964 prometheus.go:82] failed to register latency metric certificate: duplicate metrics collector registration attempted
E0920 23:06:17.927067    6964 prometheus.go:96] failed to register workDuration metric certificate: duplicate metrics collector registration attempted
E0920 23:06:17.927388    6964 prometheus.go:112] failed to register unfinished metric certificate: duplicate metrics collector registration attempted
E0920 23:06:17.927621    6964 prometheus.go:126] failed to register unfinished metric certificate: duplicate metrics collector registration attempted
E0920 23:06:17.927869    6964 prometheus.go:152] failed to register depth metric certificate: duplicate metrics collector registration attempted
E0920 23:06:17.928071    6964 prometheus.go:164] failed to register adds metric certificate: duplicate metrics collector registration attempted
E0920 23:06:17.928687    6964 prometheus.go:176] failed to register latency metric certificate: duplicate metrics collector registration attempted
E0920 23:06:17.931035    6964 prometheus.go:188] failed to register work_duration metric certificate: duplicate metrics collector registration attempted
E0920 23:06:17.932118    6964 prometheus.go:203] failed to register unfinished_work_seconds metric certificate: duplicate metrics collector registration attempted
E0920 23:06:17.932455    6964 prometheus.go:216] failed to register longest_running_processor_microseconds metric certificate: duplicate metrics collector registration attempted
E0920 23:06:17.933070    6964 prometheus.go:139] failed to register retries metric certificate: duplicate metrics collector registration attempted
E0920 23:06:17.933315    6964 prometheus.go:228] failed to register retries metric certificate: duplicate metrics collector registration attempted
I0920 23:06:17.933866    6964 controllermanager.go:528] Started "csrapproving"
W0920 23:06:17.933976    6964 controllermanager.go:507] "bootstrapsigner" is disabled
I0920 23:06:17.934825    6964 certificate_controller.go:113] Starting certificate controller
I0920 23:06:17.934918    6964 controller_utils.go:1029] Waiting for caches to sync for certificate controller
DEBU[2019-09-20T23:06:18.023040419-04:00] GET /registry/serviceaccounts/kube-system/node-controller, rev=0 => rev=495, kv=true, err=<nil>
I0920 23:06:18.239911    6964 node_ipam_controller.go:93] Sending events to api server.
DEBU[2019-09-20T23:06:18.370844391-04:00] DELETE /registry/masterleases/192.168.0.5, rev=481 => rev=495, kv=true, deleted=false, err=<nil>
DEBU[2019-09-20T23:06:18.546642980-04:00] Skipping existing deployment of /var/lib/rancher/k3s/server/manifests/coredns.yaml, check=true, checksum 1b6280664c333faeffce72d97a41c2ba58321018439c9b0e7e7eb2165ac74a96=1b6280664c333faeffce72d97a41c2ba58321018439c9b0e7e7eb2165ac74a96
DEBU[2019-09-20T23:06:18.547020322-04:00] Skipping existing deployment of /var/lib/rancher/k3s/server/manifests/rolebindings.yaml, check=true, checksum 6782759f8d0396f976d00ee4c5ca8a8dc95f31807fc308689037a4c2f7359555=6782759f8d0396f976d00ee4c5ca8a8dc95f31807fc308689037a4c2f7359555
DEBU[2019-09-20T23:06:18.547320268-04:00] Skipping existing deployment of /var/lib/rancher/k3s/server/manifests/traefik.yaml, check=true, checksum 08543bf3c056a1f03afda0d88a6f476a864c2186df49890212d8936e28b3917b=08543bf3c056a1f03afda0d88a6f476a864c2186df49890212d8936e28b3917b
DEBU[2019-09-20T23:06:19.728099557-04:00] GET /registry/leases/kube-node-lease/pi3, rev=0 => rev=495, kv=true, err=<nil>
DEBU[2019-09-20T23:06:19.820798405-04:00] UPDATE /registry/leases/kube-node-lease/pi3, value=210, rev=484, lease=0 => rev=496, kv=true, updated=true, err=<nil>
DEBU[2019-09-20T23:06:19.823918177-04:00] TRIGGERED /registry/leases/kube-node-lease/pi3, revision=496, delete=false
DEBU[2019-09-20T23:06:19.824611975-04:00] WATCH READ id=23, key=/registry/leases/kube-node-lease/pi3, revision=496
DEBU[2019-09-20T23:06:19.824688016-04:00] WATCH READ id=22, key=/registry/leases/kube-node-lease/pi3, revision=496
E0920 23:06:20.644670    6964 remote_runtime.go:495] ListContainerStats with filter &ContainerStatsFilter{Id:,PodSandboxId:,LabelSelector:map[string]string{},} from runtime service failed: rpc error: code = Unavailable desc = transport is closing
E0920 23:06:20.645474    6964 remote_image.go:113] PullImage "coredns/coredns:1.6.3" from image service failed: rpc error: code = Unavailable desc = transport is closing
E0920 23:06:20.645915    6964 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to list pod stats: failed to list all container stats: rpc error: code = Unavailable desc = transport is closing
E0920 23:06:20.646182    6964 kuberuntime_image.go:51] Pull image "coredns/coredns:1.6.3" failed: rpc error: code = Unavailable desc = transport is closing
E0920 23:06:20.646683    6964 kuberuntime_manager.go:779] container start failed: ErrImagePull: rpc error: code = Unavailable desc = transport is closing
E0920 23:06:20.647392    6964 pod_workers.go:190] Error syncing pod 67a19a26-3d51-4449-906e-68bd60deb48e ("coredns-66f496764-7p5zb_kube-system(67a19a26-3d51-4449-906e-68bd60deb48e)"), skipping: failed to "StartContainer" for "coredns" with ErrImagePull: "rpc error: code = Unavailable desc = transport is closing"
E0920 23:06:20.646664    6964 remote_image.go:113] PullImage "rancher/klipper-helm:v0.1.5" from image service failed: rpc error: code = Unavailable desc = transport is closing
E0920 23:06:20.647920    6964 kuberuntime_image.go:51] Pull image "rancher/klipper-helm:v0.1.5" failed: rpc error: code = Unavailable desc = transport is closing
E0920 23:06:20.648382    6964 kuberuntime_manager.go:779] container start failed: ErrImagePull: rpc error: code = Unavailable desc = transport is closing
E0920 23:06:20.649173    6964 pod_workers.go:190] Error syncing pod 2258bc2d-dfc1-41e8-b47a-6e7544ffbdb6 ("helm-install-traefik-qkh25_kube-system(2258bc2d-dfc1-41e8-b47a-6e7544ffbdb6)"), skipping: failed to "StartContainer" for "helm" with ErrImagePull: "rpc error: code = Unavailable desc = transport is closing"
containerd: exit status 2
nickbp commented 5 years ago

I'm also seeing the master crash-looping on Raspbian with containerd: exit status 2 after wiping and installing 0.9.0. Seems to be a regression specific to 0.9.0. Have you tried 0.8.1? After wiping (again) and installing 0.8.1 I had no problems.

I think the failed containerd process is:

root      1387 74.0  1.5 887012 63428 ?        Sl   15:45   0:00 containerd -c /mnt/system/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /mnt/system/k3s/agent/containerd

I'm only seeing crash loops on the master node. The agent nodes meanwhile run fine, except for logging the disconnect when the master crashes.

blake commented 5 years ago

I'm also seeing similar errors as @mzac and @nickbp. Definitely seems to be a regression in 0.9.0.

k0m0d0 commented 5 years ago

Same error on a fresh install of 0.9.0 on Raspbian

radicand commented 5 years ago

I upgraded in-place from 0.8.1 last night and had the same issue, downgraded back to 0.8.1 and everything is groovy again. No need to wipe if you upgraded and have issues, just use the install options to go back to 0.8.1 for now.

mzac commented 5 years ago

This is probably related to #60 and #750

nickbp commented 5 years ago

Ran the server standalone to reproduce the error. One of those tickets mentioned a separate containerd log so I wanted to take a look.

Steps were:

  1. Pick a DATA_DIR location, in this case I used /mnt/system/k3s9.
  2. Download the k3s-armhf 0.9.0 binary to $DATA_DIR/bin/k3s: curl -L -o $DATA_DIR/bin/k3s https://github.com/rancher/k3s/releases/download/v0.9.0/k3s-armhf
  3. Run the server: K3S_NODE_NAME=k3s9 $DATA_DIR/bin/k3s server --data-dir $DATA_DIR
  4. Run run.sh and observe that the server process exits with containerd: exit status 2
  5. Look at the content of $DATA_DIR/agent/containerd/containerd.log and find the following:
    
    time="2019-09-22T08:15:58.843010068+12:00" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/coredns/coredns:1.6.3,Labels:map[string]string{io.cri-containerd.image: managed,},}"
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x1 pc=0x24af7e0]

goroutine 530 [running]:
github.com/rancher/k3s/vendor/github.com/containerd/cgroups.(*memoryController).Stat(0x64caa20, 0x6484f80, 0x7c, 0x6550660, 0x0, 0x0)
    /go/src/github.com/rancher/k3s/vendor/github.com/containerd/cgroups/memory.go:162 +0x3a0
github.com/rancher/k3s/vendor/github.com/containerd/cgroups.(*cgroup).Stat.func1(0x64feb14, 0xa6c286d8, 0x64caa20, 0x6484f80, 0x7c, 0x6550660, 0x6967a40, 0x5e69580)
    /go/src/github.com/rancher/k3s/vendor/github.com/containerd/cgroups/cgroup.go:277 +0x60
created by github.com/rancher/k3s/vendor/github.com/containerd/cgroups.(*cgroup).Stat
    /go/src/github.com/rancher/k3s/vendor/github.com/containerd/cgroups/cgroup.go:275 +0x2ec
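The five steps above can be condensed into a small shell sketch. This is illustrative only: the `DATA_DIR` default follows the comment's layout, and the `check_panic` helper is a hypothetical convenience for step 5, not part of k3s.

```shell
#!/bin/sh
# Repro sketch for the k3s 0.9.0 containerd segfault (steps 1-5 above).
check_panic() {
    # $1: k3s data dir; log path follows the layout described above
    log="$1/agent/containerd/containerd.log"
    if grep -q 'panic: runtime error' "$log" 2>/dev/null; then
        echo "segfault recorded in $log"
    else
        echo "no panic found in $log"
    fi
}

DATA_DIR="${DATA_DIR:-/mnt/system/k3s9}"
# Steps 2-3 (manual, and will crash on an affected Raspbian host):
#   curl -L -o "$DATA_DIR/bin/k3s" \
#     https://github.com/rancher/k3s/releases/download/v0.9.0/k3s-armhf
#   chmod +x "$DATA_DIR/bin/k3s"
#   K3S_NODE_NAME=k3s9 "$DATA_DIR/bin/k3s" server --data-dir "$DATA_DIR"
# Steps 4-5: after "containerd: exit status 2", look for the panic signature:
check_panic "$DATA_DIR"
```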


Attached logs:
- k3s server process with containerd error: [k3s-0.9.0.log](https://github.com/rancher/k3s/files/3639054/k3s-0.9.0.log)
- containerd process with segfault on cgroup Stat call: [containerd-0.9.0.log](https://github.com/rancher/k3s/files/3639057/containerd-0.9.0.log)

System info:

root@pi-04:~# uname -a
Linux pi-04 4.19.73-v7l+ #1267 SMP Fri Sep 20 18:12:09 BST 2019 armv7l GNU/Linux

root@pi-04:~# cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"

edit to add:

root@pi-04:~# lsmod
Module                  Size  Used by
xt_multiport           16384  1
veth                   24576  0
nf_conntrack_netlink   40960  0
xt_nat                 16384  10
ipt_REJECT             16384  0
nf_reject_ipv4         16384  1 ipt_REJECT
xt_tcpudp              16384  38
xt_addrtype            16384  3
nft_chain_nat_ipv6     16384  4
nf_nat_ipv6            20480  1 nft_chain_nat_ipv6
xt_conntrack           16384  6
nf_tables             122880  1 nft_chain_nat_ipv6
nfnetlink              16384  2 nf_conntrack_netlink,nf_tables
ipt_MASQUERADE         16384  6
xt_comment             16384  65
iptable_filter         16384  1
xt_mark                16384  7
iptable_nat            16384  2
nf_nat_ipv4            16384  2 ipt_MASQUERADE,iptable_nat
nf_nat                 36864  3 xt_nat,nf_nat_ipv6,nf_nat_ipv4
vxlan                  49152  0
ip6_udp_tunnel         16384  1 vxlan
udp_tunnel             16384  1 vxlan
overlay               106496  7
ip_vs_wrr              16384  0
ip_vs_sh               16384  0
ip_vs_rr               16384  0
ip_vs                 143360  6 ip_vs_wrr,ip_vs_rr,ip_vs_sh
nf_conntrack          135168  8 ip_vs,xt_nat,ipt_MASQUERADE,nf_conntrack_netlink,nf_nat_ipv6,xt_conntrack,nf_nat_ipv4,nf_nat
nf_defrag_ipv6         20480  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
sha256_generic         20480  0
cfg80211              614400  0
rfkill                 28672  2 cfg80211
8021q                  32768  0
garp                   16384  1 8021q
br_netfilter           24576  0
bridge                135168  1 br_netfilter
stp                    16384  2 garp,bridge
llc                    16384  3 garp,bridge,stp
btrfs                1294336  2
xor                    16384  1 btrfs
xor_neon               16384  1 xor
zstd_decompress        73728  1 btrfs
zstd_compress         188416  1 btrfs
xxhash                 20480  2 zstd_compress,zstd_decompress
lzo_compress           16384  1 btrfs
raid6_pq              110592  1 btrfs
zlib_deflate           28672  1 btrfs
sg                     28672  0
bcm2835_codec          36864  0
bcm2835_v4l2           45056  0
v4l2_mem2mem           24576  1 bcm2835_codec
bcm2835_mmal_vchiq     32768  2 bcm2835_codec,bcm2835_v4l2
v4l2_common            16384  1 bcm2835_v4l2
videobuf2_dma_contig   20480  1 bcm2835_codec
videobuf2_vmalloc      16384  1 bcm2835_v4l2
videobuf2_memops       16384  2 videobuf2_dma_contig,videobuf2_vmalloc
videobuf2_v4l2         24576  3 bcm2835_codec,bcm2835_v4l2,v4l2_mem2mem
raspberrypi_hwmon      16384  0
videobuf2_common       45056  4 bcm2835_codec,bcm2835_v4l2,v4l2_mem2mem,videobuf2_v4l2
hwmon                  16384  1 raspberrypi_hwmon
videodev              200704  6 bcm2835_codec,v4l2_common,videobuf2_common,bcm2835_v4l2,v4l2_mem2mem,videobuf2_v4l2
media                  36864  3 bcm2835_codec,videodev,v4l2_mem2mem
vc_sm_cma              36864  1 bcm2835_mmal_vchiq
rpivid_mem             16384  0
uio_pdrv_genirq        16384  0
uio                    20480  1 uio_pdrv_genirq
fixed                  16384  0
ip_tables              24576  2 iptable_filter,iptable_nat
x_tables               32768  11 xt_comment,xt_multiport,ipt_REJECT,xt_nat,ip_tables,iptable_filter,xt_mark,xt_tcpudp,ipt_MASQUERADE,xt_addrtype,xt_conntrack
ipv6                  450560  76 nf_nat_ipv6,bridge



K3s build info:
- Downloaded from `https://github.com/rancher/k3s/releases/download/v0.9.0/k3s-armhf`
- `k3s --version` = `k3s-0.9.0 version v0.9.0 (65d87648)`
- `sha1sum` = `3f2b03c8a82bf72cb8884e796616c8cdd5fb72ef`
nickbp commented 5 years ago

For comparison here is the containerd.log from a successful k3s server startup on 0.8.1: containerd-0.8.1.log

nickbp commented 5 years ago

The error doesn't seem to be specific to launching coredns as might be implied by the above containerd-0.9.0.log. As an experiment I tried deleting server/manifests/coredns.yaml to see what would happen. The same failure occurred again, just at a different time:

time="2019-09-22T10:24:54.752729883+12:00" level=info msg=serving... address=/run/k3s/containerd/containerd.sock
time="2019-09-22T10:24:54.752889509+12:00" level=info msg="containerd successfully booted in 0.093370s"
time="2019-09-22T10:24:54.794766405+12:00" level=warning msg="The image docker.io/coredns/coredns:1.6.3 is not unpacked."
time="2019-09-22T10:24:54.797485818+12:00" level=warning msg="The image docker.io/rancher/klipper-helm:v0.1.5 is not unpacked."
time="2019-09-22T10:24:54.811645964+12:00" level=info msg="Start event monitor"
time="2019-09-22T10:24:54.811795201+12:00" level=info msg="Start snapshots syncer"
time="2019-09-22T10:24:54.811857292+12:00" level=info msg="Start streaming server"
time="2019-09-22T10:24:55.523254023+12:00" level=info msg="No cni config template is specified, wait for other system components to drop the config."
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x1 pc=0x24af7e0]

goroutine 81 [running]:
github.com/rancher/k3s/vendor/github.com/containerd/cgroups.(*memoryController).Stat(0x7ee0e48, 0x7495300, 0x7d, 0x7848140, 0x0, 0x0)
    /go/src/github.com/rancher/k3s/vendor/github.com/containerd/cgroups/memory.go:162 +0x3a0
github.com/rancher/k3s/vendor/github.com/containerd/cgroups.(*cgroup).Stat.func1(0x7cb4470, 0xa6ac7f48, 0x7ee0e48, 0x7495300, 0x7d, 0x7848140, 0x78453d0, 0x7894040)
    /go/src/github.com/rancher/k3s/vendor/github.com/containerd/cgroups/cgroup.go:277 +0x60
created by github.com/rancher/k3s/vendor/github.com/containerd/cgroups.(*cgroup).Stat
    /go/src/github.com/rancher/k3s/vendor/github.com/containerd/cgroups/cgroup.go:275 +0x2ec

Again it's worth pointing out that k3s 0.9.0 agents do not seem to have any problems; the issue appears to be specific to the k3s 0.9.0 server.
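The traces point at a nil pointer dereference inside the vendored cgroups `memory.go` Stat call. As a hypothetical, minimal Go illustration of that failure mode (this is not the vendored containerd code; `memoryController` and its `usage` field here are stand-ins for a cgroup stat value that may be absent):

```go
package main

import "fmt"

// Hypothetical sketch of the failure mode seen in the traces above: reading
// through a pointer that was never initialized triggers a nil dereference,
// which becomes a SIGSEGV panic unless recovered.
type memoryController struct {
	usage *uint64 // stand-in for a cgroup stat value that may be absent
}

func (m *memoryController) Stat() (val uint64, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	return *m.usage, nil // panics when usage was never populated
}

func main() {
	var m memoryController // usage is nil, as when a cgroup file is missing
	_, err := m.Stat()
	fmt.Println(err != nil) // true: the nil dereference was caught
}
```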

nickbp commented 5 years ago

I dropped in the 0.9.0-rc2 release of the k3s-armhf binary, then repeated the above steps to run a k3s server. The segfault went away. So it looks like the regression was introduced sometime between rc2 and GA: https://github.com/rancher/k3s/compare/v0.9.0-rc2...v0.9.0

alexellis commented 5 years ago

@erikwilson @ibuildthecloud PTAL

hlugt commented 5 years ago

Be advised that on Raspbian there seems to be no issue when (still) using kernel version 4.19.42-v7. I have tried with .66 and .73 (current latest) and k3s will not run.

It seems to be a kernel thing with arm32v7: on my Armbian 4.19.62-sunxi node k3s also fails. My arm64v8 nodes have no issues with the latest kernels.

Edit: hmm, not completely sure of this; it seems to work on and off. I have reverted to 0.8.1 to keep a working environment.

alexellis commented 5 years ago

Users of https://k3sup.dev now have 0.8.0 as the pinned, default version. Try it out today whilst awaiting this fix.

ibuildthecloud commented 5 years ago

Thanks for the bug report; we'll fix this straight away and do a v0.9.1.

mzac commented 5 years ago

@ibuildthecloud Awesome thanks! I was trying to build my first k3s cluster when I ran into this bug... looking forward to trying it out :)

EdMendezNC commented 5 years ago

I had the same problem and thought it was something I was doing wrong. Then I came across this thread. If you don't want to wait for v0.9.1 and don't want to use k3sup, you can use the following command to install v0.8.1.

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v0.8.1 sh -

alexellis commented 5 years ago

You can also use https://k3sup.dev which automates the process

REBELinBLUE commented 5 years ago

Uh, there was me thinking I had messed up when upgrading 🤦‍♂️

erikwilson commented 5 years ago

I am not sure exactly why, but it seems that downgrading to grpc 1.13.0 solves the issue. Please give k3s v0.9.1 a try @mzac and let me know if there are still problems.

Also adding some panic logs in case they are helpful in the future: arm-panic.log
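For reference, pinning a Go dependency such as grpc to an older version is typically done with a `replace` directive in `go.mod`. This is a generic sketch of that mechanism only, not the actual v0.9.1 change (k3s vendors its dependencies, and the real diff lives in the release):

```go
// go.mod sketch (illustrative): force grpc back to v1.13.0
replace google.golang.org/grpc => google.golang.org/grpc v1.13.0
```

After editing `go.mod`, `go mod vendor` would refresh the vendored copy.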

Ephemera commented 5 years ago

Works fine for me.

nickbp commented 5 years ago

I've updated to 0.9.1 on the system described earlier and the segfault no longer occurs. I've been able to deploy everything successfully on 0.9.1.

Edit: Forgot to mention: Thanks for finding a workaround!

radicand commented 5 years ago

This is resolved for me as well now, thanks!

davidnuzik commented 5 years ago

We'll be closing this issue now. The v0.9.1 release should resolve the problem. I'm happy this release fixes the issue :)

mzac commented 5 years ago

@erikwilson Thanks this seems to fix the issue!

gokalper commented 5 years ago

I am still having this issue; I tried with 0.8.1 and 0.9.1, no luck.

[INFO]  Using v0.9.1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.9.1/sha256sum-arm.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.9.1/k3s-armhf
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
Job for k3s.service failed because the control process exited with error code.
See "systemctl status k3s.service" and "journalctl -xe" for details.
pi@RPI4-1:~/k3s $ systemctl status k3s.service
● k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
   Active: activating (start) since Sat 2019-10-05 15:16:47 PDT; 2s ago
     Docs: https://k3s.io
  Process: 13507 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
  Process: 13509 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
 Main PID: 13511 (k3s-server)
    Tasks: 15
   Memory: 70.1M
   CGroup: /system.slice/k3s.service
           └─13511 /usr/local/bin/k3s server KillMode=process

Oct 05 15:16:47 RPI4-1 k3s[13511]: E1005 15:16:47.895207   13511 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Oct 05 15:16:47 RPI4-1 k3s[13511]: E1005 15:16:47.895403   13511 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Oct 05 15:16:47 RPI4-1 k3s[13511]: I1005 15:16:47.895595   13511 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesBy
Oct 05 15:16:47 RPI4-1 k3s[13511]: I1005 15:16:47.895661   13511 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,Validatin
Oct 05 15:16:47 RPI4-1 k3s[13511]: I1005 15:16:47.941334   13511 master.go:233] Using reconciler: lease
Oct 05 15:16:48 RPI4-1 k3s[13511]: W1005 15:16:48.705381   13511 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
Oct 05 15:16:48 RPI4-1 k3s[13511]: W1005 15:16:48.732925   13511 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Oct 05 15:16:48 RPI4-1 k3s[13511]: W1005 15:16:48.748523   13511 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Oct 05 15:16:48 RPI4-1 k3s[13511]: W1005 15:16:48.752262   13511 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Oct 05 15:16:48 RPI4-1 k3s[13511]: W1005 15:16:48.761300   13511 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
davidnuzik commented 5 years ago

Reference https://github.com/rancher/k3s/issues/869 (new issue created) for the last comment ^