k3s-io / k3s

Lightweight Kubernetes
https://k3s.io
Apache License 2.0

k3s wrangler exits when busy #587

Closed: maltejk closed this issue 3 years ago

maltejk commented 5 years ago

Describe the bug: k3s exits when a lot of pods are busy, e.g. when GitLab is updated.

To Reproduce: Steps to reproduce the behavior: start k3s, start some activity, and wait.

Expected behavior: k3s keeps running.

Additional context

journalctl -u k3s -f ends with:

Jun 28 18:37:39 k3s k3s[30703]: E0628 18:36:54.870908   30703 kubelet_node_status.go:340] Error updating node status, will retry: error getting node "k3s": Get https://127.0.0.1:6445/api/v1/nodes/k3s?resourceVersion=0&timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jun 28 18:37:39 k3s k3s[30703]: time="2019-06-28T18:37:39.014193136Z" level=fatal msg="leaderelection lost for k3s"
Jun 28 18:37:39 k3s systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Jun 28 18:37:39 k3s systemd[1]: k3s.service: Failed with result 'exit-code'.

I wonder if it would be possible to raise the 10s timeout?
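For what it's worth, the 10s in that URL appears to be the per-request timeout the kubelet puts on its node-status GET, and the fatal exit itself comes from the lost leader election shown just below it. A rough way to see how slow the local apiserver gets while the node is busy (assuming kubectl is pointed at this k3s instance, e.g. via /etc/rancher/k3s/k3s.yaml) would be something like:

# Time a single node GET against the local apiserver while the node is under load:
time kubectl get node k3s -o name --request-timeout=10s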

erikwilson commented 5 years ago

Thanks for reporting! Would it be possible to try the newest v0.7.0-rc to see if it fixes the issue? I think rc4 specifically could make a difference because of the watch cache update. If v0.7.0-rc4 or later does not fix the issue, would you mind sharing the complete log file?
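If the node was installed with the standard get.k3s.io script, something like the following should pull in the release candidate (a sketch, assuming the installer's INSTALL_K3S_VERSION variable is available in your installer version):

# Re-run the installer pinned to the rc; it replaces the binary and restarts the service:
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v0.7.0-rc4 sh -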

maltejk commented 5 years ago

Hi @erikwilson, thank you for looking into it. I switched to v0.7.0 as suggested and also enabled logging to a file. Unfortunately, the behaviour persists, so I also switched the systemd unit to Restart=always.
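For reference, the Restart=always part is just a small systemd drop-in; a minimal sketch, assuming the default k3s.service unit created by the installer:

# Override so systemd restarts k3s after the fatal "leaderelection lost" exit:
mkdir -p /etc/systemd/system/k3s.service.d
cat > /etc/systemd/system/k3s.service.d/restart.conf <<'EOF'
[Service]
Restart=always
RestartSec=5s
EOF
systemctl daemon-reload
systemctl restart k3s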

The last crash's message was:

E0712 23:34:41.754802    6502 runtime.go:69] Observed a panic: &errors.errorString{s:"killing connection/stream because serving request timed out and response had been started"} (killing connection/stream because serving request timed out and response had been started)
/go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/panic.go:522
/go/src/github.com/rancher/k3s/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:234
/go/src/github.com/rancher/k3s/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:118
/go/src/github.com/rancher/k3s/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:47
/usr/local/go/src/net/http/server.go:1995
/go/src/github.com/rancher/k3s/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39
/usr/local/go/src/net/http/server.go:1995
/go/src/github.com/rancher/k3s/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:46
/usr/local/go/src/net/http/server.go:1995
/go/src/github.com/rancher/k3s/vendor/k8s.io/apiserver/pkg/server/handler.go:189
/go/src/github.com/rancher/k3s/vendor/github.com/gorilla/mux/mux.go:162
/go/src/github.com/rancher/k3s/vendor/github.com/gorilla/mux/mux.go:162
/go/src/github.com/rancher/k3s/vendor/github.com/rancher/dynamiclistener/server.go:493
/usr/local/go/src/net/http/server.go:1995
/usr/local/go/src/net/http/server.go:2774
/usr/local/go/src/net/http/server.go:1878
/usr/local/go/src/runtime/asm_amd64.s:1337
time="2019-07-12T23:35:28.088699879Z" level=fatal msg="leaderelection lost for k3s"
time="2019-07-12T23:35:28.726531488Z" level=info msg="Starting k3s v0.7.0-rc4 (185a8dca)"

I'm happy to provide further information.

Thank you.

freeseacher commented 5 years ago

I get the same behaviour. I'm launching about 25 pods in my CI, on spinning disks. After all the container images are downloaded and are waiting to be unpacked, I have a load average of about 15 per core and high disk pressure. I am launching k3s in docker-compose without any limits. Here are the logs from the docker-compose service (a short diagnostic sketch follows after the log):

Attaching to master-62809_server_1
server_1  | time="2019-07-30T06:45:01.910020479Z" level=info msg="Starting k3s v0.7.0 (61bdd852)"
server_1  | time="2019-07-30T06:45:02.519628893Z" level=info msg="Running kube-apiserver --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --requestheader-allowed-names=system:auth-proxy --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-issuer=k3s --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-username-headers=X-Remote-User --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --secure-port=6444 --enable-admission-plugins=NodeRestriction --service-cluster-ip-range=10.43.0.0/16 --api-audiences=unknown --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --requestheader-group-headers=X-Remote-Group --requestheader-extra-headers-prefix=X-Remote-Extra- --allow-privileged=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --advertise-port=6443 --insecure-port=0 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key"
server_1  | E0730 06:45:03.626959       1 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
server_1  | E0730 06:45:03.633116       1 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
server_1  | E0730 06:45:03.633184       1 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
server_1  | E0730 06:45:03.633233       1 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
server_1  | E0730 06:45:03.633267       1 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
server_1  | E0730 06:45:03.633294       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
server_1  | W0730 06:45:03.774713       1 genericapiserver.go:315] Skipping API batch/v2alpha1 because it has no resources.
server_1  | W0730 06:45:03.784102       1 genericapiserver.go:315] Skipping API node.k8s.io/v1alpha1 because it has no resources.
server_1  | E0730 06:45:03.859276       1 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
server_1  | E0730 06:45:03.859549       1 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
server_1  | E0730 06:45:03.859710       1 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
server_1  | E0730 06:45:03.859892       1 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
server_1  | E0730 06:45:03.860032       1 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
server_1  | E0730 06:45:03.860174       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
server_1  | time="2019-07-30T06:45:03.894007857Z" level=info msg="Running kube-scheduler --port=10251 --bind-address=127.0.0.1 --secure-port=0 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false"
server_1  | time="2019-07-30T06:45:03.903025201Z" level=info msg="Running kube-controller-manager --port=10252 --cluster-cidr=10.42.0.0/16 --bind-address=127.0.0.1 --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --leader-elect=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --allocate-node-cidrs=true --secure-port=0 --use-service-account-credentials=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key"
server_1  | W0730 06:45:03.939094       1 authorization.go:47] Authorization is disabled
server_1  | W0730 06:45:03.939116       1 authentication.go:55] Authentication is disabled
server_1  | time="2019-07-30T06:45:04.062377432Z" level=info msg="Creating CRD listenerconfigs.k3s.cattle.io"
server_1  | time="2019-07-30T06:45:04.074136951Z" level=info msg="Creating CRD addons.k3s.cattle.io"
server_1  | E0730 06:45:04.075372       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
server_1  | E0730 06:45:04.075522       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
server_1  | E0730 06:45:04.075689       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
server_1  | E0730 06:45:04.075854       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
server_1  | E0730 06:45:04.076024       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
server_1  | E0730 06:45:04.076193       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
server_1  | E0730 06:45:04.076404       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
server_1  | E0730 06:45:04.076590       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
server_1  | E0730 06:45:04.076801       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
server_1  | E0730 06:45:04.076988       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
server_1  | time="2019-07-30T06:45:04.082424557Z" level=info msg="Creating CRD helmcharts.helm.cattle.io"
server_1  | E0730 06:45:04.085075       1 controller.go:147] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.43.0.1": cannot allocate resources of type serviceipallocations at this time
server_1  | E0730 06:45:04.085728       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.26.0.2, ResourceVersion: 0, AdditionalErrorMsg: 
server_1  | time="2019-07-30T06:45:04.088686236Z" level=info msg="Waiting for CRD listenerconfigs.k3s.cattle.io to become available"
server_1  | time="2019-07-30T06:45:04.591453719Z" level=info msg="Done waiting for CRD listenerconfigs.k3s.cattle.io to become available"
server_1  | time="2019-07-30T06:45:04.591618028Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available"
server_1  | E0730 06:45:05.077337       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
server_1  | E0730 06:45:05.079535       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
server_1  | E0730 06:45:05.084496       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
server_1  | time="2019-07-30T06:45:05.094492479Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available"
server_1  | time="2019-07-30T06:45:05.094590931Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available"
server_1  | E0730 06:45:05.105396       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
server_1  | E0730 06:45:05.107380       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
server_1  | E0730 06:45:05.109505       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
server_1  | E0730 06:45:05.111386       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
server_1  | E0730 06:45:05.112893       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
server_1  | E0730 06:45:05.114776       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
server_1  | E0730 06:45:05.116177       1 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
server_1  | time="2019-07-30T06:45:05.597515990Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available"
server_1  | time="2019-07-30T06:45:05.618991863Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.64.0.tgz"
server_1  | time="2019-07-30T06:45:05.620848905Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
server_1  | time="2019-07-30T06:45:05.621752682Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
server_1  | time="2019-07-30T06:45:05.621963290Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
server_1  | E0730 06:45:05.623057       1 prometheus.go:138] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
server_1  | E0730 06:45:05.623161       1 prometheus.go:150] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
server_1  | E0730 06:45:05.623416       1 prometheus.go:162] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
server_1  | E0730 06:45:05.623614       1 prometheus.go:174] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
server_1  | E0730 06:45:05.623738       1 prometheus.go:189] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
server_1  | E0730 06:45:05.623924       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
server_1  | E0730 06:45:05.624114       1 prometheus.go:214] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
server_1  | time="2019-07-30T06:45:05.630007556Z" level=error msg="Update cert unable to convert string to cert: Unable to split cert into two parts"
server_1  | time="2019-07-30T06:45:05.631193756Z" level=info msg="Listening on server:6443"
server_1  | E0730 06:45:05.640310       1 prometheus.go:138] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
server_1  | E0730 06:45:05.640379       1 prometheus.go:150] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
server_1  | E0730 06:45:05.640443       1 prometheus.go:162] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
server_1  | E0730 06:45:05.640507       1 prometheus.go:174] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
server_1  | E0730 06:45:05.640564       1 prometheus.go:189] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
server_1  | E0730 06:45:05.640613       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
server_1  | E0730 06:45:05.640702       1 prometheus.go:214] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
server_1  | time="2019-07-30T06:45:06.141450354Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
server_1  | time="2019-07-30T06:45:06.244167185Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
server_1  | time="2019-07-30T06:45:06.244211666Z" level=info msg="To join node to cluster: k3s agent -s https://172.26.0.2:6443 -t ${NODE_TOKEN}"
server_1  | time="2019-07-30T06:45:06.244693253Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
server_1  | E0730 06:45:06.305357       1 prometheus.go:138] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
server_1  | E0730 06:45:06.305673       1 prometheus.go:150] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
server_1  | E0730 06:45:06.305917       1 prometheus.go:162] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
server_1  | E0730 06:45:06.306153       1 prometheus.go:174] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
server_1  | E0730 06:45:06.306392       1 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
server_1  | E0730 06:45:06.306612       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
server_1  | E0730 06:45:06.306841       1 prometheus.go:214] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
server_1  | E0730 06:45:06.307160       1 prometheus.go:138] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
server_1  | E0730 06:45:06.307346       1 prometheus.go:150] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
server_1  | E0730 06:45:06.307610       1 prometheus.go:162] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
server_1  | E0730 06:45:06.307875       1 prometheus.go:174] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
server_1  | E0730 06:45:06.308088       1 prometheus.go:189] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
server_1  | E0730 06:45:06.308306       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
server_1  | E0730 06:45:06.308533       1 prometheus.go:214] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
server_1  | E0730 06:45:06.308808       1 prometheus.go:138] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
server_1  | E0730 06:45:06.308995       1 prometheus.go:150] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
server_1  | E0730 06:45:06.309258       1 prometheus.go:162] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
server_1  | E0730 06:45:06.309501       1 prometheus.go:174] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
server_1  | E0730 06:45:06.309725       1 prometheus.go:189] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
server_1  | E0730 06:45:06.309937       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
server_1  | E0730 06:45:06.310167       1 prometheus.go:214] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
server_1  | E0730 06:45:06.310481       1 prometheus.go:138] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
server_1  | E0730 06:45:06.310699       1 prometheus.go:150] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
server_1  | E0730 06:45:06.310952       1 prometheus.go:162] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
server_1  | E0730 06:45:06.311194       1 prometheus.go:174] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
server_1  | E0730 06:45:06.311409       1 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
server_1  | E0730 06:45:06.311626       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
server_1  | E0730 06:45:06.311854       1 prometheus.go:214] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
server_1  | E0730 06:45:06.312129       1 prometheus.go:138] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
server_1  | E0730 06:45:06.312314       1 prometheus.go:150] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
server_1  | E0730 06:45:06.312587       1 prometheus.go:162] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
server_1  | E0730 06:45:06.312820       1 prometheus.go:174] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
server_1  | E0730 06:45:06.313032       1 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
server_1  | E0730 06:45:06.313218       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
server_1  | E0730 06:45:06.313474       1 prometheus.go:214] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
server_1  | E0730 06:45:06.313747       1 prometheus.go:138] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
server_1  | E0730 06:45:06.313933       1 prometheus.go:150] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
server_1  | E0730 06:45:06.314200       1 prometheus.go:162] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
server_1  | E0730 06:45:06.314451       1 prometheus.go:174] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
server_1  | E0730 06:45:06.314674       1 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
server_1  | E0730 06:45:06.314880       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
server_1  | E0730 06:45:06.315110       1 prometheus.go:214] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
server_1  | time="2019-07-30T06:45:06.368250904Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
server_1  | time="2019-07-30T06:45:06.368517913Z" level=info msg="Run: k3s kubectl"
server_1  | time="2019-07-30T06:45:06.368709527Z" level=info msg="k3s is up and running"
server_1  | time="2019-07-30T06:45:06.436864632Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
server_1  | time="2019-07-30T06:45:06.437012035Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
server_1  | time="2019-07-30T06:45:06.441491320Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\""
server_1  | time="2019-07-30T06:45:07.317726691Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
server_1  | W0730 06:45:07.320595       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [172.26.0.2]
server_1  | time="2019-07-30T06:45:07.444244770Z" level=info msg="module br_netfilter was already loaded"
server_1  | time="2019-07-30T06:45:07.444302655Z" level=warning msg="failed to write value 1 at /proc/sys/net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory"
server_1  | time="2019-07-30T06:45:07.449215119Z" level=info msg="Connecting to proxy" url="wss://172.26.0.2:6443/v1-k3s/connect"
server_1  | time="2019-07-30T06:45:07.451191707Z" level=info msg="Handling backend connection request [k3s]"
server_1  | time="2019-07-30T06:45:07.452212167Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
server_1  | time="2019-07-30T06:45:07.452519419Z" level=info msg="Running kubelet --cgroup-driver=cgroupfs --authentication-token-webhook=true --cert-dir=/var/lib/rancher/k3s/agent/kubelet/pki --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime=remote --healthz-bind-address=127.0.0.1 --read-only-port=0 --serialize-image-pulls=false --cluster-domain=cluster.local --resolv-conf=/tmp/k3s-resolv.conf --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --node-labels=pool=default,node-role.kubernetes.io/master=true --eviction-hard=imagefs.available<5%,nodefs.available<5% --authorization-mode=Webhook --cpu-cfs-quota=false --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --seccomp-profile-root=/var/lib/rancher/k3s/agent/kubelet/seccomp --address=0.0.0.0 --hostname-override=k3s --fail-swap-on=false --root-dir=/var/lib/rancher/k3s/agent/kubelet --cni-bin-dir=/bin --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --anonymous-auth=false --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key --allow-privileged=true --cluster-dns=10.43.0.10"
server_1  | Flag --allow-privileged has been deprecated, will be removed in a future version
server_1  | W0730 06:45:07.453006       1 options.go:265] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/master]
server_1  | W0730 06:45:07.454181       1 options.go:266] in 1.16, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)
server_1  | W0730 06:45:07.455229       1 options.go:265] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/master]
server_1  | W0730 06:45:07.455277       1 server.go:216] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
server_1  | W0730 06:45:07.455366       1 options.go:266] in 1.16, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)
server_1  | W0730 06:45:07.510891       1 info.go:52] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
server_1  | W0730 06:45:07.511297       1 proxier.go:485] Failed to read file /lib/modules/4.19.0-0.bpo.4-amd64/modules.builtin with error open /lib/modules/4.19.0-0.bpo.4-amd64/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
server_1  | W0730 06:45:07.511925       1 proxier.go:498] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
server_1  | W0730 06:45:07.512404       1 proxier.go:498] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
server_1  | W0730 06:45:07.512868       1 proxier.go:498] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
server_1  | W0730 06:45:07.513336       1 proxier.go:498] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
server_1  | W0730 06:45:07.514662       1 proxier.go:498] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
server_1  | time="2019-07-30T06:45:07.529439048Z" level=info msg="waiting for node k3s: nodes \"k3s\" not found"
server_1  | W0730 06:45:07.535944       1 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
server_1  | W0730 06:45:07.551389       1 node.go:113] Failed to retrieve node info: nodes "k3s" not found
server_1  | W0730 06:45:07.551794       1 proxier.go:314] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
server_1  | E0730 06:45:07.555782       1 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
server_1  | E0730 06:45:07.555805       1 kubelet.go:1250] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
server_1  | E0730 06:45:07.620676       1 controller.go:194] failed to get node "k3s" when trying to set owner ref to the node lease: nodes "k3s" not found
server_1  | W0730 06:45:07.641905       1 manager.go:537] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
server_1  | E0730 06:45:07.658046       1 kubelet.go:2207] node "k3s" not found
server_1  | E0730 06:45:07.660174       1 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "k3s" not found
server_1  | W0730 06:45:07.676316       1 admission.go:353] node "k3s" added disallowed labels on node creation: node-role.kubernetes.io/master
server_1  | time="2019-07-30T06:45:07.860605302Z" level=info msg="Starting batch/v1, Kind=Job controller"
server_1  | time="2019-07-30T06:45:08.570218441Z" level=info msg="Starting /v1, Kind=Node controller"
server_1  | time="2019-07-30T06:45:08.575099653Z" level=info msg="Updated coredns node hosts entry [172.26.0.2 k3s]"
server_1  | time="2019-07-30T06:45:08.670356959Z" level=info msg="Starting /v1, Kind=Service controller"
server_1  | time="2019-07-30T06:45:08.770501453Z" level=info msg="Starting /v1, Kind=Pod controller"
server_1  | time="2019-07-30T06:45:08.870644555Z" level=info msg="Starting /v1, Kind=Endpoints controller"
server_1  | time="2019-07-30T06:45:09.531147602Z" level=info msg="waiting for node k3s CIDR not assigned yet"
server_1  | E0730 06:45:11.276631       1 prometheus.go:138] failed to register depth metric certificate: duplicate metrics collector registration attempted
server_1  | E0730 06:45:11.276705       1 prometheus.go:150] failed to register adds metric certificate: duplicate metrics collector registration attempted
server_1  | E0730 06:45:11.276769       1 prometheus.go:162] failed to register latency metric certificate: duplicate metrics collector registration attempted
server_1  | E0730 06:45:11.276835       1 prometheus.go:174] failed to register work_duration metric certificate: duplicate metrics collector registration attempted
server_1  | E0730 06:45:11.276891       1 prometheus.go:189] failed to register unfinished_work_seconds metric certificate: duplicate metrics collector registration attempted
server_1  | E0730 06:45:11.276939       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric certificate: duplicate metrics collector registration attempted
server_1  | E0730 06:45:11.277033       1 prometheus.go:214] failed to register retries metric certificate: duplicate metrics collector registration attempted
server_1  | time="2019-07-30T06:45:11.533063276Z" level=info msg="waiting for node k3s CIDR not assigned yet"
server_1  | W0730 06:45:11.879567       1 shared_informer.go:312] resyncPeriod 56181289241720 is smaller than resyncCheckPeriod 82931945495129 and the informer has already started. Changing it to 82931945495129
server_1  | W0730 06:45:11.879873       1 shared_informer.go:312] resyncPeriod 68370930048165 is smaller than resyncCheckPeriod 82931945495129 and the informer has already started. Changing it to 82931945495129
server_1  | E0730 06:45:11.882731       1 resource_quota_controller.go:171] initial monitor sync has error: [couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=listenerconfigs": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=listenerconfigs", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=addons": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=addons", couldn't start monitor for resource "helm.cattle.io/v1, Resource=helmcharts": unable to monitor quota for resource "helm.cattle.io/v1, Resource=helmcharts"]
server_1  | W0730 06:45:12.717963       1 controllermanager.go:445] Skipping "root-ca-cert-publisher"
server_1  | time="2019-07-30T06:45:13.538022816Z" level=info msg="waiting for node k3s CIDR not assigned yet"
server_1  | time="2019-07-30T06:45:15.539865191Z" level=info msg="waiting for node k3s CIDR not assigned yet"
server_1  | time="2019-07-30T06:45:17.541884275Z" level=info msg="waiting for node k3s CIDR not assigned yet"
server_1  | E0730 06:45:17.666228       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | time="2019-07-30T06:45:19.543770956Z" level=info msg="waiting for node k3s CIDR not assigned yet"
server_1  | time="2019-07-30T06:45:21.545674866Z" level=info msg="waiting for node k3s CIDR not assigned yet"
server_1  | E0730 06:45:23.396461       1 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "helm.cattle.io/v1, Resource=helmcharts": unable to monitor quota for resource "helm.cattle.io/v1, Resource=helmcharts", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=addons": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=addons", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=listenerconfigs": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=listenerconfigs"]
server_1  | W0730 06:45:23.430925       1 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k3s" does not exist
server_1  | time="2019-07-30T06:45:23.548805059Z" level=info msg="waiting for node k3s CIDR not assigned yet"
server_1  | E0730 06:45:23.583234       1 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
server_1  | E0730 06:45:23.584020       1 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
server_1  | W0730 06:45:24.078964       1 node_lifecycle_controller.go:833] Missing timestamp for Node k3s. Assuming now as a timestamp.
server_1  | E0730 06:45:27.674203       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:45:37.692757       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:45:47.712508       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:45:57.202133       1 daemon_controller.go:302] kube-system/svclb-traefik failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"svclb-traefik", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/svclb-traefik", UID:"af0a99b5-b295-11e9-a8da-0242ac1a0002", ResourceVersion:"464", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63700065956, loc:(*time.Location)(0x606a4a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"objectset.rio.cattle.io/hash":"f31475152fbf70655d3c016d368e90118938f6ea", "svccontroller.k3s.cattle.io/nodeselector":"false"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "objectset.rio.cattle.io/applied":"H4sIAAAAAAAA/8xUwW7jNhD9lWLOlEJZstcR0EOR7CFoNzFsby+LIBhR45g1RQrkSF3D0L8XVLJrp0k2AdrDHj188/T43qMPsNO2hhIukRpnV8QgAFv9J/mgnYUSsG3DWZ+BgIYYa2SE8gAWG4ISQq9MlbBH2ugdiHEcWlTxbNdVlIR9YGpAgPKErJ1d64YCY9NCaTtjBBisyITI6aq/SHEgTr12qUJmQ6l2Z1sMWyhhk2fFh2k2nWyqzQc5m07rXMlsVuezOZ3LLJuf5/PNjBBElKWcZe+MIZ/u8nDCZl1NgQwpdj6yogkEgwC01vEo8YdidP1w7SM/iFfB7m9LPrnvd1DCWZ+JX37Xtv51Rb7Xit7ce/T46O7b8Je8HwSMgCVtyJNVFKD8cnga8pjvYxOO8p4J6Mbb40bOMjnFpJqcT5Mso/ME5zUmclJMUGUopZzExI8Olew7Gm4HAaElFe09JnCABllt//heA2zbZ80aBgFMTWuQaVw5qeI7mvUS5Y9bEnr1r9sPJ+rjGmpL/sHKR6SpktZ5TuYSBOgG7+PQo1Vb8mc7o9uWfGKqspdplkbDI/p1hq0LvHCeoZxLcfzkt9FwK4Bsf7q+Wl7cLW6WaxDQo+niaC5hEN8Blx9X67vF8mZ9cwJZXyyeY95kuVqcnGcyLfJ0kqcTOYcozFNwnR+rdhgezVh0xiyc0WoPJVxtrh0vPAWy8R8nkOq85v2Fs0xfefQYW6y00azpIcO6hvILXH9c3/12+enqGm6H4UTUN+uKIv+v7j9QHO0vivyZ/0WRvy+ASPY/JPASzc8RwW18GF1bI9OKPTLd7yOW922UtXTGaHv/eTwHAf7J7/Hxf/1ssUdtsDIEZTaMD42Ru/GbqvOeLF93TUV+pbZUd4ZqKKUAO84+6RCejGsK2lP9+saSsN5DKYfhnwAAAP//vtyg+f4GAAA", "objectset.rio.cattle.io/id":"svccontroller", "objectset.rio.cattle.io/owner-gvk":"/v1, Kind=Service", "objectset.rio.cattle.io/owner-name":"traefik", "objectset.rio.cattle.io/owner-namespace":"kube-system"}, OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Service", Name:"traefik", UID:"af06105a-b295-11e9-a8da-0242ac1a0002", Controller:(*bool)(0xc007e314a0), BlockOwnerDeletion:(*bool)(nil)}}, Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0032e3440), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"svclb-traefik", "svccontroller.k3s.cattle.io/svcname":"traefik"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"lb-port-80", Image:"rancher/klipper-lb:v0.1.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"lb-port-80", HostPort:80, ContainerPort:80, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"SRC_PORT", Value:"80", ValueFrom:(*v1.EnvVarSource)(nil)}, 
v1.EnvVar{Name:"DEST_PROTO", Value:"TCP", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DEST_PORT", Value:"80", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DEST_IP", Value:"10.43.23.208", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc007b35180), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"lb-port-443", Image:"rancher/klipper-lb:v0.1.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"lb-port-443", HostPort:443, ContainerPort:443, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"SRC_PORT", Value:"443", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DEST_PROTO", Value:"TCP", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DEST_PORT", Value:"443", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DEST_IP", Value:"10.43.23.208", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc007b35220), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc007e315f0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc006a81920), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00661ae50)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc007e315f8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "svclb-traefik": the object has been modified; please apply your changes to the latest version and try again
server_1  | E0730 06:45:57.754219       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | W0730 06:45:59.018099       1 pod_container_deletor.go:75] Container "906b139aecf187aa47a76e0944e7495b30d3e25acc3ffafd74c24e703f24c8d4" not found in pod's containers
server_1  | E0730 06:45:59.323732       1 daemon_controller.go:302] default/noc-consul failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"noc-consul", GenerateName:"", Namespace:"default", SelfLink:"/apis/apps/v1/namespaces/default/daemonsets/noc-consul", UID:"b04d5fba-b295-11e9-a8da-0242ac1a0002", ResourceVersion:"627", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63700065958, loc:(*time.Location)(0x606a4a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"consul", "chart":"consul-helm", "heritage":"Tiller", "release":"noc"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00324fca0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"consul", "chart":"consul-helm", "component":"client", "hasDNS":"true", "release":"noc"}, Annotations:map[string]string{"consul.hashicorp.com/connect-inject":"false"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"data", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc00324fcc0), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"config", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc008121940), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"consul", Image:"consul:1.5.2", Command:[]string{"/bin/sh", "-ec", "CONSUL_FULLNAME=\"noc-consul\"\n\nexec /bin/consul agent \\\n  -node=\"${NODE}\" \\\n  -advertise=\"${POD_IP}\" \\\n  -bind=0.0.0.0 \\\n  -client=0.0.0.0 \\\n  -config-dir=/consul/config \\\n  -datacenter=dc1 \\\n  -data-dir=/consul/data \\\n  -retry-join=${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \\\n  -domain=consul\n"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"http", HostPort:8500, ContainerPort:8500, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"grpc", HostPort:8502, ContainerPort:8502, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"serflan", HostPort:0, ContainerPort:8301, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"serfwan", HostPort:0, ContainerPort:8302, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"server", HostPort:0, ContainerPort:8300, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"dns-tcp", HostPort:0, ContainerPort:8600, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"dns-udp", HostPort:0, ContainerPort:8600, Protocol:"UDP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00324fd00)}, v1.EnvVar{Name:"NAMESPACE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00324fd40)}, v1.EnvVar{Name:"NODE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00324fd80)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"data", ReadOnly:false, MountPath:"/consul/data", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"config", ReadOnly:false, MountPath:"/consul/config", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(0xc0065a61b0), Lifecycle:(*v1.Lifecycle)(0xc008387970), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0014a1f58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"noc-consul-client", DeprecatedServiceAccount:"noc-consul-client", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc002527b60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0073ce3c8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0014a1f7c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "noc-consul": the object has been modified; please apply your changes to the latest version and try again
server_1  | E0730 06:46:00.501964       1 pvc_protection_controller.go:138] PVC default/noc-nsqd-datadir-noc-nsqd-0 failed with : Operation cannot be fulfilled on persistentvolumeclaims "noc-nsqd-datadir-noc-nsqd-0": the object has been modified; please apply your changes to the latest version and try again
server_1  | E0730 06:46:07.814064       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:46:17.876494       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:46:27.935123       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:46:37.984641       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:46:48.122830       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:46:58.182104       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:47:08.233741       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:47:18.288674       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:47:28.777641       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:47:39.851077       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:47:50.830578       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:48:01.878200       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | W0730 06:48:07.785002       1 pod_container_deletor.go:75] Container "415e978c53ed8da7ded882b9d153110934530fd8d99aa593286bbafa9c7be194" not found in pod's containers
server_1  | W0730 06:48:10.974421       1 pod_container_deletor.go:75] Container "941e457ee43aa17f444925caabebc99b01383173ca04dcbf72efc3aab3c73718" not found in pod's containers
server_1  | E0730 06:48:11.421232       1 kubelet_pods.go:147] Mount cannot be satisfied for container "local-path-create", because the volume is missing or the volume mounter is nil: {Name:data ReadOnly:false MountPath:/data/ SubPath: MountPropagation:<nil> SubPathExpr:}
server_1  | E0730 06:48:11.421319       1 kuberuntime_manager.go:784] container start failed: CreateContainerConfigError: cannot find volume "data" to mount into container "local-path-create"
server_1  | E0730 06:48:11.421479       1 pod_workers.go:190] Error syncing pod b0c0c32e-b295-11e9-a8da-0242ac1a0002 ("create-pvc-afe3dafd-b295-11e9-a8da-0242ac1a0002_local-path-storage(b0c0c32e-b295-11e9-a8da-0242ac1a0002)"), skipping: failed to "StartContainer" for "local-path-create" with CreateContainerConfigError: "cannot find volume \"data\" to mount into container \"local-path-create\""
server_1  | E0730 06:48:11.933813       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:48:18.257064       1 kubelet_pods.go:147] Mount cannot be satisfied for container "local-path-create", because the volume is missing or the volume mounter is nil: {Name:data ReadOnly:false MountPath:/data/ SubPath: MountPropagation:<nil> SubPathExpr:}
server_1  | E0730 06:48:18.257338       1 kuberuntime_manager.go:784] container start failed: CreateContainerConfigError: cannot find volume "data" to mount into container "local-path-create"
server_1  | E0730 06:48:18.257472       1 pod_workers.go:190] Error syncing pod b08baa10-b295-11e9-a8da-0242ac1a0002 ("create-pvc-afe39303-b295-11e9-a8da-0242ac1a0002_local-path-storage(b08baa10-b295-11e9-a8da-0242ac1a0002)"), skipping: failed to "StartContainer" for "local-path-create" with CreateContainerConfigError: "cannot find volume \"data\" to mount into container \"local-path-create\""
server_1  | E0730 06:48:21.992572       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:48:27.799457       1 remote_runtime.go:226] StartContainer "f5bc2f3fc5e9f7f3b32ef315a8ef2ac55c593cfa2a8666b2f9a947a027195579" from runtime service failed: rpc error: code = Unknown desc = sandbox container "941e457ee43aa17f444925caabebc99b01383173ca04dcbf72efc3aab3c73718" is not running
server_1  | E0730 06:48:28.325582       1 kuberuntime_manager.go:784] container start failed: RunContainerError: sandbox container "941e457ee43aa17f444925caabebc99b01383173ca04dcbf72efc3aab3c73718" is not running
server_1  | E0730 06:48:28.325630       1 pod_workers.go:190] Error syncing pod b078cf4a-b295-11e9-a8da-0242ac1a0002 ("create-pvc-afe2d310-b295-11e9-a8da-0242ac1a0002_local-path-storage(b078cf4a-b295-11e9-a8da-0242ac1a0002)"), skipping: failed to "StartContainer" for "local-path-create" with RunContainerError: "sandbox container \"941e457ee43aa17f444925caabebc99b01383173ca04dcbf72efc3aab3c73718\" is not running"
server_1  | E0730 06:48:34.108233       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:48:35.136874       1 remote_runtime.go:226] StartContainer "5865057756394c200e72b2124651e71bbc5c7068a36b66c8a1a2f1c697608ed3" from runtime service failed: rpc error: code = Unknown desc = sandbox "415e978c53ed8da7ded882b9d153110934530fd8d99aa593286bbafa9c7be194" not found: does not exist
server_1  | E0730 06:48:35.137200       1 kuberuntime_manager.go:784] container start failed: RunContainerError: sandbox "415e978c53ed8da7ded882b9d153110934530fd8d99aa593286bbafa9c7be194" not found: does not exist
server_1  | E0730 06:48:35.137355       1 pod_workers.go:190] Error syncing pod b1b3a94b-b295-11e9-a8da-0242ac1a0002 ("create-pvc-b0d641f7-b295-11e9-a8da-0242ac1a0002_local-path-storage(b1b3a94b-b295-11e9-a8da-0242ac1a0002)"), skipping: failed to "StartContainer" for "local-path-create" with RunContainerError: "sandbox \"415e978c53ed8da7ded882b9d153110934530fd8d99aa593286bbafa9c7be194\" not found: does not exist"
server_1  | E0730 06:48:44.685390       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | W0730 06:48:53.526267       1 pod_container_deletor.go:75] Container "b84d665f27e884b25ed5c65f6edee0004ee3004b3a2b9e936dcc5f78323113c9" not found in pod's containers
server_1  | E0730 06:48:54.839181       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:49:01.399019       1 pv_protection_controller.go:116] PV pvc-afe39303-b295-11e9-a8da-0242ac1a0002 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-afe39303-b295-11e9-a8da-0242ac1a0002": the object has been modified; please apply your changes to the latest version and try again
server_1  | E0730 06:49:01.419947       1 remote_runtime.go:132] StopPodSandbox "ab1957999d6af7ad336007e313eec3f12890b553ef163054d8f2936411e1a17c" from runtime service failed: rpc error: code = Unknown desc = failed to destroy network for sandbox "ab1957999d6af7ad336007e313eec3f12890b553ef163054d8f2936411e1a17c": unknown FS magic on "/var/run/netns/cni-f90b3c71-78fa-17f6-111e-90c7085cfced": 1021994
server_1  | E0730 06:49:01.420017       1 kuberuntime_manager.go:850] Failed to stop sandbox {"containerd" "ab1957999d6af7ad336007e313eec3f12890b553ef163054d8f2936411e1a17c"}
server_1  | E0730 06:49:01.420059       1 kubelet_pods.go:1085] Failed killing the pod "create-pvc-afe39303-b295-11e9-a8da-0242ac1a0002": failed to "KillPodSandbox" for "0209061d-b296-11e9-a8da-0242ac1a0002" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab1957999d6af7ad336007e313eec3f12890b553ef163054d8f2936411e1a17c\": unknown FS magic on \"/var/run/netns/cni-f90b3c71-78fa-17f6-111e-90c7085cfced\": 1021994"
server_1  | E0730 06:49:01.435228       1 pv_protection_controller.go:116] PV pvc-afe39303-b295-11e9-a8da-0242ac1a0002 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-afe39303-b295-11e9-a8da-0242ac1a0002": the object has been modified; please apply your changes to the latest version and try again
server_1  | E0730 06:49:05.070289       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:49:12.742342       1 pv_protection_controller.go:116] PV pvc-afe3dafd-b295-11e9-a8da-0242ac1a0002 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-afe3dafd-b295-11e9-a8da-0242ac1a0002": the object has been modified; please apply your changes to the latest version and try again
server_1  | E0730 06:49:12.792855       1 pv_protection_controller.go:116] PV pvc-afe3dafd-b295-11e9-a8da-0242ac1a0002 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-afe3dafd-b295-11e9-a8da-0242ac1a0002": the object has been modified; please apply your changes to the latest version and try again
server_1  | E0730 06:49:12.843792       1 pv_protection_controller.go:116] PV pvc-afe3dafd-b295-11e9-a8da-0242ac1a0002 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-afe3dafd-b295-11e9-a8da-0242ac1a0002": the object has been modified; please apply your changes to the latest version and try again
server_1  | W0730 06:49:12.925449       1 pod_container_deletor.go:75] Container "ab1957999d6af7ad336007e313eec3f12890b553ef163054d8f2936411e1a17c" not found in pod's containers
server_1  | E0730 06:49:16.517550       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:49:27.188909       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | W0730 06:49:28.429216       1 pod_container_deletor.go:75] Container "b29cba2e7ea293fe562c95bcbc73cb69891590f7ddf9968b8881bfac99b84f8d" not found in pod's containers
server_1  | E0730 06:49:35.605191       1 remote_runtime.go:380] ExecSync bc26261a76a9c5365e8cf800b7f668fb07d129b55c8046860de6030512f42a29 '/bin/sh -ec curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | \
server_1  | grep -E '".+"'
server_1  | ' from runtime service failed: rpc error: code = Unknown desc = failed to exec in container: timeout 1s exceeded
server_1  | E0730 06:49:39.994332       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | W0730 06:49:40.391075       1 pod_container_deletor.go:75] Container "dc506cfa424d259f530699d4a0a14d7ac6062a8c93118fc02548d7da4fba883e" not found in pod's containers
server_1  | E0730 06:49:52.360359       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | W0730 06:50:01.121126       1 pod_container_deletor.go:75] Container "9a89bd63e9602ef83a5be72a31396cf8dde83c9e5215259ac8433a4113f783d7" not found in pod's containers
server_1  | E0730 06:50:03.193805       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | W0730 06:50:07.614243       1 info.go:52] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
server_1  | E0730 06:50:14.314706       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:50:25.363957       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | W0730 06:50:31.613776       1 reflector.go:289] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: too old resource version: 363 (1181)
server_1  | E0730 06:50:41.528405       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:50:51.942449       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:51:02.553309       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:51:13.282968       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:51:23.650915       1 cri_stats_provider.go:576] Unable to fetch container log stats for path /var/log/pods/local-path-storage_create-pvc-b0d641f7-b295-11e9-a8da-0242ac1a0002_17709aba-b296-11e9-a8da-0242ac1a0002/local-path-create: failed command 'du' ($ nice -n 19 du -s -B 1) on path /var/log/pods/local-path-storage_create-pvc-b0d641f7-b295-11e9-a8da-0242ac1a0002_17709aba-b296-11e9-a8da-0242ac1a0002/local-path-create with error exit status 1 
server_1  | E0730 06:51:23.663244       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:51:34.762651       1 cri_stats_provider.go:576] Unable to fetch container log stats for path /var/log/pods/local-path-storage_create-pvc-b0d641f7-b295-11e9-a8da-0242ac1a0002_17709aba-b296-11e9-a8da-0242ac1a0002/local-path-create: failed command 'du' ($ nice -n 19 du -s -B 1) on path /var/log/pods/local-path-storage_create-pvc-b0d641f7-b295-11e9-a8da-0242ac1a0002_17709aba-b296-11e9-a8da-0242ac1a0002/local-path-create with error exit status 1 
server_1  | E0730 06:51:34.763589       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | W0730 06:51:36.428246       1 reflector.go:289] object-"default"/"noc-consul-client-config": watch of *v1.ConfigMap ended with: too old resource version: 581 (1305)
server_1  | E0730 06:51:46.383138       1 cri_stats_provider.go:576] Unable to fetch container log stats for path /var/log/pods/local-path-storage_create-pvc-b0d641f7-b295-11e9-a8da-0242ac1a0002_17709aba-b296-11e9-a8da-0242ac1a0002/local-path-create: failed command 'du' ($ nice -n 19 du -s -B 1) on path /var/log/pods/local-path-storage_create-pvc-b0d641f7-b295-11e9-a8da-0242ac1a0002_17709aba-b296-11e9-a8da-0242ac1a0002/local-path-create with error exit status 1 
server_1  | E0730 06:51:46.387042       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:51:56.512596       1 cri_stats_provider.go:576] Unable to fetch container log stats for path /var/log/pods/local-path-storage_create-pvc-b0d641f7-b295-11e9-a8da-0242ac1a0002_17709aba-b296-11e9-a8da-0242ac1a0002/local-path-create: failed command 'du' ($ nice -n 19 du -s -B 1) on path /var/log/pods/local-path-storage_create-pvc-b0d641f7-b295-11e9-a8da-0242ac1a0002_17709aba-b296-11e9-a8da-0242ac1a0002/local-path-create with error exit status 1 
server_1  | E0730 06:52:15.439108       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | W0730 06:52:07.053851       1 reflector.go:289] object-"local-path-storage"/"local-path-config": watch of *v1.ConfigMap ended with: too old resource version: 363 (1386)
server_1  | W0730 06:52:19.155969       1 reflector.go:289] object-"kube-system"/"traefik": watch of *v1.ConfigMap ended with: too old resource version: 439 (1413)
server_1  | E0730 06:52:26.600387       1 cri_stats_provider.go:576] Unable to fetch container log stats for path /var/log/pods/local-path-storage_create-pvc-b0d641f7-b295-11e9-a8da-0242ac1a0002_17709aba-b296-11e9-a8da-0242ac1a0002/local-path-create: failed command 'du' ($ nice -n 19 du -s -B 1) on path /var/log/pods/local-path-storage_create-pvc-b0d641f7-b295-11e9-a8da-0242ac1a0002_17709aba-b296-11e9-a8da-0242ac1a0002/local-path-create with error exit status 1 
server_1  | E0730 06:52:40.436702       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:52:51.362208       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:53:05.533209       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | W0730 06:53:08.182048       1 reflector.go:289] object-"default"/"noc-static-nginx-configmap": watch of *v1.ConfigMap ended with: too old resource version: 964 (1534)
server_1  | E0730 06:53:15.148699       1 remote_runtime.go:204] CreateContainer in sandbox "de1dac316faff4dbb7528c2d3ddbcfc4c4599d95e84bfec9a278ac8f390bc875" from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
server_1  | E0730 06:53:15.148805       1 kuberuntime_manager.go:754] init container start failed: CreateContainerError: context deadline exceeded
server_1  | E0730 06:53:15.148845       1 pod_workers.go:190] Error syncing pod b0544a86-b295-11e9-a8da-0242ac1a0002 ("noc-clickhouse-7cd4d6cc57-4v4g5_default(b0544a86-b295-11e9-a8da-0242ac1a0002)"), skipping: failed to "StartContainer" for "config" with CreateContainerError: "context deadline exceeded"
server_1  | E0730 06:53:15.631969       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:53:40.549652       1 status.go:71] apiserver received an error that is not an metav1.Status: sqlite3.Error{Code:5, ExtendedCode:5, err:"database is locked"}
server_1  | E0730 06:53:25.711639       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get cgroup stats for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": failed to get container info for "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy": unknown container "/docker/50f3c0bc445b2b68d798efd537850f8c8b1b9acb703716908cb1b0c335e66172/kube-proxy"
server_1  | E0730 06:53:29.431228       1 controller.go:115] failed to ensure node lease exists, will retry in 200ms, error: Get https://server:6443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/k3s?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
server_1  | W0730 06:53:30.091626       1 reflector.go:289] object-"default"/"noc-noc-configmap": watch of *v1.ConfigMap ended with: too old resource version: 581 (1555)
server_1  | W0730 06:53:40.556481       1 status_manager.go:501] Failed to update status for pod "noc-clickhouse-7cd4d6cc57-4v4g5_default(b0544a86-b295-11e9-a8da-0242ac1a0002)": failed to patch status "{\"status\":{\"initContainerStatuses\":[{\"image\":\"alpine:3.10\",\"imageID\":\"\",\"lastState\":{},\"name\":\"config\",\"ready\":false,\"restartCount\":0,\"state\":{\"waiting\":{\"message\":\"context deadline exceeded\",\"reason\":\"CreateContainerError\"}}}],\"podIP\":\"10.42.0.49\"}}" for pod "default"/"noc-clickhouse-7cd4d6cc57-4v4g5": database is locked
server_1  | E0730 06:53:34.125062       1 remote_runtime.go:204] CreateContainer in sandbox "de1dac316faff4dbb7528c2d3ddbcfc4c4599d95e84bfec9a278ac8f390bc875" from runtime service failed: rpc error: code = Unknown desc = failed to reserve container name "config_noc-clickhouse-7cd4d6cc57-4v4g5_default_b0544a86-b295-11e9-a8da-0242ac1a0002_0": name "config_noc-clickhouse-7cd4d6cc57-4v4g5_default_b0544a86-b295-11e9-a8da-0242ac1a0002_0" is reserved for "2636738cfa0b029ebc8d98313e3110ba7a5d17ef3756a6643680d6d848bd0cc6"
server_1  | E0730 06:53:40.571836       1 kuberuntime_manager.go:754] init container start failed: CreateContainerError: failed to reserve container name "config_noc-clickhouse-7cd4d6cc57-4v4g5_default_b0544a86-b295-11e9-a8da-0242ac1a0002_0": name "config_noc-clickhouse-7cd4d6cc57-4v4g5_default_b0544a86-b295-11e9-a8da-0242ac1a0002_0" is reserved for "2636738cfa0b029ebc8d98313e3110ba7a5d17ef3756a6643680d6d848bd0cc6"
server_1  | E0730 06:53:40.572147       1 pod_workers.go:190] Error syncing pod b0544a86-b295-11e9-a8da-0242ac1a0002 ("noc-clickhouse-7cd4d6cc57-4v4g5_default(b0544a86-b295-11e9-a8da-0242ac1a0002)"), skipping: failed to "StartContainer" for "config" with CreateContainerError: "failed to reserve container name \"config_noc-clickhouse-7cd4d6cc57-4v4g5_default_b0544a86-b295-11e9-a8da-0242ac1a0002_0\": name \"config_noc-clickhouse-7cd4d6cc57-4v4g5_default_b0544a86-b295-11e9-a8da-0242ac1a0002_0\" is reserved for \"2636738cfa0b029ebc8d98313e3110ba7a5d17ef3756a6643680d6d848bd0cc6\""
server_1  | E0730 06:53:41.005614       1 remote_runtime.go:204] CreateContainer in sandbox "de1dac316faff4dbb7528c2d3ddbcfc4c4599d95e84bfec9a278ac8f390bc875" from runtime service failed: rpc error: code = Unknown desc = failed to reserve container name "config_noc-clickhouse-7cd4d6cc57-4v4g5_default_b0544a86-b295-11e9-a8da-0242ac1a0002_0": name "config_noc-clickhouse-7cd4d6cc57-4v4g5_default_b0544a86-b295-11e9-a8da-0242ac1a0002_0" is reserved for "2636738cfa0b029ebc8d98313e3110ba7a5d17ef3756a6643680d6d848bd0cc6"
server_1  | E0730 06:53:41.122760       1 kuberuntime_manager.go:754] init container start failed: CreateContainerError: failed to reserve container name "config_noc-clickhouse-7cd4d6cc57-4v4g5_default_b0544a86-b295-11e9-a8da-0242ac1a0002_0": name "config_noc-clickhouse-7cd4d6cc57-4v4g5_default_b0544a86-b295-11e9-a8da-0242ac1a0002_0" is reserved for "2636738cfa0b029ebc8d98313e3110ba7a5d17ef3756a6643680d6d848bd0cc6"
server_1  | E0730 06:53:41.122862       1 pod_workers.go:190] Error syncing pod b0544a86-b295-11e9-a8da-0242ac1a0002 ("noc-clickhouse-7cd4d6cc57-4v4g5_default(b0544a86-b295-11e9-a8da-0242ac1a0002)"), skipping: failed to "StartContainer" for "config" with CreateContainerError: "failed to reserve container name \"config_noc-clickhouse-7cd4d6cc57-4v4g5_default_b0544a86-b295-11e9-a8da-0242ac1a0002_0\": name \"config_noc-clickhouse-7cd4d6cc57-4v4g5_default_b0544a86-b295-11e9-a8da-0242ac1a0002_0\" is reserved for \"2636738cfa0b029ebc8d98313e3110ba7a5d17ef3756a6643680d6d848bd0cc6\""
server_1  | time="2019-07-30T06:54:02.268485784Z" level=fatal msg="leaderelection lost for k3s"
freeseacher commented 5 years ago

My debugging ended here: https://github.com/kubernetes/kubernetes/issues/3312 and here: https://github.com/rancher/norman/blob/ea122abac582d745a00dba0aaf946c29ee8d9d90/leader/leader.go#L55

erikwilson commented 5 years ago

Looking at the code, you should be able to set the CATTLE_DEV_MODE environment variable on older versions of k3s, or the DEV_LEADERELECTION environment variable on newer versions, to change the deadline from seconds to hours. I am curious whether those variables help at all; the lease duration seems pretty reasonable at 45 seconds, though.
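
For anyone who wants to experiment with this, a minimal sketch of enabling the relaxed deadline on a manually started server (not a systemd install; which variable applies depends on the k3s version as described above, and any non-empty value is expected to work):

# sketch: start the server with the development leader-election deadline enabled
# CATTLE_DEV_MODE applies to older releases, DEV_LEADERELECTION to newer ones
CATTLE_DEV_MODE=true k3s server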

freeseacher commented 5 years ago

Yep, they are. But iowait is the main problem I'm fighting with.

One of my troubles was that containerd pulled the same image 25 times; that can be solved like this:

command: server --kubelet-arg serialize-image-pulls=true
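
The line above looks like a docker-compose command entry; on a non-compose install the same kubelet flag can be passed straight to the binary (a sketch, assuming a standard k3s server start):

k3s server --kubelet-arg serialize-image-pulls=true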

harshsharma22 commented 4 years ago

Got the same issue. My server node becomes NotReady, and I get this in my logs: Error updating node status, will retry: error getting node "k3s": Get https://127.0.0.1:6445/api/v1/nodes/k3s?resourceVersion=0&timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers). Most probably the CPU is throttling at the same time.

unixfox commented 3 years ago

I have the same issue on a basic droplet with 2GB of RAM and an external datastore. I tried to install Rancher, and this error occurred after a few minutes. k3s then restarted automatically, but it kept exiting after around 5 minutes for the same reason. After a few automatic restarts of k3s the VPS slowed down considerably and became pretty much unresponsive.

I think these restarts create a snowball effect: the load on the server gets worse, and every time k3s restarts it is unable to fully recover because the disk has more and more difficulty handling the load.

@erikwilson

Looking at the code, you should be able to set the CATTLE_DEV_MODE environment variable on older versions of k3s, or the DEV_LEADERELECTION environment variable on newer versions, to change the deadline from seconds to hours. I am curious whether those variables help at all; the lease duration seems pretty reasonable at 45 seconds, though.

How does one change these environment variables? I tried to find some documentation about that, and even looked at the k3s source code, but I couldn't find anything about what to set these environment variables to.

erikwilson commented 3 years ago

If installing with systemd, there is an env file created at /etc/systemd/system/k3s.service.env. It looks like CATTLE_DEV_MODE is the current env var to set for the "wrangler" dependency.
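
As a concrete sketch, assuming the default systemd install and the env file path above (the value just needs to be non-empty, see below):

echo 'CATTLE_DEV_MODE=true' >> /etc/systemd/system/k3s.service.env
systemctl restart k3s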

The source of issues like this is usually an artificial limit on some resource. Some cloud providers throttle disk or network activity after a certain amount of traffic or time; typically they allow 'bursting' but not sustained speed over long periods. You are likely to see other problems even after changing the timeouts for wrangler. The best mitigation for problems like this may be to use higher-tier allocations for VMs or databases.

It would help to report k3s versions and instance sizes; you might also compare with the v1.19 releases to see if there is any improvement.

unixfox commented 3 years ago

Thank you for your reply.

If installing with systemd, there is an env file created at /etc/systemd/system/k3s.service.env. It looks like CATTLE_DEV_MODE is the current env var to set for the "wrangler" dependency.

What value should CATTLE_DEV_MODE be set to in order to increase the timeout? Is it CATTLE_DEV_MODE=true?

It would help to report k3s versions and instance sizes; you might also compare with the v1.19 releases to see if there is any improvement.

I can reproduce the issue on v1.18.9+k3s1, v1.17.12+k3s1 and v1.16.15+k3s1, but I haven't tried 1.19 yet. The instance (a cluster of one node) is the basic droplet with 2GB of RAM and 1 vCPU from this page: https://www.digitalocean.com/pricing/. I've tried PostgreSQL, MariaDB and etcd; with all of them it crashed while installing Rancher, but it didn't when using the default datastore (SQLite). The external datastore is hosted on a basic droplet with 1GB of RAM.

erikwilson commented 3 years ago

Yeah, true or any non-empty value should work.

A basic droplet (especially a small one) is probably part of the issue, since it is designed for bursty workloads. Use a general-purpose droplet and you will probably see the problem go away. Unfortunately, DigitalOcean doesn't seem to provide upfront information on those limits, at least none that I can find easily. If they limit the number of connections to the database, like AWS does, that may also be part of the issue. Does DO provide any sort of monitoring/traffic information which might be helpful here?

unixfox commented 3 years ago

If they limit the number of connections to the database, like AWS does, that may also be part of the issue.

I just wanted to clarify that I'm not using their managed database solution; I'm hosting the datastore myself on a basic droplet with 1GB of RAM and 1 vCPU.

Does DO provide any sort of monitoring/traffic information which might be helpful here?

Yes, it does, and I can provide some graphs after trying k3s 1.19 with an external datastore while installing Rancher.

stale[bot] commented 3 years ago

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.