prometheus-community / helm-charts

Prometheus community Helm charts

Kube-Prometheus-Stack Helm Chart v14.4.0: Some Scrape Targets Are Unavailable On macOS Catalina 10.15.7 When Using Default Values #812

Closed dcs3spp closed 2 years ago

dcs3spp commented 3 years ago

Also asked as a question on Stack Overflow.

What happened? The myrelease-name-prometheus-node-exporter service fails, with errors from its daemonset, after the kube-prometheus-stack helm chart is installed on a Docker Desktop for Mac Kubernetes cluster.

The scrape targets for kube-scheduler (http://192.168.65.4:10251/metrics), kube-proxy (http://192.168.65.4:10249/metrics), kube-etcd (http://192.168.65.4:2379/metrics), kube-controller-manager (http://192.168.65.4:10252/metrics) and node-exporter (http://192.168.65.4:9100/metrics) are marked as unhealthy. All show connection refused, except for kube-etcd which shows connection reset by peer.

I have installed kube-prometheus-stack as a dependency in my helm chart on a local Docker for Mac Kubernetes cluster v1.19.7. I have also tried this on a minikube cluster using the hyperkit vm-driver, with the same result.

Chart.yaml

apiVersion: v2
appVersion: "0.0.1"
description: A Helm chart for flaskapi deployment
name: flaskapi
version: 0.0.1
dependencies:
- name: kube-prometheus-stack
  version: "14.4.0"
  repository: "https://prometheus-community.github.io/helm-charts"
- name: ingress-nginx
  version: "3.25.0"
  repository: "https://kubernetes.github.io/ingress-nginx"
- name: redis
  version: "12.9.0"
  repository: "https://charts.bitnami.com/bitnami"

Values.yaml

docker_image_tag: dcs3spp/
hostname: flaskapi-service
redis_host: flaskapi-redis-master.default.svc.cluster.local 
redis_port: "6379"

prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
  extraScrapeConfigs: |
    - job_name: 'flaskapi'
      static_configs:
        - targets: ['flask-api-service:4444']

Did you expect to see something different? All Kubernetes components start successfully and all scrape targets are marked as healthy.

How to reproduce it (as minimally and precisely as possible): On a Docker Desktop for macOS environment, install the kube-prometheus-stack helm chart v14.4.0 with default values, inspect the status of the aforementioned failed scrape targets, and view the logs of the myrelease-name-prometheus-node-exporter pod(s).
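For reference, the install was roughly as follows (the chart directory path is a placeholder; the release name flaskapi matches the resource names in the kubectl get all output further below, and the dependencies come from the Chart.yaml above):

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm dependency update ./flaskapi
helm install flaskapi ./flaskapi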

Environment:
macOS Catalina 10.15.7
Docker Desktop for Mac 3.2.2 (61853) with Docker Engine v20.10.5
Local Kubernetes 1.19.7 cluster provided by Docker Desktop for Mac

    Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:15:20Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

Kubernetes cluster provided by Docker Desktop for Mac


release-name-prometheus-node-exporter error log

MountVolume.SetUp failed for volume "flaskapi-prometheus-node-exporter-token-zft28" : failed to sync secret cache: timed out waiting for the condition
Error: failed to start container "node-exporter": Error response from daemon: path / is mounted on / but it is not a shared or slave mount
Back-off restarting failed container

Anything else we need to know?:

kubectl get all

NAME                                                         READY   STATUS             RESTARTS   AGE
pod/alertmanager-flaskapi-kube-prometheus-s-alertmanager-0   2/2     Running            0          16m
pod/flask-deployment-775fcf8ff-2hp9s                         1/1     Running            0          16m
pod/flask-deployment-775fcf8ff-4qdjn                         1/1     Running            0          16m
pod/flask-deployment-775fcf8ff-6bvmv                         1/1     Running            0          16m
pod/flaskapi-grafana-6cb58f6656-77rqk                        2/2     Running            0          16m
pod/flaskapi-ingress-nginx-controller-ccfc7b6df-qvl7d        1/1     Running            0          16m
pod/flaskapi-kube-prometheus-s-operator-69f4bcf865-tq4q2     1/1     Running            0          16m
pod/flaskapi-kube-state-metrics-67c7f5f854-hbr27             1/1     Running            0          16m
pod/flaskapi-prometheus-node-exporter-7hgnm                  0/1     CrashLoopBackOff   8          16m
pod/flaskapi-redis-master-0                                  1/1     Running            0          16m
pod/flaskapi-redis-slave-0                                   1/1     Running            0          16m
pod/flaskapi-redis-slave-1                                   1/1     Running            0          15m
pod/prometheus-flaskapi-kube-prometheus-s-prometheus-0       2/2     Running            0          16m

NAME                                                  TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/alertmanager-operated                         ClusterIP      None             <none>        9093/TCP,9094/TCP,9094/UDP   16m
service/flask-api-service                             ClusterIP      10.108.242.86    <none>        4444/TCP                     16m
service/flaskapi-grafana                              ClusterIP      10.98.186.112    <none>        80/TCP                       16m
service/flaskapi-ingress-nginx-controller             LoadBalancer   10.102.217.51    localhost     80:30347/TCP,443:31422/TCP   16m
service/flaskapi-ingress-nginx-controller-admission   ClusterIP      10.99.21.136     <none>        443/TCP                      16m
service/flaskapi-kube-prometheus-s-alertmanager       ClusterIP      10.107.215.73    <none>        9093/TCP                     16m
service/flaskapi-kube-prometheus-s-operator           ClusterIP      10.107.162.227   <none>        443/TCP                      16m
service/flaskapi-kube-prometheus-s-prometheus         ClusterIP      10.96.168.75     <none>        9090/TCP                     16m
service/flaskapi-kube-state-metrics                   ClusterIP      10.100.118.21    <none>        8080/TCP                     16m
service/flaskapi-prometheus-node-exporter             ClusterIP      10.97.61.162     <none>        9100/TCP                     16m
service/flaskapi-redis-headless                       ClusterIP      None             <none>        6379/TCP                     16m
service/flaskapi-redis-master                         ClusterIP      10.96.192.160    <none>        6379/TCP                     16m
service/flaskapi-redis-slave                          ClusterIP      10.107.119.108   <none>        6379/TCP                     16m
service/kubernetes                                    ClusterIP      10.96.0.1        <none>        443/TCP                      5d1h
service/prometheus-operated                           ClusterIP      None             <none>        9090/TCP                     16m

NAME                                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/flaskapi-prometheus-node-exporter   1         1         0       1            0           <none>          16m

NAME                                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/flask-deployment                      3/3     3            3           16m
deployment.apps/flaskapi-grafana                      1/1     1            1           16m
deployment.apps/flaskapi-ingress-nginx-controller     1/1     1            1           16m
deployment.apps/flaskapi-kube-prometheus-s-operator   1/1     1            1           16m
deployment.apps/flaskapi-kube-state-metrics           1/1     1            1           16m

NAME                                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/flask-deployment-775fcf8ff                       3         3         3       16m
replicaset.apps/flaskapi-grafana-6cb58f6656                      1         1         1       16m
replicaset.apps/flaskapi-ingress-nginx-controller-ccfc7b6df      1         1         1       16m
replicaset.apps/flaskapi-kube-prometheus-s-operator-69f4bcf865   1         1         1       16m
replicaset.apps/flaskapi-kube-state-metrics-67c7f5f854           1         1         1       16m

NAME                                                                    READY   AGE
statefulset.apps/alertmanager-flaskapi-kube-prometheus-s-alertmanager   1/1     16m
statefulset.apps/flaskapi-redis-master                                  1/1     16m
statefulset.apps/flaskapi-redis-slave                                   2/2     16m
statefulset.apps/prometheus-flaskapi-kube-prometheus-s-prometheus       1/1     16m

After updating values.yaml to:

kube-prometheus-stack:
  prometheus-node-exporter:
    hostRootFsMount: false

The prometheus-node-exporter daemonset now starts, based on the fix from an earlier issue. However, the scrape targets mentioned above remain unhealthy with the error Get "http://192.168.65.4:<port_num>/metrics": dial tcp 192.168.65.4:<port_num>: connect: connection refused.
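To confirm the target status I port-forwarded the Prometheus service (name taken from the kubectl get all output above) and checked the targets page, roughly:

kubectl port-forward svc/flaskapi-kube-prometheus-s-prometheus 9090:9090
# then open http://localhost:9090/targets in a browser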

Tried further investigation of kube-scheduler by setting up a port-forward and visiting http://localhost:10251/metrics; the commands are sketched below, and the log output from the pod follows.
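A rough sketch of the port-forward (the kube-scheduler pod name on Docker Desktop is assumed here; the first command looks it up):

kubectl -n kube-system get pods | grep kube-scheduler
kubectl -n kube-system port-forward kube-scheduler-docker-desktop 10251
curl http://localhost:10251/metrics

kube-scheduler pod log output: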

I0401 12:41:46.438140       1 registry.go:173] Registering SelectorSpread plugin
I0401 12:41:46.438251       1 registry.go:173] Registering SelectorSpread plugin
I0401 12:41:46.776132       1 serving.go:331] Generated self-signed cert in-memory
W0401 12:41:49.935112       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0401 12:41:49.935338       1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0401 12:41:49.935419       1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
W0401 12:41:49.935456       1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0401 12:41:49.957792       1 registry.go:173] Registering SelectorSpread plugin
I0401 12:41:49.957938       1 registry.go:173] Registering SelectorSpread plugin
I0401 12:41:49.978214       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0401 12:41:49.978270       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0401 12:41:49.979536       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
E0401 12:41:49.980686       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0401 12:41:49.980853       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0401 12:41:49.981578       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0401 12:41:49.981760       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0401 12:41:49.982216       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0401 12:41:49.982979       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0401 12:41:49.983157       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0401 12:41:49.983549       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0401 12:41:49.983973       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0401 12:41:49.984143       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0401 12:41:49.984253       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0401 12:41:49.984328       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
I0401 12:41:49.979686       1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0401 12:41:49.983558       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0401 12:41:50.850095       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0401 12:41:50.854403       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0401 12:41:50.911345       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0401 12:41:50.929434       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0401 12:41:50.981549       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0401 12:41:50.996773       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0401 12:41:51.026263       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0401 12:41:51.039003       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0401 12:41:51.047324       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0401 12:41:51.148615       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
I0401 12:41:53.378670       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I0401 12:41:54.281163       1 leaderelection.go:243] attempting to acquire leader lease  kube-system/kube-scheduler...
I0401 12:41:54.293715       1 leaderelection.go:253] successfully acquired lease kube-system/kube-scheduler

If I run in minikube on macOS with the hyperkit vm-driver, the node-exporter daemonset starts successfully, even with:

kube-prometheus-stack:
  prometheus-node-exporter:
    hostRootFsMount: true

The kube-proxy scrape target also appears to be available in minikube, which I verified using a port forward (kubectl -n kube-system port-forward kube-proxy-sxw8k 10249) and visiting http://localhost:10249/metrics. This also works with a port forward on the docker-desktop cluster, yet the target still appears as failing in the Prometheus targets page.
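For completeness, the kube-proxy check was roughly (pod name from my minikube cluster):

kubectl -n kube-system port-forward kube-proxy-sxw8k 10249
curl -s http://localhost:10249/metrics | head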

So in minikube on macOS with the hyperkit vm-driver and default helm chart values, the following scrape targets are unavailable (the chart values controlling these targets are sketched after the list):

kube-controller-manager scrape target - connection refused
kube-etcd scrape target - connection reset by peer
kube-scheduler scrape target - connection refused
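These targets come from the kubeControllerManager, kubeEtcd and kubeScheduler sections of the kube-prometheus-stack values. A minimal sketch of how they could be disabled on a cluster where the control-plane components are not reachable (I have not applied this; the defaults keep them enabled):

kube-prometheus-stack:
  kubeControllerManager:
    enabled: false
  kubeScheduler:
    enabled: false
  kubeEtcd:
    enabled: false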

etcd-minikube logs:

2021-04-01 16:41:18.528348 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-04-01 16:41:28.528017 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-04-01 16:41:38.527905 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-04-01 16:41:43.305068 I | embed: rejected connection from "172.17.0.16:43812" (error "tls: first record does not look like a TLS handshake", ServerName "")
2021-04-01 16:41:48.528010 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-04-01 16:41:58.528190 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-04-01 16:42:08.528684 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-04-01 16:42:13.305000 I | embed: rejected connection from "172.17.0.16:44154" (error "tls: first record does not look like a TLS handshake", ServerName "")

kube-controller-manager-minikube logs:

Flag --port has been deprecated, see --secure-port instead.
I0401 16:06:14.332030       1 serving.go:331] Generated self-signed cert in-memory
I0401 16:06:14.996280       1 controllermanager.go:176] Version: v1.20.2
I0401 16:06:15.002308       1 secure_serving.go:197] Serving securely on 127.0.0.1:10257
I0401 16:06:15.002989       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0401 16:06:15.003201       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0401 16:06:15.003384       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0401 16:06:21.539935       1 shared_informer.go:240] Waiting for caches to sync for tokens
I0401 16:06:21.640301       1 shared_informer.go:247] Caches are synced for tokens 
I0401 16:06:21.677374       1 controllermanager.go:554] Started "disruption"
I0401 16:06:21.677824       1 disruption.go:331] Starting disruption controller
I0401 16:06:21.678146       1 shared_informer.go:240] Waiting for caches to sync for disruption
I0401 16:06:21.719177       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
I0401 16:06:21.719338       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
I0401 16:06:21.719561       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
I0401 16:06:21.720313       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
I0401 16:06:21.720564       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
I0401 16:06:21.720875       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
I0401 16:06:21.721627       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
I0401 16:06:21.722334       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
I0401 16:06:21.721912       1 controllermanager.go:554] Started "csrsigning"
I0401 16:06:21.721922       1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
I0401 16:06:21.721928       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
I0401 16:06:21.722005       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
I0401 16:06:21.723584       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
I0401 16:06:21.754909       1 controllermanager.go:554] Started "bootstrapsigner"
I0401 16:06:21.755265       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
I0401 16:06:21.786524       1 controllermanager.go:554] Started "tokencleaner"
I0401 16:06:21.787000       1 tokencleaner.go:118] Starting token cleaner controller
I0401 16:06:21.789145       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
I0401 16:06:21.789319       1 shared_informer.go:247] Caches are synced for token_cleaner 
E0401 16:06:21.824456       1 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0401 16:06:21.824848       1 controllermanager.go:546] Skipping "service"
I0401 16:06:21.852762       1 controllermanager.go:554] Started "pvc-protection"
I0401 16:06:21.853020       1 pvc_protection_controller.go:110] Starting PVC protection controller
I0401 16:06:21.853268       1 shared_informer.go:240] Waiting for caches to sync for PVC protection
I0401 16:06:21.947680       1 serviceaccounts_controller.go:117] Starting service account controller
I0401 16:06:21.947962       1 shared_informer.go:240] Waiting for caches to sync for service account
I0401 16:06:21.948026       1 controllermanager.go:554] Started "serviceaccount"
I0401 16:06:22.194644       1 controllermanager.go:554] Started "replicationcontroller"
I0401 16:06:22.194729       1 replica_set.go:182] Starting replicationcontroller controller
I0401 16:06:22.194736       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
I0401 16:06:22.343114       1 controllermanager.go:554] Started "csrapproving"
I0401 16:06:22.343202       1 certificate_controller.go:118] Starting certificate controller "csrapproving"
I0401 16:06:22.343209       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
I0401 16:06:22.492283       1 controllermanager.go:554] Started "csrcleaner"
I0401 16:06:22.492333       1 cleaner.go:82] Starting CSR cleaner controller
I0401 16:06:22.742766       1 node_lifecycle_controller.go:77] Sending events to api server
E0401 16:06:22.742877       1 core.go:232] failed to start cloud node lifecycle controller: no cloud provider provided
W0401 16:06:22.742893       1 controllermanager.go:546] Skipping "cloud-node-lifecycle"
I0401 16:06:23.010851       1 controllermanager.go:554] Started "persistentvolume-binder"
I0401 16:06:23.010955       1 pv_controller_base.go:307] Starting persistent volume controller
I0401 16:06:23.010965       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0401 16:06:23.243342       1 controllermanager.go:554] Started "endpointslicemirroring"
I0401 16:06:23.243468       1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
I0401 16:06:23.243535       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
I0401 16:06:23.497822       1 controllermanager.go:554] Started "persistentvolume-expander"
I0401 16:06:23.498203       1 expand_controller.go:310] Starting expand controller
I0401 16:06:23.498220       1 shared_informer.go:240] Waiting for caches to sync for expand
I0401 16:06:23.743975       1 controllermanager.go:554] Started "statefulset"
I0401 16:06:23.744221       1 stateful_set.go:146] Starting stateful set controller
I0401 16:06:23.744322       1 shared_informer.go:240] Waiting for caches to sync for stateful set
I0401 16:06:24.014811       1 controllermanager.go:554] Started "namespace"
I0401 16:06:24.014907       1 namespace_controller.go:200] Starting namespace controller
I0401 16:06:24.014982       1 shared_informer.go:240] Waiting for caches to sync for namespace
I0401 16:06:24.242650       1 controllermanager.go:554] Started "job"
I0401 16:06:24.242670       1 job_controller.go:148] Starting job controller
I0401 16:06:24.242722       1 shared_informer.go:240] Waiting for caches to sync for job
I0401 16:06:24.492719       1 controllermanager.go:554] Started "cronjob"
I0401 16:06:24.492884       1 cronjob_controller.go:96] Starting CronJob Manager
I0401 16:06:24.745461       1 controllermanager.go:554] Started "attachdetach"
I0401 16:06:24.745597       1 attach_detach_controller.go:328] Starting attach detach controller
I0401 16:06:24.745633       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0401 16:06:25.444390       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints
I0401 16:06:25.444498       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0401 16:06:25.444531       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I0401 16:06:25.444644       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch
I0401 16:06:25.444720       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.extensions
I0401 16:06:25.444759       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I0401 16:06:25.444783       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps
I0401 16:06:25.444863       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0401 16:06:25.445019       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I0401 16:06:25.445104       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I0401 16:06:25.445320       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
I0401 16:06:25.445381       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch
I0401 16:06:25.445481       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges
I0401 16:06:25.445521       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps
I0401 16:06:25.445533       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps
I0401 16:06:25.445644       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I0401 16:06:25.445684       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps
I0401 16:06:25.445832       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io
I0401 16:06:25.445960       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I0401 16:06:25.446092       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates
W0401 16:06:25.446127       1 shared_informer.go:494] resyncPeriod 17h55m56.960970235s is smaller than resyncCheckPeriod 19h7m1.626240343s and the informer has already started. Changing it to 19h7m1.626240343s
I0401 16:06:25.446168       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts
I0401 16:06:25.446319       1 controllermanager.go:554] Started "resourcequota"
I0401 16:06:25.446408       1 resource_quota_controller.go:273] Starting resource quota controller
I0401 16:06:25.446416       1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0401 16:06:25.446433       1 resource_quota_monitor.go:304] QuotaMonitor running
I0401 16:06:25.476187       1 controllermanager.go:554] Started "ttl"
I0401 16:06:25.476338       1 ttl_controller.go:121] Starting TTL controller
I0401 16:06:25.476366       1 shared_informer.go:240] Waiting for caches to sync for TTL
I0401 16:06:25.484145       1 node_lifecycle_controller.go:380] Sending events to api server.
I0401 16:06:25.484633       1 taint_manager.go:163] Sending events to api server.
I0401 16:06:25.484936       1 node_lifecycle_controller.go:508] Controller will reconcile labels.
I0401 16:06:25.485158       1 controllermanager.go:554] Started "nodelifecycle"
I0401 16:06:25.485558       1 node_lifecycle_controller.go:542] Starting node controller
I0401 16:06:25.485891       1 shared_informer.go:240] Waiting for caches to sync for taint
I0401 16:06:25.643007       1 controllermanager.go:554] Started "root-ca-cert-publisher"
I0401 16:06:25.643052       1 publisher.go:98] Starting root CA certificate configmap publisher
I0401 16:06:25.643906       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
I0401 16:06:26.342649       1 controllermanager.go:554] Started "horizontalpodautoscaling"
I0401 16:06:26.342730       1 horizontal.go:169] Starting HPA controller
I0401 16:06:26.342736       1 shared_informer.go:240] Waiting for caches to sync for HPA
I0401 16:06:26.592210       1 request.go:655] Throttling request took 1.04783843s, request: GET:https://192.168.64.75:8443/apis/extensions/v1beta1?timeout=32s
I0401 16:06:26.592542       1 controllermanager.go:554] Started "podgc"
I0401 16:06:26.592574       1 gc_controller.go:89] Starting GC controller
I0401 16:06:26.592741       1 shared_informer.go:240] Waiting for caches to sync for GC
I0401 16:06:26.844144       1 controllermanager.go:554] Started "replicaset"
W0401 16:06:26.844398       1 core.go:246] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
W0401 16:06:26.844536       1 controllermanager.go:546] Skipping "route"
W0401 16:06:26.844592       1 controllermanager.go:546] Skipping "ephemeral-volume"
I0401 16:06:26.844374       1 replica_set.go:182] Starting replicaset controller
I0401 16:06:26.844761       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
I0401 16:06:27.092788       1 controllermanager.go:554] Started "endpoint"
I0401 16:06:27.093115       1 endpoints_controller.go:184] Starting endpoint controller
I0401 16:06:27.093473       1 shared_informer.go:240] Waiting for caches to sync for endpoint
I0401 16:06:27.343182       1 controllermanager.go:554] Started "deployment"
I0401 16:06:27.343340       1 deployment_controller.go:153] Starting deployment controller
I0401 16:06:27.343499       1 shared_informer.go:240] Waiting for caches to sync for deployment
I0401 16:06:27.491907       1 node_ipam_controller.go:91] Sending events to api server.
I0401 16:06:37.500501       1 range_allocator.go:82] Sending events to api server.
I0401 16:06:37.500844       1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
I0401 16:06:37.500950       1 controllermanager.go:554] Started "nodeipam"
I0401 16:06:37.501079       1 node_ipam_controller.go:159] Starting ipam controller
I0401 16:06:37.501260       1 shared_informer.go:240] Waiting for caches to sync for node
I0401 16:06:37.525453       1 controllermanager.go:554] Started "clusterrole-aggregation"
I0401 16:06:37.525720       1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
I0401 16:06:37.525926       1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
I0401 16:06:37.542316       1 controllermanager.go:554] Started "pv-protection"
W0401 16:06:37.543612       1 controllermanager.go:546] Skipping "ttl-after-finished"
I0401 16:06:37.543362       1 pv_protection_controller.go:83] Starting PV protection controller
I0401 16:06:37.543917       1 shared_informer.go:240] Waiting for caches to sync for PV protection
I0401 16:06:37.558504       1 controllermanager.go:554] Started "endpointslice"
I0401 16:06:37.559120       1 endpointslice_controller.go:237] Starting endpoint slice controller
I0401 16:06:37.559220       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
I0401 16:06:37.577007       1 controllermanager.go:554] Started "daemonset"
I0401 16:06:37.577531       1 daemon_controller.go:285] Starting daemon sets controller
I0401 16:06:37.578096       1 shared_informer.go:240] Waiting for caches to sync for daemon sets
I0401 16:06:37.607803       1 controllermanager.go:554] Started "garbagecollector"
I0401 16:06:37.609142       1 garbagecollector.go:142] Starting garbage collector controller
I0401 16:06:37.614114       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0401 16:06:37.609468       1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0401 16:06:37.616534       1 graph_builder.go:289] GraphBuilder running
I0401 16:06:37.634823       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
I0401 16:06:37.650428       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
I0401 16:06:37.636324       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
I0401 16:06:37.650233       1 shared_informer.go:247] Caches are synced for service account 
I0401 16:06:37.654228       1 shared_informer.go:247] Caches are synced for PV protection 
I0401 16:06:37.694268       1 shared_informer.go:247] Caches are synced for endpoint 
I0401 16:06:37.694784       1 shared_informer.go:247] Caches are synced for ReplicationController 
I0401 16:06:37.699780       1 shared_informer.go:247] Caches are synced for expand 
W0401 16:06:37.703568       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0401 16:06:37.711094       1 shared_informer.go:247] Caches are synced for persistent volume 
I0401 16:06:37.715090       1 shared_informer.go:247] Caches are synced for namespace 
I0401 16:06:37.721173       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
I0401 16:06:37.723207       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
I0401 16:06:37.727143       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
I0401 16:06:37.744168       1 shared_informer.go:247] Caches are synced for crt configmap 
I0401 16:06:37.744516       1 shared_informer.go:247] Caches are synced for job 
I0401 16:06:37.744728       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
I0401 16:06:37.744949       1 shared_informer.go:247] Caches are synced for ReplicaSet 
I0401 16:06:37.744191       1 shared_informer.go:247] Caches are synced for HPA 
I0401 16:06:37.745174       1 shared_informer.go:247] Caches are synced for stateful set 
I0401 16:06:37.749194       1 shared_informer.go:247] Caches are synced for attach detach 
I0401 16:06:37.754341       1 shared_informer.go:247] Caches are synced for PVC protection 
I0401 16:06:37.760543       1 shared_informer.go:247] Caches are synced for endpoint_slice 
I0401 16:06:37.761479       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
I0401 16:06:37.777149       1 shared_informer.go:247] Caches are synced for TTL 
I0401 16:06:37.778826       1 shared_informer.go:247] Caches are synced for daemon sets 
I0401 16:06:37.787208       1 shared_informer.go:247] Caches are synced for taint 
I0401 16:06:37.787355       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
W0401 16:06:37.788067       1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0401 16:06:37.788136       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
I0401 16:06:37.788587       1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I0401 16:06:37.788638       1 taint_manager.go:187] Starting NoExecuteTaintManager
I0401 16:06:37.793651       1 shared_informer.go:247] Caches are synced for GC 
I0401 16:06:37.803380       1 shared_informer.go:247] Caches are synced for node 
I0401 16:06:37.803532       1 range_allocator.go:172] Starting range CIDR allocator
I0401 16:06:37.803542       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0401 16:06:37.803547       1 shared_informer.go:247] Caches are synced for cidrallocator 
I0401 16:06:37.879657       1 shared_informer.go:247] Caches are synced for disruption 
I0401 16:06:37.879737       1 disruption.go:339] Sending events to api server.
I0401 16:06:37.921980       1 shared_informer.go:247] Caches are synced for resource quota 
I0401 16:06:37.925681       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sxw8k"
I0401 16:06:37.943744       1 shared_informer.go:247] Caches are synced for deployment 
I0401 16:06:37.946624       1 shared_informer.go:247] Caches are synced for resource quota 
I0401 16:06:37.995092       1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
I0401 16:06:38.011439       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 1"
I0401 16:06:38.100948       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-h224q"
E0401 16:06:38.150826       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"13e4acd5-0d85-417a-805a-53a1b17643cd", ResourceVersion:"263", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752889982, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001d56200), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001d56220)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001d56240), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001be4c00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001d56260), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), 
FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001d56280), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001d562c0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001d137a0), Stdin:false, 
StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000daf808), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0001f73b0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00057a208)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000daf858)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0401 16:06:38.583978       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0401 16:06:38.616968       1 shared_informer.go:247] Caches are synced for garbage collector 
I0401 16:06:38.617009       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0401 16:06:38.684548       1 shared_informer.go:247] Caches are synced for garbage collector 
I0401 16:07:08.493765       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-f6647bd8c to 1"
I0401 16:07:08.517304       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-f6647bd8c" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-f6647bd8c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0401 16:07:08.532946       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-968bcb79 to 1"
I0401 16:07:08.537995       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-968bcb79" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-968bcb79-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0401 16:07:08.550459       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-f6647bd8c" failed with pods "dashboard-metrics-scraper-f6647bd8c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0401 16:07:08.565275       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-968bcb79" failed with pods "kubernetes-dashboard-968bcb79-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0401 16:07:08.566349       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-f6647bd8c" failed with pods "dashboard-metrics-scraper-f6647bd8c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0401 16:07:08.566858       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-f6647bd8c" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-f6647bd8c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0401 16:07:08.578092       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-968bcb79" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-968bcb79-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0401 16:07:08.579528       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-f6647bd8c" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-f6647bd8c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0401 16:07:08.579887       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-968bcb79" failed with pods "kubernetes-dashboard-968bcb79-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0401 16:07:08.579389       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-f6647bd8c" failed with pods "dashboard-metrics-scraper-f6647bd8c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0401 16:07:08.596984       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-968bcb79" failed with pods "kubernetes-dashboard-968bcb79-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0401 16:07:08.598551       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-f6647bd8c" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-f6647bd8c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0401 16:07:08.596984       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-f6647bd8c" failed with pods "dashboard-metrics-scraper-f6647bd8c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0401 16:07:08.600320       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-968bcb79" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-968bcb79-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0401 16:07:08.611198       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-f6647bd8c" failed with pods "dashboard-metrics-scraper-f6647bd8c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0401 16:07:08.611547       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-f6647bd8c" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-f6647bd8c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0401 16:07:08.611777       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-968bcb79" failed with pods "kubernetes-dashboard-968bcb79-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0401 16:07:08.611781       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-968bcb79" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-968bcb79-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0401 16:07:08.717368       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-f6647bd8c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-f6647bd8c-qslh6"
I0401 16:07:09.636526       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-968bcb79" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-968bcb79-fqqmn"
I0401 16:07:25.112561       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-56c4f8c9d6 to 1"
I0401 16:07:25.131201       1 event.go:291] "Event occurred" object="kube-system/metrics-server-56c4f8c9d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-56c4f8c9d6-n48pk"
I0401 16:09:02.848006       1 event.go:291] "Event occurred" object="default/flaskapi-ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-ingress-nginx-admission-create-lmzrc"
I0401 16:09:10.485778       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for thanosrulers.monitoring.coreos.com
I0401 16:09:10.485834       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podmonitors.monitoring.coreos.com
I0401 16:09:10.485856       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for alertmanagerconfigs.monitoring.coreos.com
I0401 16:09:10.485870       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for servicemonitors.monitoring.coreos.com
I0401 16:09:10.485893       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for alertmanagers.monitoring.coreos.com
I0401 16:09:10.485948       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for prometheusrules.monitoring.coreos.com
I0401 16:09:10.485970       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for probes.monitoring.coreos.com
I0401 16:09:10.485984       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for prometheuses.monitoring.coreos.com
I0401 16:09:10.486182       1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0401 16:09:10.786404       1 shared_informer.go:247] Caches are synced for resource quota 
I0401 16:09:11.852285       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0401 16:09:11.852325       1 shared_informer.go:247] Caches are synced for garbage collector 
I0401 16:09:13.758322       1 event.go:291] "Event occurred" object="default/flaskapi-ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0401 16:09:14.793618       1 event.go:291] "Event occurred" object="default/flaskapi-kube-prometheus-s-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-kube-prometheus-s-admission-create-mc9fl"
I0401 16:09:24.872127       1 event.go:291] "Event occurred" object="default/flaskapi-kube-prometheus-s-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0401 16:09:25.750615       1 event.go:291] "Event occurred" object="default/flask-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set flask-deployment-775fcf8ff to 3"
I0401 16:09:25.753240       1 event.go:291] "Event occurred" object="default/flaskapi-ingress-nginx-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set flaskapi-ingress-nginx-controller-ccfc7b6df to 1"
I0401 16:09:25.756355       1 event.go:291] "Event occurred" object="default/flaskapi-kube-prometheus-s-operator" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set flaskapi-kube-prometheus-s-operator-69f4bcf865 to 1"
I0401 16:09:25.760399       1 event.go:291] "Event occurred" object="default/flaskapi-grafana" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set flaskapi-grafana-6cb58f6656 to 1"
I0401 16:09:25.766380       1 event.go:291] "Event occurred" object="default/flaskapi-prometheus-node-exporter" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-prometheus-node-exporter-np264"
I0401 16:09:25.807738       1 event.go:291] "Event occurred" object="default/flask-deployment-775fcf8ff" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flask-deployment-775fcf8ff-nkg9c"
I0401 16:09:25.808661       1 event.go:291] "Event occurred" object="default/flaskapi-kube-prometheus-s-operator-69f4bcf865" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-kube-prometheus-s-operator-69f4bcf865-2776k"
I0401 16:09:25.833960       1 event.go:291] "Event occurred" object="default/flaskapi-kube-state-metrics" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set flaskapi-kube-state-metrics-67c7f5f854 to 1"
I0401 16:09:25.846457       1 event.go:291] "Event occurred" object="default/flaskapi-kube-state-metrics-67c7f5f854" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-kube-state-metrics-67c7f5f854-k49nn"
I0401 16:09:25.849658       1 event.go:291] "Event occurred" object="default/flask-deployment-775fcf8ff" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flask-deployment-775fcf8ff-xbdfn"
I0401 16:09:25.850402       1 event.go:291] "Event occurred" object="default/flask-deployment-775fcf8ff" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flask-deployment-775fcf8ff-j5fmd"
I0401 16:09:25.850472       1 event.go:291] "Event occurred" object="default/flaskapi-ingress-nginx-controller-ccfc7b6df" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-ingress-nginx-controller-ccfc7b6df-wvmpp"
I0401 16:09:25.854773       1 event.go:291] "Event occurred" object="default/flaskapi-grafana-6cb58f6656" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-grafana-6cb58f6656-c5nhd"
I0401 16:09:25.961435       1 event.go:291] "Event occurred" object="default/flaskapi-redis-master" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim redis-data-flaskapi-redis-master-0 Pod flaskapi-redis-master-0 in StatefulSet flaskapi-redis-master success"
I0401 16:09:25.967957       1 event.go:291] "Event occurred" object="default/flaskapi-redis-slave" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim redis-data-flaskapi-redis-slave-0 Pod flaskapi-redis-slave-0 in StatefulSet flaskapi-redis-slave success"
I0401 16:09:26.114890       1 event.go:291] "Event occurred" object="default/redis-data-flaskapi-redis-master-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0401 16:09:26.124507       1 event.go:291] "Event occurred" object="default/redis-data-flaskapi-redis-slave-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0401 16:09:26.142136       1 event.go:291] "Event occurred" object="default/flaskapi-redis-master" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod flaskapi-redis-master-0 in StatefulSet flaskapi-redis-master successful"
I0401 16:09:26.142200       1 event.go:291] "Event occurred" object="default/flaskapi-redis-slave" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod flaskapi-redis-slave-0 in StatefulSet flaskapi-redis-slave successful"
E0401 16:09:26.143787       1 daemon_controller.go:320] default/flaskapi-prometheus-node-exporter failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"flaskapi-prometheus-node-exporter", GenerateName:"", Namespace:"default", SelfLink:"", UID:"857c38d0-952d-48d0-883c-0e068bc798ba", ResourceVersion:"956", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752890165, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"prometheus-node-exporter", "app.kubernetes.io/managed-by":"Helm", "chart":"prometheus-node-exporter-1.16.2", "heritage":"Helm", "jobLabel":"node-exporter", "release":"flaskapi"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "meta.helm.sh/release-name":"flaskapi", "meta.helm.sh/release-namespace":"default"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"Go-http-client", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000dda320), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000dda340)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000dda360), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"prometheus-node-exporter", "chart":"prometheus-node-exporter-1.16.2", "heritage":"Helm", "jobLabel":"node-exporter", "release":"flaskapi"}, Annotations:map[string]string{"cluster-autoscaler.kubernetes.io/safe-to-evict":"true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"proc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000dda380), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"sys", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000dda3a0), 
EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"node-exporter", Image:"quay.io/prometheus/node-exporter:v1.1.2", Command:[]string(nil), Args:[]string{"--path.procfs=/host/proc", "--path.sysfs=/host/sys", "--web.listen-address=$(HOST_IP):9100", "--collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/.+)($|/)", "--collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"metrics", HostPort:9100, ContainerPort:9100, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"0.0.0.0", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"proc", ReadOnly:true, MountPath:"/host/proc", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"sys", ReadOnly:true, MountPath:"/host/sys", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc001930e10), ReadinessProbe:(*v1.Probe)(0xc001930e40), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000e9d310), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"flaskapi-prometheus-node-exporter", DeprecatedServiceAccount:"flaskapi-prometheus-node-exporter", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:true, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0006acd90), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", 
Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00057a8e8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000e9d37c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "flaskapi-prometheus-node-exporter": the object has been modified; please apply your changes to the latest version and try again
I0401 16:09:28.464271       1 event.go:291] "Event occurred" object="default/flaskapi-ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-ingress-nginx-admission-patch-g55gp"
I0401 16:09:32.825842       1 event.go:291] "Event occurred" object="default/flaskapi-ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0401 16:09:33.900875       1 event.go:291] "Event occurred" object="default/flaskapi-kube-prometheus-s-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-kube-prometheus-s-admission-patch-t5ll4"
I0401 16:09:36.013673       1 event.go:291] "Event occurred" object="default/flaskapi-kube-prometheus-s-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0401 16:10:32.234642       1 event.go:291] "Event occurred" object="default/alertmanager-flaskapi-kube-prometheus-s-alertmanager" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod alertmanager-flaskapi-kube-prometheus-s-alertmanager-0 in StatefulSet alertmanager-flaskapi-kube-prometheus-s-alertmanager successful"
I0401 16:10:32.528256       1 event.go:291] "Event occurred" object="default/prometheus-flaskapi-kube-prometheus-s-prometheus" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod prometheus-flaskapi-kube-prometheus-s-prometheus-0 in StatefulSet prometheus-flaskapi-kube-prometheus-s-prometheus successful"
I0401 16:11:59.086830       1 event.go:291] "Event occurred" object="default/flaskapi-redis-slave" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim redis-data-flaskapi-redis-slave-1 Pod flaskapi-redis-slave-1 in StatefulSet flaskapi-redis-slave success"
I0401 16:11:59.093912       1 event.go:291] "Event occurred" object="default/redis-data-flaskapi-redis-slave-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0401 16:11:59.100395       1 event.go:291] "Event occurred" object="default/flaskapi-redis-slave" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod flaskapi-redis-slave-1 in StatefulSet flaskapi-redis-slave successful"
I0401 16:20:42.744312       1 stateful_set.go:419] StatefulSet has been deleted default/flaskapi-redis-master
I0401 16:20:42.748185       1 stateful_set.go:419] StatefulSet has been deleted default/flaskapi-redis-slave
I0401 16:20:45.322946       1 stateful_set.go:419] StatefulSet has been deleted default/alertmanager-flaskapi-kube-prometheus-s-alertmanager
I0401 16:20:46.321341       1 stateful_set.go:419] StatefulSet has been deleted default/prometheus-flaskapi-kube-prometheus-s-prometheus
I0401 16:21:59.008899       1 event.go:291] "Event occurred" object="default/flaskapi-ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-ingress-nginx-admission-create-d2tbc"
I0401 16:22:00.896670       1 event.go:291] "Event occurred" object="default/flaskapi-ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0401 16:22:01.796330       1 event.go:291] "Event occurred" object="default/flaskapi-kube-prometheus-s-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-kube-prometheus-s-admission-create-rwsm7"
I0401 16:22:02.904789       1 event.go:291] "Event occurred" object="default/flaskapi-kube-prometheus-s-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0401 16:22:03.473790       1 event.go:291] "Event occurred" object="default/flaskapi-kube-state-metrics" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set flaskapi-kube-state-metrics-67c7f5f854 to 1"
I0401 16:22:03.485960       1 event.go:291] "Event occurred" object="default/flaskapi-kube-state-metrics-67c7f5f854" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-kube-state-metrics-67c7f5f854-lftt9"
I0401 16:22:03.502579       1 event.go:291] "Event occurred" object="default/flaskapi-ingress-nginx-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set flaskapi-ingress-nginx-controller-ccfc7b6df to 1"
I0401 16:22:03.503474       1 event.go:291] "Event occurred" object="default/flask-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set flask-deployment-775fcf8ff to 3"
I0401 16:22:03.503843       1 event.go:291] "Event occurred" object="default/flaskapi-kube-prometheus-s-operator" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set flaskapi-kube-prometheus-s-operator-69f4bcf865 to 1"
I0401 16:22:03.504561       1 event.go:291] "Event occurred" object="default/flaskapi-grafana" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set flaskapi-grafana-6cb58f6656 to 1"
I0401 16:22:03.505840       1 event.go:291] "Event occurred" object="default/flaskapi-prometheus-node-exporter" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-prometheus-node-exporter-rmvqt"
I0401 16:22:03.556109       1 event.go:291] "Event occurred" object="default/flaskapi-kube-prometheus-s-operator-69f4bcf865" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-kube-prometheus-s-operator-69f4bcf865-m5kb7"
I0401 16:22:03.610087       1 event.go:291] "Event occurred" object="default/flaskapi-ingress-nginx-controller-ccfc7b6df" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-ingress-nginx-controller-ccfc7b6df-wz78f"
I0401 16:22:03.610125       1 event.go:291] "Event occurred" object="default/flask-deployment-775fcf8ff" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flask-deployment-775fcf8ff-7svm5"
I0401 16:22:03.615615       1 event.go:291] "Event occurred" object="default/flaskapi-grafana-6cb58f6656" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-grafana-6cb58f6656-jj7k8"
I0401 16:22:03.646023       1 event.go:291] "Event occurred" object="default/flaskapi-redis-slave" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod flaskapi-redis-slave-0 in StatefulSet flaskapi-redis-slave successful"
I0401 16:22:03.654441       1 event.go:291] "Event occurred" object="default/flask-deployment-775fcf8ff" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flask-deployment-775fcf8ff-9g7np"
I0401 16:22:03.668777       1 event.go:291] "Event occurred" object="default/flaskapi-redis-master" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod flaskapi-redis-master-0 in StatefulSet flaskapi-redis-master successful"
I0401 16:22:03.668996       1 event.go:291] "Event occurred" object="default/flask-deployment-775fcf8ff" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flask-deployment-775fcf8ff-jshnj"
I0401 16:22:06.203558       1 event.go:291] "Event occurred" object="default/flaskapi-ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-ingress-nginx-admission-patch-5t8sj"
I0401 16:22:09.610975       1 event.go:291] "Event occurred" object="default/alertmanager-flaskapi-kube-prometheus-s-alertmanager" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod alertmanager-flaskapi-kube-prometheus-s-alertmanager-0 in StatefulSet alertmanager-flaskapi-kube-prometheus-s-alertmanager successful"
I0401 16:22:10.335250       1 event.go:291] "Event occurred" object="default/prometheus-flaskapi-kube-prometheus-s-prometheus" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod prometheus-flaskapi-kube-prometheus-s-prometheus-0 in StatefulSet prometheus-flaskapi-kube-prometheus-s-prometheus successful"
I0401 16:22:14.901287       1 event.go:291] "Event occurred" object="default/flaskapi-ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0401 16:22:16.180911       1 event.go:291] "Event occurred" object="default/flaskapi-kube-prometheus-s-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-kube-prometheus-s-admission-patch-kglv8"
I0401 16:22:23.096697       1 event.go:291] "Event occurred" object="default/flaskapi-kube-prometheus-s-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0401 16:22:43.441760       1 event.go:291] "Event occurred" object="default/flaskapi-redis-slave" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod flaskapi-redis-slave-1 in StatefulSet flaskapi-redis-slave successful"
I0401 16:25:08.011977       1 stateful_set.go:419] StatefulSet has been deleted default/flaskapi-redis-master
I0401 16:25:08.019352       1 stateful_set.go:419] StatefulSet has been deleted default/flaskapi-redis-slave
I0401 16:25:10.803114       1 stateful_set.go:419] StatefulSet has been deleted default/alertmanager-flaskapi-kube-prometheus-s-alertmanager
I0401 16:25:11.750713       1 stateful_set.go:419] StatefulSet has been deleted default/prometheus-flaskapi-kube-prometheus-s-prometheus
I0401 16:28:07.719756       1 event.go:291] "Event occurred" object="default/flaskapi-ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-ingress-nginx-admission-create-jjk5d"
I0401 16:28:08.934468       1 event.go:291] "Event occurred" object="default/flaskapi-ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0401 16:28:09.897359       1 event.go:291] "Event occurred" object="default/flaskapi-kube-prometheus-s-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-kube-prometheus-s-admission-create-n8c4h"
I0401 16:28:10.967856       1 event.go:291] "Event occurred" object="default/flaskapi-kube-prometheus-s-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0401 16:28:11.775787       1 event.go:291] "Event occurred" object="default/flaskapi-kube-state-metrics" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set flaskapi-kube-state-metrics-67c7f5f854 to 1"
I0401 16:28:11.778378       1 event.go:291] "Event occurred" object="default/flaskapi-prometheus-node-exporter" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-prometheus-node-exporter-fmktv"
I0401 16:28:11.815273       1 event.go:291] "Event occurred" object="default/flaskapi-kube-state-metrics-67c7f5f854" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-kube-state-metrics-67c7f5f854-9628h"
I0401 16:28:11.816409       1 event.go:291] "Event occurred" object="default/flaskapi-kube-prometheus-s-operator" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set flaskapi-kube-prometheus-s-operator-69f4bcf865 to 1"
I0401 16:28:11.817011       1 event.go:291] "Event occurred" object="default/flask-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set flask-deployment-775fcf8ff to 3"
I0401 16:28:11.875579       1 event.go:291] "Event occurred" object="default/flaskapi-grafana" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set flaskapi-grafana-6cb58f6656 to 1"
I0401 16:28:11.877083       1 event.go:291] "Event occurred" object="default/flask-deployment-775fcf8ff" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flask-deployment-775fcf8ff-qpnnj"
I0401 16:28:11.877698       1 event.go:291] "Event occurred" object="default/flaskapi-kube-prometheus-s-operator-69f4bcf865" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-kube-prometheus-s-operator-69f4bcf865-wf9rs"
I0401 16:28:11.880669       1 event.go:291] "Event occurred" object="default/flaskapi-ingress-nginx-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set flaskapi-ingress-nginx-controller-ccfc7b6df to 1"
I0401 16:28:11.923398       1 event.go:291] "Event occurred" object="default/flask-deployment-775fcf8ff" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flask-deployment-775fcf8ff-8bg9g"
I0401 16:28:11.923979       1 event.go:291] "Event occurred" object="default/flask-deployment-775fcf8ff" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flask-deployment-775fcf8ff-66vdr"
I0401 16:28:11.930311       1 event.go:291] "Event occurred" object="default/flaskapi-ingress-nginx-controller-ccfc7b6df" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-ingress-nginx-controller-ccfc7b6df-5dbp4"
I0401 16:28:11.930387       1 event.go:291] "Event occurred" object="default/flaskapi-grafana-6cb58f6656" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-grafana-6cb58f6656-7vn5q"
I0401 16:28:12.146327       1 event.go:291] "Event occurred" object="default/flaskapi-redis-master" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod flaskapi-redis-master-0 in StatefulSet flaskapi-redis-master successful"
I0401 16:28:12.146420       1 event.go:291] "Event occurred" object="default/flaskapi-redis-slave" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod flaskapi-redis-slave-0 in StatefulSet flaskapi-redis-slave successful"
I0401 16:28:15.162857       1 event.go:291] "Event occurred" object="default/flaskapi-ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-ingress-nginx-admission-patch-n27bl"
I0401 16:28:20.233232       1 event.go:291] "Event occurred" object="default/alertmanager-flaskapi-kube-prometheus-s-alertmanager" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod alertmanager-flaskapi-kube-prometheus-s-alertmanager-0 in StatefulSet alertmanager-flaskapi-kube-prometheus-s-alertmanager successful"
I0401 16:28:20.736218       1 event.go:291] "Event occurred" object="default/prometheus-flaskapi-kube-prometheus-s-prometheus" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod prometheus-flaskapi-kube-prometheus-s-prometheus-0 in StatefulSet prometheus-flaskapi-kube-prometheus-s-prometheus successful"
I0401 16:28:26.279860       1 event.go:291] "Event occurred" object="default/flaskapi-ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0401 16:28:27.569767       1 event.go:291] "Event occurred" object="default/flaskapi-kube-prometheus-s-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: flaskapi-kube-prometheus-s-admission-patch-zz86c"
I0401 16:28:30.881778       1 event.go:291] "Event occurred" object="default/flaskapi-kube-prometheus-s-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0401 16:28:58.986405       1 event.go:291] "Event occurred" object="default/flaskapi-redis-slave" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod flaskapi-redis-slave-1 in StatefulSet flaskapi-redis-slave successful"

kube-scheduler-minikube logs

I0401 16:06:14.773819       1 serving.go:331] Generated self-signed cert in-memory
W0401 16:06:19.051113       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0401 16:06:19.051415       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0401 16:06:19.051559       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0401 16:06:19.054120       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0401 16:06:19.113331       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0401 16:06:19.114411       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0401 16:06:19.117452       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0401 16:06:19.117690       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0401 16:06:19.126019       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0401 16:06:19.126637       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0401 16:06:19.126923       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0401 16:06:19.127635       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0401 16:06:19.127843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0401 16:06:19.128110       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0401 16:06:19.128318       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0401 16:06:19.128521       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0401 16:06:19.128734       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0401 16:06:19.130489       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0401 16:06:19.133550       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0401 16:06:19.133672       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0401 16:06:19.969695       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0401 16:06:19.990220       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0401 16:06:20.039662       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0401 16:06:20.135330       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0401 16:06:20.217099       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0401 16:06:20.343999       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0401 16:06:20.377376       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
I0401 16:06:22.017792       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 

Questions Are these scrape targets dependent upon a successful prometheus-node-exporter.hostRootFsMount?

How do I enable the etcd, scheduler, controller-manager and kube-proxy (docker-desktop) scrape targets with a helm chart installation of kube-prometheus-stack on a macOS Kubernetes cluster running on Docker Desktop or minikube?

Can such a fix be made to work out of the box when installing via the helm chart?

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

aldenjenkins commented 3 years ago

I am having the same issue as well

gregseb commented 3 years ago

Me too, but on Ubuntu 20.04; the following targets are unavailable with the default settings. I haven't looked at any of them other than the controller manager yet. I'm trying to find out why it's not working, and I'd rather not do something like change the bind address on the pod.

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

dcs3spp commented 3 years ago

Still experiencing this issue....

mgj commented 3 years ago

Also experiencing this

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

dcs3spp commented 3 years ago

Still experiencing this issue...

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

dcs3spp commented 3 years ago

Still experiencing

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

dcs3spp commented 3 years ago

Still experiencing

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

dcs3spp commented 3 years ago

Still experiencing

cortex3 commented 2 years ago

I had the same error. If your endpoint is not accessible via http but accessible via https try this: https://github.com/prometheus-community/helm-charts/issues/204#issuecomment-765155883 that fixed it for me.
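
For anyone reading later: that linked workaround boils down to scraping the components over HTTPS on their secure ports instead of plain HTTP. A hedged values.yaml sketch of what that might look like for the controller manager and scheduler (key names follow the chart's kubeControllerManager/kubeScheduler blocks; verify them against your chart version before use):

    # sketch only: point the chart at the secure metrics ports and scrape over HTTPS
    kubeControllerManager:
      service:
        port: 10257          # secure port used by recent Kubernetes releases
        targetPort: 10257
      serviceMonitor:
        https: true
        insecureSkipVerify: true
    kubeScheduler:
      service:
        port: 10259
        targetPort: 10259
      serviceMonitor:
        https: true
        insecureSkipVerify: true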

stale[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

stale[bot] commented 2 years ago

This issue is being automatically closed due to inactivity.

kingdonb commented 2 years ago

I resolved this issue for myself today, and coincidentally saw that it was closed by the bot yesterday, so I decided to post. My cluster is a standard kubeadm deployment on two Docker machines (Linux hosts of varying distributions).

The trick is to make sure that the four services reporting TargetDown are actually bound and listening on addresses where Prometheus can collect their metrics. As Kubernetes has evolved, the defaults have changed and these components are no longer configured for metrics collection out of the box, if they ever were. This is certainly not the default with kubeadm, and from the research I did, etcd for example requires changes to manifests on the "managed" side of the cluster, which may not be obvious or even possible to apply on managed k8s offerings that don't expose how etcd or the controller manager runs.

That said, mine was a kubeadm cluster with default configuration, and I got these four TargetDown alerts with the kube-prometheus-stack chart's default configuration too.

I went onto the cluster masters (OK, only one master, it's a home lab cluster) and edited /etc/kubernetes/manifests/etcd.yaml, which is where kubeadm deploys its static etcd pod from, and found this block:

    - --listen-client-urls=https://127.0.0.1:2379,https://10.17.12.146:2379
    - --listen-metrics-urls=http://127.0.0.1:2381

Looks like I should change it to this:

    - --listen-client-urls=https://127.0.0.1:2379,https://10.17.12.146:2379
    - --listen-metrics-urls=http://127.0.0.1:2381,http://10.17.12.146:2381

(I'm revealing internal details of my network here; the Kubernetes master node is 10.17.12.146.)

After making this change, I checked the service that was installed by kube-prometheus-stack operator:

kubectl -n kube-system edit svc kube-prometheus-stack-kube-etcd
spec:
  ports:
  - name: http-metrics
    port: 2379
    protocol: TCP
    targetPort: 2379

Knowing full well that metrics don't run on 2379, since we just set the metrics listener to port 2381, I thought about changing this to match. But I realized this service was created by the kube-prometheus-stack operator, so I probably needed to reconfigure something in my Helm chart instead. I added this:

commit 32e56b0d8a455db587f3f7b3e867f2bee4b7198c (HEAD -> monitoring, origin/monitoring)
Author: Kingdon Barrett <kingdon@weave.works>
Date:   Mon Jan 17 13:50:52 2022 -0500

    monitor kubeEtcd port where metrics are listening

diff --git a/manifests/monitoring/kube-prometheus-stack/release.yaml b/manifests/monitoring/kube-prometheus-stack/release.yaml
index b17652a..62b98a5 100644
--- a/manifests/monitoring/kube-prometheus-stack/release.yaml
+++ b/manifests/monitoring/kube-prometheus-stack/release.yaml
@@ -115,6 +115,10 @@ spec:
         podMonitorSelector:
           matchLabels:
             app.kubernetes.io/part-of: flux
+    kubeEtcd:
+      service:
+        port: 2381
+        targetPort: 2381

   postRenderers:
     - kustomize:

Now TargetDown for my etcd service stops alerting. I know I'm on the right track 🎉
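
For anyone not managing the release through a Flux HelmRelease, the same kubeEtcd override can be passed to Helm directly. A sketch only; the release name and the prometheus-community repo alias are placeholders for your own install:

    # hypothetical plain-Helm equivalent of the HelmRelease change above
    helm upgrade <release-name> prometheus-community/kube-prometheus-stack \
      --reuse-values \
      --set kubeEtcd.service.port=2381 \
      --set kubeEtcd.service.targetPort=2381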

Now I realize my memory is bad and I'm telling the story out of order... editing the kube-proxy ConfigMap in the kube-system namespace, I set metricsBindAddress from '' to 0.0.0.0, and another TargetDown alert was laid to rest.
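
In case it saves someone a search: that setting lives in the kube-proxy ConfigMap's config.conf key (a KubeProxyConfiguration). A trimmed sketch of the relevant fragment, everything else omitted:

    # kubectl -n kube-system edit configmap kube-proxy
    # inside the config.conf data key:
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    metricsBindAddress: 0.0.0.0     # was "", which defaults to 127.0.0.1:10249
    # the running kube-proxy pods only pick this up after a restart, e.g.:
    # kubectl -n kube-system rollout restart daemonset kube-proxy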

The remaining services are all deployed from static pods in /etc/kubernetes/manifests/

I edited kube-controller-manager.yaml and kube-scheduler.yaml to reset both of their bind addresses from 127.0.0.1 to 0.0.0.0:

    - --bind-address=0.0.0.0
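
For context, a trimmed sketch of where that flag sits in the static pod manifest (kube-controller-manager.yaml is analogous); only the relevant lines are shown:

    # /etc/kubernetes/manifests/kube-scheduler.yaml (excerpt)
    spec:
      containers:
      - name: kube-scheduler
        command:
        - kube-scheduler
        - --bind-address=0.0.0.0    # was 127.0.0.1
        # ...remaining flags unchanged...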

I read an issue thread where the kubeadm maintainers stated these were not reasonable defaults, since the kubeadm users who care about these metrics are not in the majority, and you should not bind ports that can potentially be reached by outsiders unless you really need them.

(so_anyway_i_started_blasting.jpg)

So I set the bind address to 0.0.0.0, since setting my internal address from earlier didn't seem to have any effect, and restarted the kubelet with sudo systemctl restart kubelet. Finally all four of my TargetDown alerts are winding down, and the default kube-prometheus-stack can function normally without any silences, other than the default Watchdog silence 👍 💯

Hope this story helps someone else. I'm afraid I don't know of a good way to set these configurations at kubeadm init time, and I'll have to update my knowledge of kubeadm the next time I tear down and rebuild my cluster (these configurations should be possible without making adjustments at runtime, after the cluster has already been provisioned by kubeadm).

xsoheilalizadeh commented 2 years ago

Could anyone solve this issue on Docker Desktop for Mac?

SpoddyCoder commented 8 months ago

Could anyone solve this issue on Docker Desktop for Mac?

I know the issue is closed & stale, but the answers contained in here are a bit fractured and not easy to piece together, so for those finding this later... working solution as of Feb 2024...

https://gist.github.com/SpoddyCoder/ff0ea39260b0d4acdb8b482532d4c1af

HemjalCF commented 1 month ago

(Quoted @kingdonb's solution comment above in full.)

Based on my limited knowledge, this is the correct way to solve this problem. Thanks.